Column schema (types and min/max value lengths or list lengths):

column          type           min    max
id              stringlengths  9      10
submitter       stringlengths  1      64
authors         stringlengths  4      20.7k
title           stringlengths  4      246
comments        stringlengths  1      523
journal-ref     stringlengths  4      404
doi             stringlengths  11     153
report-no       stringlengths  2      254
categories      stringlengths  5      98
license         stringclasses  9 values
orig_abstract   stringlengths  14     3.35k
versions        listlengths    1      60
update_date     stringlengths  10     10
authors_parsed  listlengths    1      1.35k
abstract        stringlengths  11     3.34k
1506.05703
Rémi Lebret
Rémi Lebret and Ronan Collobert
"The Sum of Its Parts": Joint Learning of Word and Phrase Representations with Autoencoders
Deep Learning Workshop, ICML 2015
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challenge. We introduce a novel model that jointly learns word vector representations and their summation. Word representations are learnt from word co-occurrence statistics. To embed sequences of words (i.e. phrases) with different sizes into a common semantic space, we propose to average word vector representations. In contrast with previous methods which reported a posteriori some compositionality aspects by simple summation, we simultaneously train words to sum, while keeping the maximum information from the original vectors. We evaluate the quality of the word representations on several classical word evaluation tasks, and we introduce a novel task to evaluate the quality of the phrase representations. While our distributed representations compete with other methods of learning word representations on word evaluations, we show that they give better performance on the phrase evaluation. Such representations of phrases could be interesting for many tasks in natural language processing.
[ { "created": "Thu, 18 Jun 2015 14:46:44 GMT", "version": "v1" } ]
2015-06-19
[ [ "Lebret", "Rémi", "" ], [ "Collobert", "Ronan", "" ] ]
Recently, there has been a lot of effort to represent words in continuous vector spaces. Those representations have been shown to capture both semantic and syntactic information about words. However, distributed representations of phrases remain a challenge. We introduce a novel model that jointly learns word vector representations and their summation. Word representations are learnt from word co-occurrence statistics. To embed sequences of words (i.e. phrases) with different sizes into a common semantic space, we propose to average word vector representations. In contrast with previous methods which reported a posteriori some compositionality aspects by simple summation, we simultaneously train words to sum, while keeping the maximum information from the original vectors. We evaluate the quality of the word representations on several classical word evaluation tasks, and we introduce a novel task to evaluate the quality of the phrase representations. While our distributed representations compete with other methods of learning word representations on word evaluations, we show that they give better performance on the phrase evaluation. Such representations of phrases could be interesting for many tasks in natural language processing.
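The composition scheme this abstract describes (embedding a phrase of any length as the average of its word vectors) can be sketched in a few lines; the toy two-word vocabulary and 3-dimensional vectors below are invented for illustration, while a real model would learn them from co-occurrence statistics.

```python
# Toy 3-dimensional word vectors; a real model learns these from
# word co-occurrence statistics, as the abstract describes.
word_vectors = {
    "deep":     [0.25, 0.50, 0.75],
    "learning": [0.75, 0.50, 0.25],
}

def phrase_vector(words, vectors):
    """Embed a phrase of any length as the average of its word vectors."""
    dim = len(next(iter(vectors.values())))
    return [sum(vectors[w][i] for w in words) / len(words) for i in range(dim)]

print(phrase_vector(["deep", "learning"], word_vectors))  # [0.5, 0.5, 0.5]
```

Because the average has the same dimensionality regardless of phrase length, phrases of different sizes land in the same semantic space as single words.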
2402.00775
Andreas Christou
Andreas Christou, Antonio J. del-Ama, Juan C. Moreno and Sethu Vijayakumar
Adaptive Control for Triadic Human-Robot-FES Collaboration in Gait Rehabilitation: A Pilot Study
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hybridisation of robot-assisted gait training and functional electrical stimulation (FES) can provide numerous physiological benefits to neurological patients. However, the design of an effective hybrid controller poses significant challenges. In this over-actuated system, it is extremely difficult to find the right balance between robotic assistance and FES that will provide personalised assistance, prevent muscle fatigue and encourage the patient's active participation in order to accelerate recovery. In this paper, we present an adaptive hybrid robot-FES controller to do this and enable the triadic collaboration between the patient, the robot and FES. A patient-driven controller is designed where the voluntary movement of the patient is prioritised and assistance is provided using FES and the robot in a hierarchical order depending on the patient's performance and their muscles' fitness. The performance of this hybrid adaptive controller is tested in simulation and on one healthy subject. Our results indicate an increase in tracking performance with lower overall assistance, and less muscle fatigue when the hybrid adaptive controller is used, compared to its non-adaptive equivalent. This suggests that our hybrid adaptive controller may be able to adapt to the behaviour of the user to provide assistance as needed and prevent the early termination of physical therapy due to muscle fatigue.
[ { "created": "Thu, 1 Feb 2024 17:04:41 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 11:05:39 GMT", "version": "v2" } ]
2024-03-11
[ [ "Christou", "Andreas", "" ], [ "del-Ama", "Antonio J.", "" ], [ "Moreno", "Juan C.", "" ], [ "Vijayakumar", "Sethu", "" ] ]
The hybridisation of robot-assisted gait training and functional electrical stimulation (FES) can provide numerous physiological benefits to neurological patients. However, the design of an effective hybrid controller poses significant challenges. In this over-actuated system, it is extremely difficult to find the right balance between robotic assistance and FES that will provide personalised assistance, prevent muscle fatigue and encourage the patient's active participation in order to accelerate recovery. In this paper, we present an adaptive hybrid robot-FES controller to do this and enable the triadic collaboration between the patient, the robot and FES. A patient-driven controller is designed where the voluntary movement of the patient is prioritised and assistance is provided using FES and the robot in a hierarchical order depending on the patient's performance and their muscles' fitness. The performance of this hybrid adaptive controller is tested in simulation and on one healthy subject. Our results indicate an increase in tracking performance with lower overall assistance, and less muscle fatigue when the hybrid adaptive controller is used, compared to its non-adaptive equivalent. This suggests that our hybrid adaptive controller may be able to adapt to the behaviour of the user to provide assistance as needed and prevent the early termination of physical therapy due to muscle fatigue.
cs/0608104
Uday Khedker
Uday Khedker, Amitabha Sanyal, and Amey Karkare
Heap Reference Analysis Using Access Graphs
Accepted for printing by ACM TOPLAS. This version incorporates referees' comments
ACM TOPLAS, 30(1), 2007
10.1145/1290520.1290521
null
cs.PL cs.SE
null
Despite significant progress in the theory and practice of program analysis, analysing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood while that of heap data seems arbitrary and is unbounded. We devise bounded representations which summarize properties of the heap data. This summarization is based on the structure of the program which manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations to manipulate them make it possible to compute them through data flow analysis. An important application which benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact which has been confirmed by several empirical studies. We propose the first-ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we discover four properties of heap data: liveness, aliasing, availability, and anticipability. Together, they cover all combinations of directions of analysis (i.e. forward and backward) and confluence of information (i.e. union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ programs.
[ { "created": "Mon, 28 Aug 2006 11:15:00 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2006 06:37:51 GMT", "version": "v2" }, { "created": "Sat, 1 Sep 2007 14:52:17 GMT", "version": "v3" } ]
2013-04-25
[ [ "Khedker", "Uday", "" ], [ "Sanyal", "Amitabha", "" ], [ "Karkare", "Amey", "" ] ]
Despite significant progress in the theory and practice of program analysis, analysing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood while that of heap data seems arbitrary and is unbounded. We devise bounded representations which summarize properties of the heap data. This summarization is based on the structure of the program which manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations to manipulate them make it possible to compute them through data flow analysis. An important application which benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact which has been confirmed by several empirical studies. We propose the first-ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we discover four properties of heap data: liveness, aliasing, availability, and anticipability. Together, they cover all combinations of directions of analysis (i.e. forward and backward) and confluence of information (i.e. union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ programs.
1808.07535
Seunghoon Hong
Seunghoon Hong and Xinchen Yan and Thomas Huang and Honglak Lee
Learning Hierarchical Semantic Image Manipulation through Structured Representations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding, reasoning, and manipulating semantic concepts of images have been a fundamental research problem for decades. Previous work mainly focused on direct manipulation on the natural image manifold through color strokes, key-points, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ a structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our structure generator first creates a pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such a framework allows a user to manipulate images at the object level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filling models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation.
[ { "created": "Wed, 22 Aug 2018 19:33:31 GMT", "version": "v1" }, { "created": "Tue, 28 Aug 2018 00:33:27 GMT", "version": "v2" } ]
2018-08-29
[ [ "Hong", "Seunghoon", "" ], [ "Yan", "Xinchen", "" ], [ "Huang", "Thomas", "" ], [ "Lee", "Honglak", "" ] ]
Understanding, reasoning, and manipulating semantic concepts of images have been a fundamental research problem for decades. Previous work mainly focused on direct manipulation on the natural image manifold through color strokes, key-points, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ a structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our structure generator first creates a pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such a framework allows a user to manipulate images at the object level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filling models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation.
2012.15070
Rongzhou Bao
Rongzhou Bao, Jiayi Wang, Zhuosheng Zhang, Hai Zhao
Enhancing Pre-trained Language Model with Lexical Simplification
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For both human readers and pre-trained language models (PrLMs), lexical diversity may lead to confusion and inaccuracy when understanding the underlying semantic meanings of given sentences. By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences. In this paper, we leverage LS and propose a novel approach which can effectively improve the performance of PrLMs in text classification. A rule-based simplification process is applied to a given sentence. PrLMs are encouraged to predict the real label of the given sentence with auxiliary inputs from the simplified version. Using strong PrLMs (BERT and ELECTRA) as baselines, our approach can still further improve the performance in various text classification tasks.
[ { "created": "Wed, 30 Dec 2020 07:49:00 GMT", "version": "v1" } ]
2021-01-01
[ [ "Bao", "Rongzhou", "" ], [ "Wang", "Jiayi", "" ], [ "Zhang", "Zhuosheng", "" ], [ "Zhao", "Hai", "" ] ]
For both human readers and pre-trained language models (PrLMs), lexical diversity may lead to confusion and inaccuracy when understanding the underlying semantic meanings of given sentences. By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences. In this paper, we leverage LS and propose a novel approach which can effectively improve the performance of PrLMs in text classification. A rule-based simplification process is applied to a given sentence. PrLMs are encouraged to predict the real label of the given sentence with auxiliary inputs from the simplified version. Using strong PrLMs (BERT and ELECTRA) as baselines, our approach can still further improve the performance in various text classification tasks.
2311.12764
Renu Sharma
Renu Sharma, Redwan Sony, Arun Ross
Investigating Weight-Perturbed Deep Neural Networks With Application in Iris Presentation Attack Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep neural networks (DNNs) exhibit superior performance in various machine learning tasks, e.g., image classification, speech recognition, biometric recognition, object detection, etc. However, it is essential to analyze their sensitivity to parameter perturbations before deploying them in real-world applications. In this work, we assess the sensitivity of DNNs against perturbations to their weight and bias parameters. The sensitivity analysis involves three DNN architectures (VGG, ResNet, and DenseNet), three types of parameter perturbations (Gaussian noise, weight zeroing, and weight scaling), and two settings (entire network and layer-wise). We perform experiments in the context of iris presentation attack detection and evaluate on two publicly available datasets: LivDet-Iris-2017 and LivDet-Iris-2020. Based on the sensitivity analysis, we propose improved models simply by perturbing parameters of the network without undergoing training. We further combine these perturbed models at the score-level and at the parameter-level to improve the performance over the original model. The ensemble at the parameter-level shows an average improvement of 43.58% on the LivDet-Iris-2017 dataset and 9.25% on the LivDet-Iris-2020 dataset. The source code is available at https://github.com/redwankarimsony/WeightPerturbation-MSU.
[ { "created": "Tue, 21 Nov 2023 18:18:50 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2023 18:52:11 GMT", "version": "v2" } ]
2023-11-23
[ [ "Sharma", "Renu", "" ], [ "Sony", "Redwan", "" ], [ "Ross", "Arun", "" ] ]
Deep neural networks (DNNs) exhibit superior performance in various machine learning tasks, e.g., image classification, speech recognition, biometric recognition, object detection, etc. However, it is essential to analyze their sensitivity to parameter perturbations before deploying them in real-world applications. In this work, we assess the sensitivity of DNNs against perturbations to their weight and bias parameters. The sensitivity analysis involves three DNN architectures (VGG, ResNet, and DenseNet), three types of parameter perturbations (Gaussian noise, weight zeroing, and weight scaling), and two settings (entire network and layer-wise). We perform experiments in the context of iris presentation attack detection and evaluate on two publicly available datasets: LivDet-Iris-2017 and LivDet-Iris-2020. Based on the sensitivity analysis, we propose improved models simply by perturbing parameters of the network without undergoing training. We further combine these perturbed models at the score-level and at the parameter-level to improve the performance over the original model. The ensemble at the parameter-level shows an average improvement of 43.58% on the LivDet-Iris-2017 dataset and 9.25% on the LivDet-Iris-2020 dataset. The source code is available at https://github.com/redwankarimsony/WeightPerturbation-MSU.
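The three perturbation types this abstract names (Gaussian noise, weight zeroing, and weight scaling) can each be sketched in a few lines of plain Python; the weight list, noise scale, zeroing fraction, and scaling factor below are invented for illustration, not taken from the paper.

```python
import random

def gaussian_perturb(weights, sigma, rng):
    """Add zero-mean Gaussian noise with standard deviation sigma to every weight."""
    return [w + rng.gauss(0.0, sigma) for w in weights]

def zero_perturb(weights, fraction, rng):
    """Zero out a randomly chosen fraction of the weights."""
    k = int(len(weights) * fraction)
    idx = set(rng.sample(range(len(weights)), k))
    return [0.0 if i in idx else w for i, w in enumerate(weights)]

def scale_perturb(weights, factor):
    """Multiply every weight by a constant factor."""
    return [w * factor for w in weights]

rng = random.Random(0)
weights = [0.5, -1.25, 0.75, 0.125]

print(scale_perturb(weights, 2.0))      # [1.0, -2.5, 1.5, 0.25]
print(zero_perturb(weights, 0.5, rng))  # two of the four weights become 0.0
```

In the layer-wise setting described in the abstract, such a perturbation would be applied to one layer's parameters at a time rather than to the whole network.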
1603.08844
Saeed Manaffam
S. Manaffam and M. K. Talebi and A. K. Jain and A. Behal
Synchronization in Networks of Identical Systems via Pinning: Application to Distributed Secondary Control of Microgrids
11 pages, 9 figures, submitted to Transactions on Control Systems Technology
null
null
null
cs.SY cs.ET cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the need for fast synchronized operation of power microgrids, we analyze the problem of single and multiple pinning in networked systems. We derive lower and upper bounds on the algebraic connectivity of the network with respect to the reference signal. These bounds are utilized to devise a suboptimal algorithm with polynomial complexity to find a suitable set of nodes to pin the network effectively and efficiently. The results are applied to secondary voltage pinning control design for a microgrid in islanded operation mode. Comparisons with existing single and multiple pinning strategies clearly demonstrate the efficacy of the obtained results.
[ { "created": "Tue, 29 Mar 2016 16:56:59 GMT", "version": "v1" }, { "created": "Mon, 8 Aug 2016 20:43:37 GMT", "version": "v2" } ]
2017-09-19
[ [ "Manaffam", "S.", "" ], [ "Talebi", "M. K.", "" ], [ "Jain", "A. K.", "" ], [ "Behal", "A.", "" ] ]
Motivated by the need for fast synchronized operation of power microgrids, we analyze the problem of single and multiple pinning in networked systems. We derive lower and upper bounds on the algebraic connectivity of the network with respect to the reference signal. These bounds are utilized to devise a suboptimal algorithm with polynomial complexity to find a suitable set of nodes to pin the network effectively and efficiently. The results are applied to secondary voltage pinning control design for a microgrid in islanded operation mode. Comparisons with existing single and multiple pinning strategies clearly demonstrate the efficacy of the obtained results.
2306.05069
Masood Feyzbakhsh Rankooh
Masood Feyzbakhsh Rankooh and Tomi Janhunen
Capturing (Optimal) Relaxed Plans with Stable and Supported Models of Logic Programs
Paper presented at the 39th International Conference on Logic Programming (ICLP 2023), 14 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We establish a novel relation between delete-free planning, an important task for the AI Planning community also known as relaxed planning, and logic programming. We show that given a planning problem, all subsets of actions that could be ordered to produce relaxed plans for the problem can be bijectively captured with stable models of a logic program describing the corresponding relaxed planning problem. We also consider the supported model semantics of logic programs, and introduce one causal and one diagnostic encoding of the relaxed planning problem as logic programs, both capturing relaxed plans with their supported models. Our experimental results show that these new encodings can provide major performance gains when computing optimal relaxed plans, with our diagnostic encoding outperforming state-of-the-art approaches to relaxed planning regardless of the given time limit when measured on a wide collection of STRIPS planning benchmarks.
[ { "created": "Thu, 8 Jun 2023 09:34:38 GMT", "version": "v1" } ]
2023-06-09
[ [ "Rankooh", "Masood Feyzbakhsh", "" ], [ "Janhunen", "Tomi", "" ] ]
We establish a novel relation between delete-free planning, an important task for the AI Planning community also known as relaxed planning, and logic programming. We show that given a planning problem, all subsets of actions that could be ordered to produce relaxed plans for the problem can be bijectively captured with stable models of a logic program describing the corresponding relaxed planning problem. We also consider the supported model semantics of logic programs, and introduce one causal and one diagnostic encoding of the relaxed planning problem as logic programs, both capturing relaxed plans with their supported models. Our experimental results show that these new encodings can provide major performance gains when computing optimal relaxed plans, with our diagnostic encoding outperforming state-of-the-art approaches to relaxed planning regardless of the given time limit when measured on a wide collection of STRIPS planning benchmarks.
1511.08066
Andreas Brandstadt
Andreas Brandstadt and Raffaele Mosca
Maximum Weight Independent Sets for ($P_7$,Triangle)-Free Graphs in Polynomial Time
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Maximum Weight Independent Set (MWIS) problem on finite undirected graphs with vertex weights asks for a set of pairwise nonadjacent vertices of maximum weight sum. MWIS is one of the most investigated and most important algorithmic graph problems; it is well known to be NP-complete, and it remains NP-complete even under various strong restrictions such as for triangle-free graphs. Its complexity was an open problem for $P_k$-free graphs, $k \ge 5$. Recently, Lokshtanov, Vatshelle, and Villanger proved that MWIS can be solved in polynomial time for $P_5$-free graphs, and Lokshtanov, Pilipczuk, and van Leeuwen proved that MWIS can be solved in quasi-polynomial time for $P_6$-free graphs. It still remains an open problem whether MWIS can be solved in polynomial time for $P_k$-free graphs, $k \geq 6$ or in quasi-polynomial time for $P_k$-free graphs, $k \geq 7$. Some characterizations of $P_k$-free graphs and some progress are known in the literature but so far did not solve the problem. In this paper, we show that MWIS can be solved in polynomial time for ($P_7$,triangle)-free graphs. This extends the corresponding result for ($P_6$,triangle)-free graphs and may provide some progress in the study of MWIS for $P_7$-free graphs.
[ { "created": "Wed, 25 Nov 2015 14:08:30 GMT", "version": "v1" }, { "created": "Fri, 13 May 2016 14:08:14 GMT", "version": "v2" }, { "created": "Thu, 30 Jun 2016 07:42:23 GMT", "version": "v3" } ]
2016-07-01
[ [ "Brandstadt", "Andreas", "" ], [ "Mosca", "Raffaele", "" ] ]
The Maximum Weight Independent Set (MWIS) problem on finite undirected graphs with vertex weights asks for a set of pairwise nonadjacent vertices of maximum weight sum. MWIS is one of the most investigated and most important algorithmic graph problems; it is well known to be NP-complete, and it remains NP-complete even under various strong restrictions such as for triangle-free graphs. Its complexity was an open problem for $P_k$-free graphs, $k \ge 5$. Recently, Lokshtanov, Vatshelle, and Villanger proved that MWIS can be solved in polynomial time for $P_5$-free graphs, and Lokshtanov, Pilipczuk, and van Leeuwen proved that MWIS can be solved in quasi-polynomial time for $P_6$-free graphs. It still remains an open problem whether MWIS can be solved in polynomial time for $P_k$-free graphs, $k \geq 6$ or in quasi-polynomial time for $P_k$-free graphs, $k \geq 7$. Some characterizations of $P_k$-free graphs and some progress are known in the literature but so far did not solve the problem. In this paper, we show that MWIS can be solved in polynomial time for ($P_7$,triangle)-free graphs. This extends the corresponding result for ($P_6$,triangle)-free graphs and may provide some progress in the study of MWIS for $P_7$-free graphs.
1601.05539
Xiang Wang
Xiang Wang, Fang-Wei Fu
Constructions of Snake-in-the-Box Codes under $\ell_{\infty}$-metric for Rank Modulation
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the rank modulation scheme, Gray codes are very useful in the realization of flash memories. For a Gray code in this scheme, two adjacent codewords are obtained by using one "push-to-the-top" operation. Moreover, snake-in-the-box codes under the $\ell_{\infty}$-metric are Gray codes, which are capable of detecting a single $\ell_{\infty}$-error. In this paper, we give two constructions of $\ell_{\infty}$-snakes. On the one hand, inspired by Yehezkeally and Schwartz's construction, we present a new construction of the $\ell_{\infty}$-snake. The length of this $\ell_{\infty}$-snake is longer than the length of the $\ell_{\infty}$-snake constructed by Yehezkeally and Schwartz. On the other hand, we also give another construction of $\ell_{\infty}$-snakes by using $\mathcal{K}$-snakes and obtain longer $\ell_{\infty}$-snakes than the previously known ones.
[ { "created": "Thu, 21 Jan 2016 08:09:35 GMT", "version": "v1" } ]
2016-01-22
[ [ "Wang", "Xiang", "" ], [ "Fu", "Fang-Wei", "" ] ]
In the rank modulation scheme, Gray codes are very useful in the realization of flash memories. For a Gray code in this scheme, two adjacent codewords are obtained by using one "push-to-the-top" operation. Moreover, snake-in-the-box codes under the $\ell_{\infty}$-metric are Gray codes, which are capable of detecting a single $\ell_{\infty}$-error. In this paper, we give two constructions of $\ell_{\infty}$-snakes. On the one hand, inspired by Yehezkeally and Schwartz's construction, we present a new construction of the $\ell_{\infty}$-snake. The length of this $\ell_{\infty}$-snake is longer than the length of the $\ell_{\infty}$-snake constructed by Yehezkeally and Schwartz. On the other hand, we also give another construction of $\ell_{\infty}$-snakes by using $\mathcal{K}$-snakes and obtain longer $\ell_{\infty}$-snakes than the previously known ones.
2406.10235
Hakim El Massari
Sajida Mhammedi, Hakim El Massari, Noreddine Gherabi, Amnai Mohamed
CF Recommender System Based on Ontology and Nonnegative Matrix Factorization (NMF)
null
Lecture Notes in Networks and Systems, Volume 635 LNNS, Pages 313 - 318, 2023
10.1007/978-3-031-26254-8_44
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Recommender systems are a kind of data filtering that guides the user to interesting and valuable resources within an extensive dataset by providing suggestions of products that are expected to match their preferences. However, due to data overloading, recommender systems struggle to handle large volumes of data reliably and accurately before offering suggestions. The main purpose of this work is to address the recommender system's data sparsity and accuracy problems by using the matrix factorization algorithm of collaborative filtering based on the dimensional reduction method and, more precisely, the Nonnegative Matrix Factorization (NMF) combined with ontology. We tested the method and compared the results to other classic methods. The findings showed that the implemented approach efficiently reduces the sparsity of CF suggestions, improves their accuracy, and gives more relevant items as recommendations.
[ { "created": "Fri, 31 May 2024 14:50:53 GMT", "version": "v1" } ]
2024-06-18
[ [ "Mhammedi", "Sajida", "" ], [ "Massari", "Hakim El", "" ], [ "Gherabi", "Noreddine", "" ], [ "Mohamed", "Amnai", "" ] ]
Recommender systems are a kind of data filtering that guides the user to interesting and valuable resources within an extensive dataset by providing suggestions of products that are expected to match their preferences. However, due to data overloading, recommender systems struggle to handle large volumes of data reliably and accurately before offering suggestions. The main purpose of this work is to address the recommender system's data sparsity and accuracy problems by using the matrix factorization algorithm of collaborative filtering based on the dimensional reduction method and, more precisely, the Nonnegative Matrix Factorization (NMF) combined with ontology. We tested the method and compared the results to other classic methods. The findings showed that the implemented approach efficiently reduces the sparsity of CF suggestions, improves their accuracy, and gives more relevant items as recommendations.
2305.17214
Mingxiao Li
Jingyuan Sun, Mingxiao Li, Zijiao Chen, Yunhao Zhang, Shaonan Wang, Marie-Francine Moens
Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities
Accepted by NeurIPS2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Decoding visual stimuli from neural responses recorded by functional Magnetic Resonance Imaging (fMRI) presents an intriguing intersection between cognitive neuroscience and machine learning, promising advancements in understanding human visual perception and building non-invasive brain-machine interfaces. However, the task is challenging due to the noisy nature of fMRI signals and the intricate pattern of brain visual representations. To mitigate these challenges, we introduce a two-phase fMRI representation learning framework. The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations. The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder. The optimized fMRI feature learner then conditions a latent diffusion model to reconstruct image stimuli from brain activities. Experimental results demonstrate our model's superiority in generating high-resolution and semantically accurate images, substantially exceeding previous state-of-the-art methods by 39.34% in the 50-way-top-1 semantic classification accuracy. Our research invites further exploration of the decoding task's potential and contributes to the development of non-invasive brain-machine interfaces.
[ { "created": "Fri, 26 May 2023 19:16:23 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2023 08:27:19 GMT", "version": "v2" }, { "created": "Sat, 23 Dec 2023 15:04:33 GMT", "version": "v3" }, { "created": "Wed, 27 Dec 2023 09:39:41 GMT", "version": "v4" } ]
2023-12-29
[ [ "Sun", "Jingyuan", "" ], [ "Li", "Mingxiao", "" ], [ "Chen", "Zijiao", "" ], [ "Zhang", "Yunhao", "" ], [ "Wang", "Shaonan", "" ], [ "Moens", "Marie-Francine", "" ] ]
Decoding visual stimuli from neural responses recorded by functional Magnetic Resonance Imaging (fMRI) presents an intriguing intersection between cognitive neuroscience and machine learning, promising advancements in understanding human visual perception and building non-invasive brain-machine interfaces. However, the task is challenging due to the noisy nature of fMRI signals and the intricate pattern of brain visual representations. To mitigate these challenges, we introduce a two-phase fMRI representation learning framework. The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations. The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder. The optimized fMRI feature learner then conditions a latent diffusion model to reconstruct image stimuli from brain activities. Experimental results demonstrate our model's superiority in generating high-resolution and semantically accurate images, substantially exceeding previous state-of-the-art methods by 39.34% in the 50-way-top-1 semantic classification accuracy. Our research invites further exploration of the decoding task's potential and contributes to the development of non-invasive brain-machine interfaces.
2202.12365
Talia Moore
Karthik Urs and Challen Enninful Adu and Elliott J. Rouse and Talia Y. Moore
Alternative Metrics to Select Motors for Quasi-Direct Drive Actuators
14 pages, 4 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Robotic systems for legged locomotion -- including legged robots, exoskeletons, and prosthetics -- require actuators with low inertia and high output torque. Traditionally, motors have been selected for these applications by maximizing the motor gap radius. We present alternative metrics for motor selection that are invariant to transmission ratio. The proposed metrics reward minimizing the motor inertia while maximizing the torque and motor constants without special consideration for gap radius, providing a better balance of properties for legged locomotion applications. We rigorously characterize the T-Motor RI50 and demonstrate the use of the metrics by comparing the RI50 to the widely-used T-Motor U8 as a case study.
[ { "created": "Thu, 24 Feb 2022 21:13:00 GMT", "version": "v1" } ]
2022-02-28
[ [ "Urs", "Karthik", "" ], [ "Adu", "Challen Enninful", "" ], [ "Rouse", "Elliott J.", "" ], [ "Moore", "Talia Y.", "" ] ]
Robotic systems for legged locomotion -- including legged robots, exoskeletons, and prosthetics -- require actuators with low inertia and high output torque. Traditionally, motors have been selected for these applications by maximizing the motor gap radius. We present alternative metrics for motor selection that are invariant to transmission ratio. The proposed metrics reward minimizing the motor inertia while maximizing the torque and motor constants without special consideration for gap radius, providing a better balance of properties for legged locomotion applications. We rigorously characterize the T-Motor RI50 and demonstrate the use of the metrics by comparing the RI50 to the widely-used T-Motor U8 as a case study.
2012.00614
Markus Leippold
Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold
CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims
Accepted for the Tackling Climate Change with Machine Learning Workshop at NeurIPS 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce CLIMATE-FEVER, a new publicly available dataset for verification of climate change-related claims. By providing a dataset for the research community, we aim to facilitate and encourage work on improving algorithms for retrieving evidential support for climate-specific claims, addressing the underlying language understanding challenges, and ultimately help alleviate the impact of misinformation on climate change. We adapt the methodology of FEVER [1], the largest dataset of artificially designed claims, to real-life claims collected from the Internet. While during this process, we could rely on the expertise of renowned climate scientists, it turned out to be no easy task. We discuss the surprising, subtle complexity of modeling real-world climate-related claims within the \textsc{fever} framework, which we believe provides a valuable challenge for general natural language understanding. We hope that our work will mark the beginning of a new exciting long-term joint effort by the climate science and AI community.
[ { "created": "Tue, 1 Dec 2020 16:32:54 GMT", "version": "v1" }, { "created": "Sat, 2 Jan 2021 16:07:48 GMT", "version": "v2" } ]
2021-01-05
[ [ "Diggelmann", "Thomas", "" ], [ "Boyd-Graber", "Jordan", "" ], [ "Bulian", "Jannis", "" ], [ "Ciaramita", "Massimiliano", "" ], [ "Leippold", "Markus", "" ] ]
We introduce CLIMATE-FEVER, a new publicly available dataset for verification of climate change-related claims. By providing a dataset for the research community, we aim to facilitate and encourage work on improving algorithms for retrieving evidential support for climate-specific claims, addressing the underlying language understanding challenges, and ultimately help alleviate the impact of misinformation on climate change. We adapt the methodology of FEVER [1], the largest dataset of artificially designed claims, to real-life claims collected from the Internet. While during this process, we could rely on the expertise of renowned climate scientists, it turned out to be no easy task. We discuss the surprising, subtle complexity of modeling real-world climate-related claims within the \textsc{fever} framework, which we believe provides a valuable challenge for general natural language understanding. We hope that our work will mark the beginning of a new exciting long-term joint effort by the climate science and AI community.
2305.13774
Yan Zhao
Jiangyan Yi, Jianhua Tao, Ruibo Fu, Xinrui Yan, Chenglong Wang, Tao Wang, Chu Yuan Zhang, Xiaohui Zhang, Yan Zhao, Yong Ren, Le Xu, Junzuo Zhou, Hao Gu, Zhengqi Wen, Shan Liang, Zheng Lian, Shuai Nie, Haizhou Li
ADD 2023: the Second Audio Deepfake Detection Challenge
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Audio deepfake detection is an emerging topic in the artificial intelligence community. The second Audio Deepfake Detection Challenge (ADD 2023) aims to spur researchers around the world to build new innovative technologies that can further accelerate and foster research on detecting and analyzing deepfake speech utterances. Different from previous challenges (e.g. ADD 2022), ADD 2023 focuses on surpassing the constraints of binary real/fake classification, and actually localizing the manipulated intervals in partially fake speech as well as pinpointing the source responsible for generating any fake audio. Furthermore, ADD 2023 includes more rounds of evaluation for the fake audio game sub-challenge. The ADD 2023 challenge includes three sub-challenges: audio fake game (FG), manipulation region location (RL) and deepfake algorithm recognition (AR). This paper describes the datasets, evaluation metrics, and protocols. Some findings are also reported in audio deepfake detection tasks.
[ { "created": "Tue, 23 May 2023 07:42:52 GMT", "version": "v1" } ]
2023-05-24
[ [ "Yi", "Jiangyan", "" ], [ "Tao", "Jianhua", "" ], [ "Fu", "Ruibo", "" ], [ "Yan", "Xinrui", "" ], [ "Wang", "Chenglong", "" ], [ "Wang", "Tao", "" ], [ "Zhang", "Chu Yuan", "" ], [ "Zhang", "Xiaohui", "" ], [ "Zhao", "Yan", "" ], [ "Ren", "Yong", "" ], [ "Xu", "Le", "" ], [ "Zhou", "Junzuo", "" ], [ "Gu", "Hao", "" ], [ "Wen", "Zhengqi", "" ], [ "Liang", "Shan", "" ], [ "Lian", "Zheng", "" ], [ "Nie", "Shuai", "" ], [ "Li", "Haizhou", "" ] ]
Audio deepfake detection is an emerging topic in the artificial intelligence community. The second Audio Deepfake Detection Challenge (ADD 2023) aims to spur researchers around the world to build new innovative technologies that can further accelerate and foster research on detecting and analyzing deepfake speech utterances. Different from previous challenges (e.g. ADD 2022), ADD 2023 focuses on surpassing the constraints of binary real/fake classification, and actually localizing the manipulated intervals in partially fake speech as well as pinpointing the source responsible for generating any fake audio. Furthermore, ADD 2023 includes more rounds of evaluation for the fake audio game sub-challenge. The ADD 2023 challenge includes three sub-challenges: audio fake game (FG), manipulation region location (RL) and deepfake algorithm recognition (AR). This paper describes the datasets, evaluation metrics, and protocols. Some findings are also reported in audio deepfake detection tasks.
1807.03655
Tamal Dey
Tamal K. Dey
Computing Height Persistence and Homology Generators in $\mathbb{R}^3$ Efficiently
null
SODA 2019
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently it has been shown that computing the dimension of the first homology group $H_1(K)$ of a simplicial $2$-complex $K$ embedded linearly in $\mathbb{R}^4$ is as hard as computing the rank of a sparse $0-1$ matrix. This puts a major roadblock to computing persistence and a homology basis (generators) for complexes embedded in $\mathbb{R}^4$ and beyond in less than quadratic or even near-quadratic time. But, what about dimension three? It is known that persistence for piecewise linear functions on a complex $K$ with $n$ simplices can be computed in $O(n\log n)$ time and a set of generators of total size $k$ can be computed in $O(n+k)$ time when $K$ is a graph or a surface linearly embedded in $\mathbb{R}^3$. But, the question for general simplicial complexes $K$ linearly embedded in $\mathbb{R}^3$ is not completely settled. No algorithm with a complexity better than that of the matrix multiplication is known for this important case. We show that the persistence for {\em height functions} on such complexes, hence called {\em height persistence}, can be computed in $O(n\log n)$ time. This allows us to compute a basis (generators) of $H_i(K)$, $i=1,2$, in $O(n\log n+k)$ time where $k$ is the size of the output. This improves significantly the current best bound of $O(n^{\omega})$, $\omega$ being the matrix multiplication exponent. We achieve these improved bounds by leveraging recent results on zigzag persistence in computational topology, new observations about Reeb graphs, and some efficient geometric data structures.
[ { "created": "Tue, 10 Jul 2018 14:02:35 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2019 18:47:53 GMT", "version": "v2" } ]
2019-03-18
[ [ "Dey", "Tamal K.", "" ] ]
Recently it has been shown that computing the dimension of the first homology group $H_1(K)$ of a simplicial $2$-complex $K$ embedded linearly in $\mathbb{R}^4$ is as hard as computing the rank of a sparse $0-1$ matrix. This puts a major roadblock to computing persistence and a homology basis (generators) for complexes embedded in $\mathbb{R}^4$ and beyond in less than quadratic or even near-quadratic time. But, what about dimension three? It is known that persistence for piecewise linear functions on a complex $K$ with $n$ simplices can be computed in $O(n\log n)$ time and a set of generators of total size $k$ can be computed in $O(n+k)$ time when $K$ is a graph or a surface linearly embedded in $\mathbb{R}^3$. But, the question for general simplicial complexes $K$ linearly embedded in $\mathbb{R}^3$ is not completely settled. No algorithm with a complexity better than that of the matrix multiplication is known for this important case. We show that the persistence for {\em height functions} on such complexes, hence called {\em height persistence}, can be computed in $O(n\log n)$ time. This allows us to compute a basis (generators) of $H_i(K)$, $i=1,2$, in $O(n\log n+k)$ time where $k$ is the size of the output. This improves significantly the current best bound of $O(n^{\omega})$, $\omega$ being the matrix multiplication exponent. We achieve these improved bounds by leveraging recent results on zigzag persistence in computational topology, new observations about Reeb graphs, and some efficient geometric data structures.
2404.08936
Yang Hu
Yang Hu, Jinxia Zhang, Kaihua Zhang, Yin Yuan
Shifting Spotlight for Co-supervision: A Simple yet Efficient Single-branch Network to See Through Camouflage
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Efficient and accurate camouflaged object detection (COD) poses a challenge in the field of computer vision. Recent approaches have explored the utility of edge information for network co-supervision, achieving notable advancements. However, these approaches introduce an extra branch for complex edge extraction, complicating the model architecture and increasing computational demands. Addressing this issue, our work replicates the effect that an animal's camouflage can be easily revealed under a shifting spotlight, and leverages it for network co-supervision to form a compact yet efficient single-branch network, the Co-Supervised Spotlight Shifting Network (CS$^3$Net). The spotlight shifting strategy allows CS$^3$Net to learn an additional prior within a single-branch framework, obviating the need for a resource-demanding multi-branch design. To leverage the prior of spotlight shifting co-supervision, we propose the Shadow Refinement Module (SRM) and Projection Aware Attention (PAA) for feature refinement and enhancement. To ensure the continuity of multi-scale feature aggregation, we utilize the Extended Neighbor Connection Decoder (ENCD) for generating the final predictions. Empirical evaluations on public datasets confirm that our CS$^3$Net offers an optimal balance between efficiency and performance: it accomplishes a 32.13% reduction in Multiply-Accumulate (MACs) operations compared to leading efficient COD models, while also delivering superior performance.
[ { "created": "Sat, 13 Apr 2024 09:10:33 GMT", "version": "v1" } ]
2024-04-16
[ [ "Hu", "Yang", "" ], [ "Zhang", "Jinxia", "" ], [ "Zhang", "Kaihua", "" ], [ "Yuan", "Yin", "" ] ]
Efficient and accurate camouflaged object detection (COD) poses a challenge in the field of computer vision. Recent approaches have explored the utility of edge information for network co-supervision, achieving notable advancements. However, these approaches introduce an extra branch for complex edge extraction, complicating the model architecture and increasing computational demands. Addressing this issue, our work replicates the effect that an animal's camouflage can be easily revealed under a shifting spotlight, and leverages it for network co-supervision to form a compact yet efficient single-branch network, the Co-Supervised Spotlight Shifting Network (CS$^3$Net). The spotlight shifting strategy allows CS$^3$Net to learn an additional prior within a single-branch framework, obviating the need for a resource-demanding multi-branch design. To leverage the prior of spotlight shifting co-supervision, we propose the Shadow Refinement Module (SRM) and Projection Aware Attention (PAA) for feature refinement and enhancement. To ensure the continuity of multi-scale feature aggregation, we utilize the Extended Neighbor Connection Decoder (ENCD) for generating the final predictions. Empirical evaluations on public datasets confirm that our CS$^3$Net offers an optimal balance between efficiency and performance: it accomplishes a 32.13% reduction in Multiply-Accumulate (MACs) operations compared to leading efficient COD models, while also delivering superior performance.
2012.05825
Alexandru \c{T}ifrea
Alexandru \c{T}ifrea, Eric Stavarache, Fanny Yang
Semi-supervised novelty detection using ensembles with regularized disagreement
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks often predict samples with high confidence even when they come from unseen classes and should instead be flagged for expert evaluation. Current novelty detection algorithms cannot reliably identify such near OOD points unless they have access to labeled data that is similar to these novel samples. In this paper, we develop a new ensemble-based procedure for semi-supervised novelty detection (SSND) that successfully leverages a mixture of unlabeled ID and novel-class samples to achieve good detection performance. In particular, we show how to achieve disagreement only on OOD data using early stopping regularization. While we prove this fact for a simple data distribution, our extensive experiments suggest that it holds true for more complex scenarios: our approach significantly outperforms state-of-the-art SSND methods on standard image data sets (SVHN/CIFAR-10/CIFAR-100) and medical image data sets with only a negligible increase in computation cost.
[ { "created": "Thu, 10 Dec 2020 16:55:13 GMT", "version": "v1" }, { "created": "Mon, 6 Sep 2021 19:03:50 GMT", "version": "v2" }, { "created": "Thu, 10 Mar 2022 21:30:48 GMT", "version": "v3" } ]
2022-03-14
[ [ "Ţifrea", "Alexandru", "" ], [ "Stavarache", "Eric", "" ], [ "Yang", "Fanny", "" ] ]
Deep neural networks often predict samples with high confidence even when they come from unseen classes and should instead be flagged for expert evaluation. Current novelty detection algorithms cannot reliably identify such near OOD points unless they have access to labeled data that is similar to these novel samples. In this paper, we develop a new ensemble-based procedure for semi-supervised novelty detection (SSND) that successfully leverages a mixture of unlabeled ID and novel-class samples to achieve good detection performance. In particular, we show how to achieve disagreement only on OOD data using early stopping regularization. While we prove this fact for a simple data distribution, our extensive experiments suggest that it holds true for more complex scenarios: our approach significantly outperforms state-of-the-art SSND methods on standard image data sets (SVHN/CIFAR-10/CIFAR-100) and medical image data sets with only a negligible increase in computation cost.
cs/0003024
Hans Tompits
James P. Delgrande, Torsten Schaub, Hans Tompits
A Compiler for Ordered Logic Programs
null
null
null
null
cs.AI
null
This paper describes a system, called PLP, for compiling ordered logic programs into standard logic programs under the answer set semantics. In an ordered logic program, rules are named by unique terms, and preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Since the result of the translation is an extended logic program, existing logic programming systems can be used as the underlying reasoning engine. In particular, PLP is conceived as a front-end to the logic programming systems dlv and smodels.
[ { "created": "Wed, 8 Mar 2000 10:15:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Delgrande", "James P.", "" ], [ "Schaub", "Torsten", "" ], [ "Tompits", "Hans", "" ] ]
This paper describes a system, called PLP, for compiling ordered logic programs into standard logic programs under the answer set semantics. In an ordered logic program, rules are named by unique terms, and preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Since the result of the translation is an extended logic program, existing logic programming systems can be used as the underlying reasoning engine. In particular, PLP is conceived as a front-end to the logic programming systems dlv and smodels.
2009.10256
EPTCS
Zhun Yang
Extending Answer Set Programs with Neural Networks
In Proceedings ICLP 2020, arXiv:2009.09158
EPTCS 325, 2020, pp. 313-322
10.4204/EPTCS.325.41
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Recently, several proposals were made to implement the reasoning process in complex neural network architectures. While these works aim at extending neural networks with the capability of reasoning, a natural question that we consider is: can we extend answer set programs with neural networks to allow complex and high-level reasoning on neural network outputs? As a preliminary result, we propose NeurASP -- a simple extension of answer set programs by embracing neural networks where neural network outputs are treated as probability distributions over atomic facts in answer set programs. We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help to train a neural network better by giving restrictions through logic rules. However, training with NeurASP would take much more time than pure neural network training due to the internal use of a symbolic reasoning engine. For future work, we plan to investigate the potential ways to solve the scalability issue of NeurASP. One potential way is to embed logic programs directly in neural networks. On this route, we plan to first design a SAT solver using neural networks, then extend such a solver to allow logic programs.
[ { "created": "Tue, 22 Sep 2020 00:52:30 GMT", "version": "v1" } ]
2020-09-23
[ [ "Yang", "Zhun", "" ] ]
The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Recently, several proposals were made to implement the reasoning process in complex neural network architectures. While these works aim at extending neural networks with the capability of reasoning, a natural question that we consider is: can we extend answer set programs with neural networks to allow complex and high-level reasoning on neural network outputs? As a preliminary result, we propose NeurASP -- a simple extension of answer set programs by embracing neural networks where neural network outputs are treated as probability distributions over atomic facts in answer set programs. We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help to train a neural network better by giving restrictions through logic rules. However, training with NeurASP would take much more time than pure neural network training due to the internal use of a symbolic reasoning engine. For future work, we plan to investigate the potential ways to solve the scalability issue of NeurASP. One potential way is to embed logic programs directly in neural networks. On this route, we plan to first design a SAT solver using neural networks, then extend such a solver to allow logic programs.
1102.2787
Anas Chaaban
Anas Chaaban, Aydin Sezgin, and Salman Avestimehr
On the Sum Capacity of the Y-Channel
12 pages, 8 figures, submitted to ISIT 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A network where three users communicate with each other via a relay is considered. Users do not receive other users' signals via a direct link, and thus the relay is essential for their communication. Each user is assumed to have an individual message to be delivered to each other user. Thus, each user wants to send two messages and to decode two messages. In general, the transmit signals of different nodes can be dependent since they can depend on previously received symbols. We call this case the general case. The sum-capacity is studied, and upper bounds and lower bounds are given. If all nodes have the same power, the sum-capacity is characterized to within a gap of 5/2 bits or a factor of 3 for all values of channel coefficients. This gap is also shown to approach 3/2 bits as the transmit power increases. Moreover, for the symmetric case with equal channel coefficients, the gap is shown to be less than 1 bit. The restricted case is also considered where the transmit signal does not depend on previously received symbols. In this case, the sum-capacity is characterized to within a gap of 2 bits or a factor of 3 for all values of channel coefficients, and approaches 1 bit as the transmit power increases.
[ { "created": "Mon, 14 Feb 2011 14:57:18 GMT", "version": "v1" } ]
2011-02-15
[ [ "Chaaban", "Anas", "" ], [ "Sezgin", "Aydin", "" ], [ "Avestimehr", "Salman", "" ] ]
A network where three users communicate with each other via a relay is considered. Users do not receive other users' signals via a direct link, and thus the relay is essential for their communication. Each user is assumed to have an individual message to be delivered to each other user. Thus, each user wants to send two messages and to decode two messages. In general, the transmit signals of different nodes can be dependent since they can depend on previously received symbols. We call this case the general case. The sum-capacity is studied, and upper bounds and lower bounds are given. If all nodes have the same power, the sum-capacity is characterized to within a gap of 5/2 bits or a factor of 3 for all values of channel coefficients. This gap is also shown to approach 3/2 bits as the transmit power increases. Moreover, for the symmetric case with equal channel coefficients, the gap is shown to be less than 1 bit. The restricted case is also considered where the transmit signal does not depend on previously received symbols. In this case, the sum-capacity is characterized to within a gap of 2 bits or a factor of 3 for all values of channel coefficients, and approaches 1 bit as the transmit power increases.
1909.12977
Sijie Zhu
Sijie Zhu, Taojiannan Yang, Chen Chen
Visual Explanation for Deep Metric Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This work explores the visual explanation for deep metric learning and its applications. As an important problem for learning representation, metric learning has attracted much attention recently, while the interpretation of such models is not as well studied as that of classification models. To this end, we propose an intuitive idea to show which regions contribute the most to the overall similarity of two input images by decomposing the final activation. Instead of only providing the overall activation map of each image, we propose to generate point-to-point activation intensity between two images so that the relationship between different regions is uncovered. We show that the proposed framework can be directly deployed to a large range of metric learning applications and provides valuable information for understanding the model. Furthermore, our experiments show its effectiveness on two potential applications, i.e. cross-view pattern discovery and interactive retrieval. The source code is available at \url{https://github.com/Jeff-Zilence/Explain_Metric_Learning}.
[ { "created": "Fri, 27 Sep 2019 22:30:58 GMT", "version": "v1" }, { "created": "Sat, 11 Jan 2020 04:10:53 GMT", "version": "v2" }, { "created": "Thu, 30 Jul 2020 21:16:50 GMT", "version": "v3" }, { "created": "Sat, 28 Aug 2021 21:11:03 GMT", "version": "v4" } ]
2021-08-31
[ [ "Zhu", "Sijie", "" ], [ "Yang", "Taojiannan", "" ], [ "Chen", "Chen", "" ] ]
This work explores the visual explanation for deep metric learning and its applications. As an important problem for learning representation, metric learning has attracted much attention recently, while the interpretation of such models is not as well studied as that of classification models. To this end, we propose an intuitive idea to show which regions contribute the most to the overall similarity of two input images by decomposing the final activation. Instead of only providing the overall activation map of each image, we propose to generate point-to-point activation intensity between two images so that the relationship between different regions is uncovered. We show that the proposed framework can be directly deployed to a large range of metric learning applications and provides valuable information for understanding the model. Furthermore, our experiments show its effectiveness on two potential applications, i.e. cross-view pattern discovery and interactive retrieval. The source code is available at \url{https://github.com/Jeff-Zilence/Explain_Metric_Learning}.
1504.04975
Sudarsan Vasista Srinivasan Ranganathan
Sudarsan V. S. Ranganathan, Dariush Divsalar, and Richard D. Wesel
On the Girth of (3,L) Quasi-Cyclic LDPC Codes based on Complete Protographs
6 pages, 2 figures, 5-page version to appear in the Proceedings of 2015 IEEE International Symposium on Information Theory. Update 1 - 05/29/2015 - Minor changes and added a reference
null
10.1109/ISIT.2015.7282491
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of constructing $(3,L)$ quasi-cyclic low-density parity-check (LDPC) codes from complete protographs. A complete protograph is a small bipartite graph with two disjoint vertex sets such that every vertex in the variable-node set is connected to every vertex in the check-node set by a unique edge. This paper analyzes the required lifting factor for achieving girths of six or eight in the resulting quasi-cyclic codes with constraints on lifting. The required lifting factors provide lower bounds on the block-length of such codes.
[ { "created": "Mon, 20 Apr 2015 08:58:06 GMT", "version": "v1" }, { "created": "Sat, 30 May 2015 00:32:55 GMT", "version": "v2" } ]
2016-11-17
[ [ "Ranganathan", "Sudarsan V. S.", "" ], [ "Divsalar", "Dariush", "" ], [ "Wesel", "Richard D.", "" ] ]
We consider the problem of constructing $(3,L)$ quasi-cyclic low-density parity-check (LDPC) codes from complete protographs. A complete protograph is a small bipartite graph with two disjoint vertex sets such that every vertex in the variable-node set is connected to every vertex in the check-node set by a unique edge. This paper analyzes the required lifting factor for achieving girths of six or eight in the resulting quasi-cyclic codes with constraints on lifting. The required lifting factors provide lower bounds on the block-length of such codes.
1609.00775
Abdul Latif Sarker
Md. Abdul Latif Sarker
An Error Covariance Splitting Technique for Multi-User MIMO Interference Environment
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates an error covariance matrix splitting technique for the multiuser multiple-input multiple-output (MIMO) interference downlink channel. Most of the related work has thus far considered the traditional error covariance matrix, which has not been well shaped for maximizing the system capacity. Thus, we split and propose a new iterative error covariance matrix to mitigate the system error and maximize the system capacity in this paper. Numerical results illustrate that our proposed method is strictly better than the traditional method.
[ { "created": "Sat, 3 Sep 2016 00:28:36 GMT", "version": "v1" } ]
2016-09-06
[ [ "Sarker", "Md. Abdul Latif", "" ] ]
This paper investigates an error covariance matrix splitting technique for the multiuser multiple-input multiple-output (MIMO) interference downlink channel. Most of the related work has thus far considered the traditional error covariance matrix, which has not been well shaped for maximizing the system capacity. Thus, we split and propose a new iterative error covariance matrix to mitigate the system error and maximize the system capacity in this paper. Numerical results illustrate that our proposed method is strictly better than the traditional method.
2405.14654
Corentin Dancette
Julien Khlaut, Corentin Dancette, Elodie Ferreres, Alaedine Bennani, Paul H\'erent, Pierre Manceron
Efficient Medical Question Answering with Knowledge-Augmented Question Generation
Accepted at the Clinical Natural Language Processing Workshop, NAACL 2024
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain. Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind. In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach. We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model. Additionally, we introduce ECN-QA, a novel medical question answering dataset containing ``progressive questions'' composed of related sequential questions. We show the benefits of our training strategy on this dataset. The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned. The code and weights are available at https://github.com/raidium-med/MQG.
[ { "created": "Thu, 23 May 2024 14:53:52 GMT", "version": "v1" } ]
2024-05-24
[ [ "Khlaut", "Julien", "" ], [ "Dancette", "Corentin", "" ], [ "Ferreres", "Elodie", "" ], [ "Bennani", "Alaedine", "" ], [ "Hérent", "Paul", "" ], [ "Manceron", "Pierre", "" ] ]
In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain. Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind. In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach. We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model. Additionally, we introduce ECN-QA, a novel medical question answering dataset containing ``progressive questions'' composed of related sequential questions. We show the benefits of our training strategy on this dataset. The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned. The code and weights are available at https://github.com/raidium-med/MQG.
2203.12187
Tian Xie
Tian Xie, Xinyi Yang, Angela S. Lin, Feihong Wu, Kazuma Hashimoto, Jin Qu, Young Mo Kang, Wenpeng Yin, Huan Wang, Semih Yavuz, Gang Wu, Michael Jones, Richard Socher, Yingbo Zhou, Wenhao Liu, Caiming Xiong
Converse: A Tree-Based Modular Task-Oriented Dialogue System
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Creating a system that can have meaningful conversations with humans to help accomplish tasks is one of the ultimate goals of Artificial Intelligence (AI). It has defined the meaning of AI since the beginning. A lot has been accomplished in this area recently, with voice assistant products entering our daily lives and chat bot systems becoming commonplace in customer service. At first glance there seems to be no shortage of options for dialogue systems. However, the frequently deployed dialogue systems today seem to all struggle with a critical weakness - they are hard to build and harder to maintain. At the core of the struggle is the need to script every single turn of interactions between the bot and the human user. This makes the dialogue systems more difficult to maintain as the tasks become more complex and more tasks are added to the system. In this paper, we propose Converse, a flexible tree-based modular task-oriented dialogue system. Converse uses an and-or tree structure to represent tasks and offers powerful multi-task dialogue management. Converse supports task dependency and task switching, which are unique features compared to other open-source dialogue frameworks. At the same time, Converse aims to make the bot building process easy and simple, for both professional and non-professional software developers. The code is available at https://github.com/salesforce/Converse.
[ { "created": "Wed, 23 Mar 2022 04:19:05 GMT", "version": "v1" }, { "created": "Wed, 30 Mar 2022 23:48:44 GMT", "version": "v2" }, { "created": "Mon, 9 May 2022 20:38:58 GMT", "version": "v3" } ]
2022-05-11
[ [ "Xie", "Tian", "" ], [ "Yang", "Xinyi", "" ], [ "Lin", "Angela S.", "" ], [ "Wu", "Feihong", "" ], [ "Hashimoto", "Kazuma", "" ], [ "Qu", "Jin", "" ], [ "Kang", "Young Mo", "" ], [ "Yin", "Wenpeng", "" ], [ "Wang", "Huan", "" ], [ "Yavuz", "Semih", "" ], [ "Wu", "Gang", "" ], [ "Jones", "Michael", "" ], [ "Socher", "Richard", "" ], [ "Zhou", "Yingbo", "" ], [ "Liu", "Wenhao", "" ], [ "Xiong", "Caiming", "" ] ]
Creating a system that can have meaningful conversations with humans to help accomplish tasks is one of the ultimate goals of Artificial Intelligence (AI). It has defined the meaning of AI since the beginning. A lot has been accomplished in this area recently, with voice assistant products entering our daily lives and chatbot systems becoming commonplace in customer service. At first glance there seems to be no shortage of options for dialogue systems. However, today's frequently deployed dialogue systems all seem to struggle with a critical weakness - they are hard to build and harder to maintain. At the core of the struggle is the need to script every single turn of interaction between the bot and the human user. This makes dialogue systems increasingly difficult to maintain as tasks become more complex and more tasks are added to the system. In this paper, we propose Converse, a flexible tree-based modular task-oriented dialogue system. Converse uses an and-or tree structure to represent tasks and offers powerful multi-task dialogue management. Converse supports task dependency and task switching, which are unique features compared to other open-source dialogue frameworks. At the same time, Converse aims to make the bot building process easy and simple, for both professional and non-professional software developers. The code is available at https://github.com/salesforce/Converse.
2111.07187
Lei Zou
Lei Zou, Danqing Liao, Nina S.N. Lam, Michelle Meyer, Nasir G. Gharaibeh, Heng Cai, Bing Zhou, Dongying Li
Social Media for Emergency Rescue: An Analysis of Rescue Requests on Twitter during Hurricane Harvey
24 pages, 9 figures, 6 tables
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media plays increasingly significant roles in disaster response, but effectively leveraging social media for rescue is challenging. This study analyzed rescue requests on Twitter during the 2017 Hurricane Harvey, in which many residents resorted to social media to call for help. The objectives include (1) understanding the characteristics of rescue-request messages; (2) revealing the spatial-temporal patterns of rescue requests; (3) determining the social-geographical conditions of communities needing rescue; and (4) identifying the challenges of using social media for rescue and propose improvement strategies. About half of rescue requests either did not provide sufficient information or neglected to include rescue-related hashtags or accounts. Of the 824 geocoded unique rescue requests, 41% were from FEMA-defined minimal flood risk zones. Communities sending more rescue requests on Twitter were environmentally and socioeconomically more vulnerable. Finally, we derived a framework summarizing the steps and strategies needed to improve social media use for rescue operations.
[ { "created": "Sat, 13 Nov 2021 20:32:39 GMT", "version": "v1" } ]
2021-11-16
[ [ "Zou", "Lei", "" ], [ "Liao", "Danqing", "" ], [ "Lam", "Nina S. N.", "" ], [ "Meyer", "Michelle", "" ], [ "Gharaibeh", "Nasir G.", "" ], [ "Cai", "Heng", "" ], [ "Zhou", "Bing", "" ], [ "Li", "Dongying", "" ] ]
Social media plays increasingly significant roles in disaster response, but effectively leveraging social media for rescue is challenging. This study analyzed rescue requests on Twitter during the 2017 Hurricane Harvey, in which many residents resorted to social media to call for help. The objectives include (1) understanding the characteristics of rescue-request messages; (2) revealing the spatial-temporal patterns of rescue requests; (3) determining the social-geographical conditions of communities needing rescue; and (4) identifying the challenges of using social media for rescue and proposing improvement strategies. About half of the rescue requests either did not provide sufficient information or neglected to include rescue-related hashtags or accounts. Of the 824 geocoded unique rescue requests, 41% were from FEMA-defined minimal flood risk zones. Communities sending more rescue requests on Twitter were environmentally and socioeconomically more vulnerable. Finally, we derived a framework summarizing the steps and strategies needed to improve social media use for rescue operations.
1612.08510
Jian Shi
Jian Shi, Yue Dong, Hao Su, Stella X. Yu
Learning Non-Lambertian Object Intrinsics across ShapeNet Categories
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the non-Lambertian object intrinsic problem of recovering diffuse albedo, shading, and specular highlights from a single image of an object. We build a large-scale object intrinsics database based on existing 3D models in the ShapeNet database. Rendered with realistic environment maps, millions of synthetic images of objects and their corresponding albedo, shading, and specular ground-truth images are used to train an encoder-decoder CNN. Once trained, the network can decompose an image into the product of albedo and shading components, along with an additive specular component. Our CNN delivers accurate and sharp results in this classical inverse problem of computer vision, sharp details attributed to skip layer connections at corresponding resolutions from the encoder to the decoder. Benchmarked on our ShapeNet and MIT intrinsics datasets, our model consistently outperforms the state-of-the-art by a large margin. We train and test our CNN on different object categories. Perhaps surprising especially from the CNN classification perspective, our intrinsics CNN generalizes very well across categories. Our analysis shows that feature learning at the encoder stage is more crucial for developing a universal representation across categories. We apply our synthetic data trained model to images and videos downloaded from the internet, and observe robust and realistic intrinsics results. Quality non-Lambertian intrinsics could open up many interesting applications such as image-based albedo and specular editing.
[ { "created": "Tue, 27 Dec 2016 06:38:43 GMT", "version": "v1" } ]
2016-12-28
[ [ "Shi", "Jian", "" ], [ "Dong", "Yue", "" ], [ "Su", "Hao", "" ], [ "Yu", "Stella X.", "" ] ]
We consider the non-Lambertian object intrinsic problem of recovering diffuse albedo, shading, and specular highlights from a single image of an object. We build a large-scale object intrinsics database based on existing 3D models in the ShapeNet database. Rendered with realistic environment maps, millions of synthetic images of objects and their corresponding albedo, shading, and specular ground-truth images are used to train an encoder-decoder CNN. Once trained, the network can decompose an image into the product of albedo and shading components, along with an additive specular component. Our CNN delivers accurate and sharp results in this classical inverse problem of computer vision, with the sharp details attributed to skip-layer connections at corresponding resolutions from the encoder to the decoder. Benchmarked on our ShapeNet and MIT intrinsics datasets, our model consistently outperforms the state-of-the-art by a large margin. We train and test our CNN on different object categories. Perhaps surprisingly, especially from the CNN classification perspective, our intrinsics CNN generalizes very well across categories. Our analysis shows that feature learning at the encoder stage is more crucial for developing a universal representation across categories. We apply our synthetic data trained model to images and videos downloaded from the internet, and observe robust and realistic intrinsics results. Quality non-Lambertian intrinsics could open up many interesting applications such as image-based albedo and specular editing.
2406.16356
Rem Hida
Rem Hida, Junki Ohmura, Toshiyuki Sekiya
Evaluation of Instruction-Following Ability for Large Language Models on Story-Ending Generation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instruction-tuned Large Language Models (LLMs) have achieved remarkable performance across various benchmark tasks. While providing instructions to LLMs for guiding their generations is user-friendly, assessing their instruction-following capabilities is still unclarified due to a lack of evaluation metrics. In this paper, we focus on evaluating the instruction-following ability of LLMs in the context of story-ending generation, which requires diverse and context-specific instructions. We propose an automatic evaluation pipeline that utilizes a machine reading comprehension (MRC) model to determine whether the generated story-ending reflects instruction. Our findings demonstrate that our proposed metric aligns with human evaluation. Furthermore, our experiments confirm that recent open-source LLMs can achieve instruction-following performance close to GPT-3.5, as assessed through automatic evaluation.
[ { "created": "Mon, 24 Jun 2024 06:53:36 GMT", "version": "v1" } ]
2024-06-25
[ [ "Hida", "Rem", "" ], [ "Ohmura", "Junki", "" ], [ "Sekiya", "Toshiyuki", "" ] ]
Instruction-tuned Large Language Models (LLMs) have achieved remarkable performance across various benchmark tasks. While providing instructions to guide LLM generations is user-friendly, assessing their instruction-following capabilities remains an open problem due to a lack of evaluation metrics. In this paper, we focus on evaluating the instruction-following ability of LLMs in the context of story-ending generation, which requires diverse and context-specific instructions. We propose an automatic evaluation pipeline that utilizes a machine reading comprehension (MRC) model to determine whether the generated story-ending reflects the instruction. Our findings demonstrate that our proposed metric aligns with human evaluation. Furthermore, our experiments confirm that recent open-source LLMs can achieve instruction-following performance close to GPT-3.5, as assessed through automatic evaluation.
cs/0603072
Raghuraman Mudumbai
R. Mudumbai, J. Hespanha, U. Madhow, G. Barriac
Distributed Transmit Beamforming using Feedback Control
null
null
null
null
cs.IT math.IT
null
A simple feedback control algorithm is presented for distributed beamforming in a wireless network. A network of wireless sensors that seek to cooperatively transmit a common message signal to a Base Station (BS) is considered. In this case, it is well-known that substantial energy efficiencies are possible by using distributed beamforming. The feedback algorithm is shown to achieve the carrier phase coherence required for beamforming in a scalable and distributed manner. In the proposed algorithm, each sensor independently makes a random adjustment to its carrier phase. Assuming that the BS is able to broadcast one bit of feedback each timeslot about the change in received signal to noise ratio (SNR), the sensors are able to keep the favorable phase adjustments and discard the unfavorable ones, asymptotically achieving perfect phase coherence. A novel analytical model is derived that accurately predicts the convergence rate. The analytical model is used to optimize the algorithm for fast convergence and to establish the scalability of the algorithm.
[ { "created": "Sat, 18 Mar 2006 01:59:25 GMT", "version": "v1" } ]
2007-07-16
[ [ "Mudumbai", "R.", "" ], [ "Hespanha", "J.", "" ], [ "Madhow", "U.", "" ], [ "Barriac", "G.", "" ] ]
A simple feedback control algorithm is presented for distributed beamforming in a wireless network. A network of wireless sensors that seek to cooperatively transmit a common message signal to a Base Station (BS) is considered. In this case, it is well-known that substantial energy efficiencies are possible by using distributed beamforming. The feedback algorithm is shown to achieve the carrier phase coherence required for beamforming in a scalable and distributed manner. In the proposed algorithm, each sensor independently makes a random adjustment to its carrier phase. Assuming that the BS is able to broadcast one bit of feedback each timeslot about the change in received signal to noise ratio (SNR), the sensors are able to keep the favorable phase adjustments and discard the unfavorable ones, asymptotically achieving perfect phase coherence. A novel analytical model is derived that accurately predicts the convergence rate. The analytical model is used to optimize the algorithm for fast convergence and to establish the scalability of the algorithm.
1710.06745
Samuel Burden Ph.D.
Bora S. Banjanin, Samuel A. Burden
Nonsmooth optimal value and policy functions in mechanical systems subject to unilateral constraints
Submitted to IEEE CSL
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art approaches to optimal control use smooth approximations of value and policy functions and gradient-based algorithms for improving approximator parameters. Unfortunately, we show that value and policy functions that arise in optimal control of mechanical systems subject to unilateral constraints -- i.e. the contact-rich dynamics of robot locomotion and manipulation -- are generally nonsmooth due to the underlying dynamics exhibiting discontinuous or piecewise-differentiable trajectory outcomes. Simple mechanical systems are used to illustrate this result and the implications for optimal control of contact-rich robot dynamics.
[ { "created": "Wed, 18 Oct 2017 14:21:14 GMT", "version": "v1" }, { "created": "Tue, 27 Aug 2019 20:02:17 GMT", "version": "v2" } ]
2019-08-29
[ [ "Banjanin", "Bora S.", "" ], [ "Burden", "Samuel A.", "" ] ]
State-of-the-art approaches to optimal control use smooth approximations of value and policy functions and gradient-based algorithms for improving approximator parameters. Unfortunately, we show that value and policy functions that arise in optimal control of mechanical systems subject to unilateral constraints -- i.e. the contact-rich dynamics of robot locomotion and manipulation -- are generally nonsmooth due to the underlying dynamics exhibiting discontinuous or piecewise-differentiable trajectory outcomes. Simple mechanical systems are used to illustrate this result and the implications for optimal control of contact-rich robot dynamics.
2312.02839
Mohammad Naseritehrani
Mohammad NaseriTehrani, MohammadJavad Salehi and Antti T\"olli
Low-complexity Linear Multicast Beamforming for Cache-aided MIMO Communications
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
A practical and scalable multicast beamformer design in multi-input multi-output~(MIMO) coded caching~(CC) systems is introduced in this paper. The proposed approach allows multicast transmission to multiple groups with partially overlapping user sets using receiver dimensions to distinguish between different group-specific streams. Additionally, it provides flexibility in accommodating various parameter configurations of the MIMO-CC setup and overcomes practical limitations, such as the requirement to use successive interference cancellation~(SIC) at the receiver, while achieving the same degrees-of-freedom~(DoF). To evaluate the proposed scheme, we define the symmetric rate as the sum rate of the partially overlapping streams received per user, comprising a linear multistream multicast transmission vector and the linear minimum mean square error~(LMMSE) receiver. The resulting non-convex symmetric rate maximization problem is solved using alternative optimization and successive convex approximation~(SCA). Moreover, a fast iterative Lagrangian-based algorithm is developed, significantly reducing the computational overhead compared to previous designs. The effectiveness of our proposed method is demonstrated by extensive simulations.
[ { "created": "Tue, 5 Dec 2023 15:45:19 GMT", "version": "v1" } ]
2023-12-06
[ [ "NaseriTehrani", "Mohammad", "" ], [ "Salehi", "MohammadJavad", "" ], [ "Tölli", "Antti", "" ] ]
A practical and scalable multicast beamformer design in multi-input multi-output (MIMO) coded caching (CC) systems is introduced in this paper. The proposed approach allows multicast transmission to multiple groups with partially overlapping user sets, using receiver dimensions to distinguish between different group-specific streams. Additionally, it provides flexibility in accommodating various parameter configurations of the MIMO-CC setup and overcomes practical limitations, such as the requirement to use successive interference cancellation (SIC) at the receiver, while achieving the same degrees-of-freedom (DoF). To evaluate the proposed scheme, we define the symmetric rate as the sum rate of the partially overlapping streams received per user, comprising a linear multistream multicast transmission vector and the linear minimum mean square error (LMMSE) receiver. The resulting non-convex symmetric rate maximization problem is solved using alternating optimization and successive convex approximation (SCA). Moreover, a fast iterative Lagrangian-based algorithm is developed, significantly reducing the computational overhead compared to previous designs. The effectiveness of our proposed method is demonstrated by extensive simulations.
1712.02085
Yaolu Qin
Feng Shu, Yaolu Qin, Tingting Liu, Linqing Gui, Yijin Zhang, Jun Li, and Zhu Han
Low-Complexity and High-Resolution DOA Estimation for Hybrid Analog and Digital Massive MIMO Receive Array
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large-scale fully-digital receive antenna array can provide very high-resolution direction of arrival (DOA) estimation, but resulting in a significantly high RF-chain circuit cost. Thus, a hybrid analog and digital (HAD) structure is preferred. Two phase alignment (PA) methods, HAD PA (HADPA) and hybrid digital and analog PA (HDAPA), are proposed to estimate DOA based on the parametric method. Compared to analog phase alignment (APA), they can significantly reduce the complexity in the PA phases. Subsequently, a fast root multiple signal classification HDAPA (Root-MUSIC-HDAPA) method is proposed specially for this hybrid structure to implement an approximately analytical solution. Due to the HAD structure, there exists the effect of direction-finding ambiguity. A smart strategy of maximizing the average receive power is adopted to delete those spurious solutions and preserve the true optimal solution by linear searching over a set of limited finite candidate directions. This results in a significant reduction in computational complexity. Eventually, the Cramer-Rao lower bound (CRLB) of finding emitter direction using the HAD structure is derived. Simulation results show that our proposed methods, Root-MUSIC-HDAPA and HDAPA, can achieve the hybrid CRLB with their complexities being significantly lower than those of pure linear searching-based methods, such as APA.
[ { "created": "Wed, 6 Dec 2017 08:50:37 GMT", "version": "v1" } ]
2017-12-07
[ [ "Shu", "Feng", "" ], [ "Qin", "Yaolu", "" ], [ "Liu", "Tingting", "" ], [ "Gui", "Linqing", "" ], [ "Zhang", "Yijin", "" ], [ "Li", "Jun", "" ], [ "Han", "Zhu", "" ] ]
A large-scale fully-digital receive antenna array can provide very high-resolution direction of arrival (DOA) estimation, but results in a significantly high RF-chain circuit cost. Thus, a hybrid analog and digital (HAD) structure is preferred. Two phase alignment (PA) methods, HAD PA (HADPA) and hybrid digital and analog PA (HDAPA), are proposed to estimate DOA based on the parametric method. Compared to analog phase alignment (APA), they can significantly reduce the complexity in the PA phases. Subsequently, a fast root multiple signal classification HDAPA (Root-MUSIC-HDAPA) method is proposed specifically for this hybrid structure to implement an approximately analytical solution. Due to the HAD structure, there exists the effect of direction-finding ambiguity. A smart strategy of maximizing the average receive power is adopted to delete those spurious solutions and preserve the true optimal solution by linear searching over a set of limited finite candidate directions. This results in a significant reduction in computational complexity. Finally, the Cramer-Rao lower bound (CRLB) of finding the emitter direction using the HAD structure is derived. Simulation results show that our proposed methods, Root-MUSIC-HDAPA and HDAPA, can achieve the hybrid CRLB with their complexities being significantly lower than those of pure linear searching-based methods, such as APA.
1805.04658
Hao Peng
Hao Peng, Sam Thomson, and Noah A. Smith
Backpropagating through Structured Argmax using a SPIGOT
ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers. SPIGOT requires no marginal inference, unlike structured attention networks (Kim et al., 2017) and some reinforcement learning-inspired solutions (Yogatama et al., 2017). Like so-called straight-through estimators (Hinton, 2012), SPIGOT defines gradient-like quantities associated with intermediate nondifferentiable operations, allowing backpropagation before and after them; SPIGOT's proxy aims to ensure that, after a parameter update, the intermediate structure will remain well-formed. We experiment on two structured NLP pipelines: syntactic-then-semantic dependency parsing, and semantic parsing followed by sentiment classification. We show that training with SPIGOT leads to a larger improvement on the downstream task than a modularly-trained pipeline, the straight-through estimator, and structured attention, reaching a new state of the art on semantic dependency parsing.
[ { "created": "Sat, 12 May 2018 05:27:45 GMT", "version": "v1" } ]
2018-05-15
[ [ "Peng", "Hao", "" ], [ "Thomson", "Sam", "" ], [ "Smith", "Noah A.", "" ] ]
We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers. SPIGOT requires no marginal inference, unlike structured attention networks (Kim et al., 2017) and some reinforcement learning-inspired solutions (Yogatama et al., 2017). Like so-called straight-through estimators (Hinton, 2012), SPIGOT defines gradient-like quantities associated with intermediate nondifferentiable operations, allowing backpropagation before and after them; SPIGOT's proxy aims to ensure that, after a parameter update, the intermediate structure will remain well-formed. We experiment on two structured NLP pipelines: syntactic-then-semantic dependency parsing, and semantic parsing followed by sentiment classification. We show that training with SPIGOT leads to a larger improvement on the downstream task than a modularly-trained pipeline, the straight-through estimator, and structured attention, reaching a new state of the art on semantic dependency parsing.
2102.08201
Mohammadi Zaki
Mohammadi Zaki, Avinash Mohan, Aditya Gopalan and Shie Mannor
Improper Reinforcement Learning with Gradient-based Policy Optimization
null
null
null
null
cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. \par We propose a gradient-based approach that operates over a class of improper mixtures of the controllers. We derive convergence rate guarantees for the approach assuming access to a gradient oracle. The value function of the mixture and its gradient may not be available in closed-form; however, we show that we can employ rollouts and simultaneous perturbation stochastic approximation (SPSA) for explicit gradient descent optimization. Numerical results on (i) the standard control theoretic benchmark of stabilizing an inverted pendulum and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable\footnote{Under review. Please do not distribute.}.
[ { "created": "Tue, 16 Feb 2021 14:53:55 GMT", "version": "v1" }, { "created": "Sun, 21 Feb 2021 05:38:08 GMT", "version": "v2" }, { "created": "Sat, 3 Jul 2021 06:11:25 GMT", "version": "v3" } ]
2021-07-06
[ [ "Zaki", "Mohammadi", "" ], [ "Mohan", "Avinash", "" ], [ "Gopalan", "Aditya", "" ], [ "Mannor", "Shie", "" ] ]
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. We propose a gradient-based approach that operates over a class of improper mixtures of the controllers. We derive convergence rate guarantees for the approach assuming access to a gradient oracle. The value function of the mixture and its gradient may not be available in closed-form; however, we show that we can employ rollouts and simultaneous perturbation stochastic approximation (SPSA) for explicit gradient descent optimization. Numerical results on (i) the standard control theoretic benchmark of stabilizing an inverted pendulum and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable.
2012.09372
Pengju Zhang
Pengju Zhang, Yihong Wu, Jiagang Zhu
Semi-Global Shape-aware Network
8 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-local operations are usually used to capture long-range dependencies via aggregating global context to each position recently. However, most of the methods cannot preserve object shapes since they only focus on feature similarity but ignore proximity between central and other positions for capturing long-range dependencies, while shape-awareness is beneficial to many computer vision tasks. In this paper, we propose a Semi-Global Shape-aware Network (SGSNet) considering both feature similarity and proximity for preserving object shapes when modeling long-range dependencies. A hierarchical way is taken to aggregate global context. In the first level, each position in the whole feature map only aggregates contextual information in vertical and horizontal directions according to both similarity and proximity. And then the result is input into the second level to do the same operations. By this hierarchical way, each central position gains supports from all other positions, and the combination of similarity and proximity makes each position gain supports mostly from the same semantic object. Moreover, we also propose a linear time algorithm for the aggregation of contextual information, where each of rows and columns in the feature map is treated as a binary tree to reduce similarity computation cost. Experiments on semantic segmentation and image retrieval show that adding SGSNet to existing networks gains solid improvements on both accuracy and efficiency.
[ { "created": "Thu, 17 Dec 2020 02:52:10 GMT", "version": "v1" } ]
2020-12-18
[ [ "Zhang", "Pengju", "" ], [ "Wu", "Yihong", "" ], [ "Zhu", "Jiagang", "" ] ]
Non-local operations have recently been widely used to capture long-range dependencies by aggregating global context to each position. However, most of these methods cannot preserve object shapes, since they focus only on feature similarity and ignore the proximity between central and other positions when capturing long-range dependencies, while shape-awareness is beneficial to many computer vision tasks. In this paper, we propose a Semi-Global Shape-aware Network (SGSNet) that considers both feature similarity and proximity to preserve object shapes when modeling long-range dependencies. A hierarchical approach is taken to aggregate global context. In the first level, each position in the whole feature map aggregates contextual information only in the vertical and horizontal directions according to both similarity and proximity. The result is then input into the second level, which performs the same operations. Through this hierarchy, each central position gains support from all other positions, and the combination of similarity and proximity makes each position gain support mostly from the same semantic object. Moreover, we propose a linear-time algorithm for the aggregation of contextual information, in which each row and column in the feature map is treated as a binary tree to reduce the cost of similarity computation. Experiments on semantic segmentation and image retrieval show that adding SGSNet to existing networks yields solid improvements in both accuracy and efficiency.
2306.15470
Zhe Wang
Zhe Wang, Yansha Deng, and A. Hamid Aghvami
Goal-oriented Semantic Communications for Avatar-centric Augmented Reality
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
With the advent of the emerging metaverse and its related applications in Augmented Reality (AR), the current bit-oriented network struggles to support real-time changes to the vast amount of associated information, hindering its development. Thus, a critical revolution in Sixth Generation (6G) networks is envisioned through the joint exploitation of information context and its importance to the task, leading to a communication paradigm shift towards the semantic and effectiveness levels. However, current research has not yet proposed any explicit and systematic communication framework for AR applications that incorporates these two levels. To fill this research gap, this paper presents a task-oriented and semantics-aware communication framework for augmented reality (TSAR) to enhance communication efficiency and effectiveness in 6G. Specifically, we first analyse the traditional wireless AR point cloud communication framework and then summarize our proposed semantic information along with the end-to-end wireless communication. We then detail the design blocks of the TSAR framework, covering both the semantic and effectiveness levels. Finally, extensive experiments demonstrate that, compared to the traditional point cloud communication framework, our proposed TSAR significantly reduces wireless AR application transmission latency by 95.6%, while improving communication effectiveness in the geometry and color aspects by up to 82.4% and 20.4%, respectively.
[ { "created": "Tue, 27 Jun 2023 13:41:54 GMT", "version": "v1" }, { "created": "Tue, 4 Jul 2023 13:17:42 GMT", "version": "v2" }, { "created": "Tue, 26 Mar 2024 17:32:47 GMT", "version": "v3" }, { "created": "Mon, 17 Jun 2024 20:57:29 GMT", "version": "v4" } ]
2024-06-19
[ [ "Wang", "Zhe", "" ], [ "Deng", "Yansha", "" ], [ "Aghvami", "A. Hamid", "" ] ]
With the advent of the emerging metaverse and its related applications in Augmented Reality (AR), the current bit-oriented network struggles to support real-time changes to the vast amount of associated information, hindering its development. Thus, a critical revolution in Sixth Generation (6G) networks is envisioned through the joint exploitation of information context and its importance to the task, leading to a communication paradigm shift towards the semantic and effectiveness levels. However, current research has not yet proposed any explicit and systematic communication framework for AR applications that incorporates these two levels. To fill this research gap, this paper presents a task-oriented and semantics-aware communication framework for augmented reality (TSAR) to enhance communication efficiency and effectiveness in 6G. Specifically, we first analyse the traditional wireless AR point cloud communication framework and then summarize our proposed semantic information along with the end-to-end wireless communication. We then detail the design blocks of the TSAR framework, covering both the semantic and effectiveness levels. Finally, extensive experiments demonstrate that, compared to the traditional point cloud communication framework, our proposed TSAR significantly reduces wireless AR application transmission latency by 95.6%, while improving communication effectiveness in the geometry and color aspects by up to 82.4% and 20.4%, respectively.
2111.07518
Qi Song
Qiquan Zhang, Qi Song, Zhaoheng Ni, Aaron Nicolson, Haizhou Li
Time-Frequency Attention for Monaural Speech Enhancement
5 pages, 4 figures, Accepted and presented at ICASSP 2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Most studies on speech enhancement do not consider the energy distribution of speech in the time-frequency (T-F) representation, which is important for accurate prediction of the mask or spectra. In this paper, we present a simple yet effective T-F attention (TFA) module, in which a 2-D attention map is produced to provide differentiated weights to the spectral components of the T-F representation. To validate the effectiveness of our proposed TFA module, we use the residual temporal convolution network (ResTCN) as the backbone network and conduct extensive experiments on two commonly used training targets. Our experiments demonstrate that applying the TFA module significantly improves performance in terms of five objective evaluation metrics with negligible parameter overhead. The evaluation results show that the proposed ResTCN with the TFA module (ResTCN+TFA) consistently outperforms other baselines by a large margin.
[ { "created": "Mon, 15 Nov 2021 03:43:04 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2021 06:24:34 GMT", "version": "v2" }, { "created": "Wed, 9 Mar 2022 09:02:50 GMT", "version": "v3" } ]
2022-03-10
[ [ "Zhang", "Qiquan", "" ], [ "Song", "Qi", "" ], [ "Ni", "Zhaoheng", "" ], [ "Nicolson", "Aaron", "" ], [ "Li", "Haizhou", "" ] ]
Most studies on speech enhancement do not consider the energy distribution of speech in the time-frequency (T-F) representation, which is important for accurate prediction of the mask or spectra. In this paper, we present a simple yet effective T-F attention (TFA) module, in which a 2-D attention map is produced to provide differentiated weights to the spectral components of the T-F representation. To validate the effectiveness of our proposed TFA module, we use the residual temporal convolution network (ResTCN) as the backbone network and conduct extensive experiments on two commonly used training targets. Our experiments demonstrate that applying the TFA module significantly improves performance in terms of five objective evaluation metrics with negligible parameter overhead. The evaluation results show that the proposed ResTCN with the TFA module (ResTCN+TFA) consistently outperforms other baselines by a large margin.
2209.03356
Ruikang Luo Mr
Ruikang Luo, Yaofeng Song, Liping Huang, Yicheng Zhang and Rong Su
AST-GIN: Attribute-Augmented Spatial-Temporal Graph Informer Network for Electric Vehicle Charging Station Availability Forecasting
10 pages; 17 figures; Under review for IEEE Transaction on Vehicular Technology
null
null
null
cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electric Vehicle (EV) charging demand and charging station availability forecasting is one of the challenges in intelligent transportation systems. With accurate EV station availability prediction, suitable charging behaviors can be scheduled in advance to relieve range anxiety. Many deep learning methods have been proposed to address this issue; however, due to the complex road network structure and diverse external factors, such as points of interest (POIs) and weather effects, many commonly used algorithms can only extract historical usage information without considering the comprehensive influence of external factors. To enhance prediction accuracy and interpretability, the Attribute-Augmented Spatial-Temporal Graph Informer (AST-GIN) structure is proposed in this study by combining the Graph Convolutional Network (GCN) layer and the Informer layer to extract both the external and internal spatial-temporal dependence of relevant transportation data. The external factors are modeled as dynamic attributes by the attribute-augmented encoder for training. The AST-GIN model is tested on data collected in Dundee City, and experimental results show the effectiveness of our model in accounting for the influence of external factors over various horizon settings compared with other baselines.
[ { "created": "Wed, 7 Sep 2022 13:51:45 GMT", "version": "v1" } ]
2022-09-09
[ [ "Luo", "Ruikang", "" ], [ "Song", "Yaofeng", "" ], [ "Huang", "Liping", "" ], [ "Zhang", "Yicheng", "" ], [ "Su", "Rong", "" ] ]
Electric Vehicle (EV) charging demand and charging station availability forecasting is one of the challenges in intelligent transportation systems. With accurate EV station availability prediction, suitable charging behaviors can be scheduled in advance to relieve range anxiety. Many deep learning methods have been proposed to address this issue; however, due to the complex road network structure and diverse external factors, such as points of interest (POIs) and weather effects, many commonly used algorithms can only extract historical usage information without considering the comprehensive influence of external factors. To enhance prediction accuracy and interpretability, the Attribute-Augmented Spatial-Temporal Graph Informer (AST-GIN) structure is proposed in this study by combining the Graph Convolutional Network (GCN) layer and the Informer layer to extract both the external and internal spatial-temporal dependence of relevant transportation data. The external factors are modeled as dynamic attributes by the attribute-augmented encoder for training. The AST-GIN model is tested on data collected in Dundee City, and experimental results show the effectiveness of our model in accounting for the influence of external factors over various horizon settings compared with other baselines.
2004.09630
Francisco J. Escribano
Francisco J. Escribano, Jos\'e S\'aez-Landete and Alexandre Wagemakers
Optimization of Chaos-based Coded Modulations for Compensation of Amplifier Nonlinearities
2 pages, 3 figures
Electronics Letters, vol. 52, no. 22, pp. 1855-1857, 27 10 2016
10.1049/el.2016.2864
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we expand the possibilities of a known class of chaos-based coded modulation (CCM) systems to address the problem of high-power amplifier (HPA) nonlinearity. The hypothesis is that the nonlinear nature of the CCM can be exploited to counteract the HPA's effects on system performance, avoiding the need for a predistorter. We propose an optimization method for the design of the CCM to prove this hypothesis. The results show that, for a given back-off level, a nonlinear mapping function for the chaos-based encoder and decoder can be designed to compensate for the HPA's effect on the error rate.
[ { "created": "Mon, 20 Apr 2020 20:47:29 GMT", "version": "v1" } ]
2020-04-22
[ [ "Escribano", "Francisco J.", "" ], [ "Sáez-Landete", "José", "" ], [ "Wagemakers", "Alexandre", "" ] ]
In this work we expand the possibilities of a known class of chaos-based coded modulation (CCM) systems to address the problem of high-power amplifier (HPA) nonlinearity. The hypothesis is that the nonlinear nature of the CCM can be exploited to counteract the HPA's effects on system performance, avoiding the need for a predistorter. We propose an optimization method for the design of the CCM to prove this hypothesis. The results show that, for a given back-off level, a nonlinear mapping function for the chaos-based encoder and decoder can be designed to compensate for the HPA's effect on the error rate.
1806.09823
Ilya Razenshteyn
Alexandr Andoni, Piotr Indyk, Ilya Razenshteyn
Approximate Nearest Neighbor Search in High Dimensions
27 pages, no figures; to appear in the proceedings of ICM 2018 (accompanying the talk by P. Indyk)
null
null
null
cs.DS cs.CG cs.DB stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nearest neighbor problem is defined as follows: Given a set $P$ of $n$ points in some metric space $(X,D)$, build a data structure that, given any point $q$, returns a point in $P$ that is closest to $q$ (its "nearest neighbor" in $P$). The data structure stores additional information about the set $P$, which is then used to find the nearest neighbor without computing all distances between $q$ and $P$. The problem has a wide range of applications in machine learning, computer vision, databases and other fields. To reduce the time needed to find nearest neighbors and the amount of memory used by the data structure, one can formulate the {\em approximate} nearest neighbor problem, where the goal is to return any point $p' \in P$ such that the distance from $q$ to $p'$ is at most $c \cdot \min_{p \in P} D(q,p)$, for some $c \geq 1$. Over the last two decades, many efficient solutions to this problem have been developed. In this article we survey these developments, as well as their connections to questions in geometric functional analysis and combinatorial geometry.
[ { "created": "Tue, 26 Jun 2018 07:35:45 GMT", "version": "v1" } ]
2018-06-27
[ [ "Andoni", "Alexandr", "" ], [ "Indyk", "Piotr", "" ], [ "Razenshteyn", "Ilya", "" ] ]
The nearest neighbor problem is defined as follows: Given a set $P$ of $n$ points in some metric space $(X,D)$, build a data structure that, given any point $q$, returns a point in $P$ that is closest to $q$ (its "nearest neighbor" in $P$). The data structure stores additional information about the set $P$, which is then used to find the nearest neighbor without computing all distances between $q$ and $P$. The problem has a wide range of applications in machine learning, computer vision, databases and other fields. To reduce the time needed to find nearest neighbors and the amount of memory used by the data structure, one can formulate the {\em approximate} nearest neighbor problem, where the goal is to return any point $p' \in P$ such that the distance from $q$ to $p'$ is at most $c \cdot \min_{p \in P} D(q,p)$, for some $c \geq 1$. Over the last two decades, many efficient solutions to this problem have been developed. In this article we survey these developments, as well as their connections to questions in geometric functional analysis and combinatorial geometry.
2202.07215
Byeongjun Park
Byeongjun Park, Jeongsoo Kim, Seungju Cho, Heeseon Kim, Changick Kim
Balancing Domain Experts for Long-Tailed Camera-Trap Recognition
5 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Label distributions in camera-trap images are highly imbalanced and long-tailed, causing neural networks to be biased towards head classes that appear frequently. Although long-tail learning has been extensively explored to address data imbalances, few studies have considered camera-trap characteristics, such as the multi-domain and multi-frame setup. Here, we propose a unified framework and introduce two datasets for long-tailed camera-trap recognition. We first design domain experts, where each expert learns to balance the imperfect decision boundaries caused by data imbalances, and the experts complement each other to generate domain-balanced decision boundaries. We also propose a flow consistency loss to focus on moving objects, expecting the class activation maps of multiple frames to match the optical flow maps of the input images. Moreover, two long-tailed camera-trap datasets, WCS-LT and DMZ-LT, are introduced to validate our methods. Experimental results show the effectiveness of our framework, and the proposed methods outperform previous methods on recessive domain samples.
[ { "created": "Tue, 15 Feb 2022 06:08:13 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 01:41:01 GMT", "version": "v2" } ]
2022-02-17
[ [ "Park", "Byeongjun", "" ], [ "Kim", "Jeongsoo", "" ], [ "Cho", "Seungju", "" ], [ "Kim", "Heeseon", "" ], [ "Kim", "Changick", "" ] ]
Label distributions in camera-trap images are highly imbalanced and long-tailed, causing neural networks to be biased towards head classes that appear frequently. Although long-tail learning has been extensively explored to address data imbalances, few studies have considered camera-trap characteristics, such as the multi-domain and multi-frame setup. Here, we propose a unified framework and introduce two datasets for long-tailed camera-trap recognition. We first design domain experts, where each expert learns to balance the imperfect decision boundaries caused by data imbalances, and the experts complement each other to generate domain-balanced decision boundaries. We also propose a flow consistency loss to focus on moving objects, expecting the class activation maps of multiple frames to match the optical flow maps of the input images. Moreover, two long-tailed camera-trap datasets, WCS-LT and DMZ-LT, are introduced to validate our methods. Experimental results show the effectiveness of our framework, and the proposed methods outperform previous methods on recessive domain samples.
2307.01570
Thien Van Luong Dr
Vu-Duc Ngo, Tuan-Cuong Vuong, Thien Van Luong, and Hung Tran
Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
The Internet of Things (IoT) has been playing an important role in many sectors, such as smart cities, smart agriculture, smart healthcare, and smart manufacturing. However, IoT devices are highly vulnerable to cyber-attacks, which may result in security breaches and data leakage. To effectively prevent these attacks, a variety of machine learning-based network intrusion detection methods for IoT networks have been developed, which often rely on either feature extraction or feature selection techniques to reduce the dimension of the input data before it is fed into the machine learning models. This aims to make the detection complexity low enough for real-time operation, which is particularly vital in any intrusion detection system. This paper provides a comprehensive comparison between these two feature reduction methods for intrusion detection in terms of various performance metrics, namely, precision, recall, detection accuracy, and runtime complexity, on the modern UNSW-NB15 dataset for both binary and multiclass classification. In general, the feature selection method not only provides better detection performance but also lower training and inference time than its feature extraction counterpart, especially as the number of reduced features K increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when K is very small, such as K = 4. Additionally, feature extraction is less sensitive than feature selection to changes in the number of reduced features K, and this holds true for both binary and multiclass classification. Based on this comparison, we provide a useful guideline for selecting a suitable intrusion detection type for each specific scenario, as detailed in Tab. 14 at the end of Section IV.
[ { "created": "Tue, 4 Jul 2023 08:48:01 GMT", "version": "v1" } ]
2023-07-06
[ [ "Ngo", "Vu-Duc", "" ], [ "Vuong", "Tuan-Cuong", "" ], [ "Van Luong", "Thien", "" ], [ "Tran", "Hung", "" ] ]
The Internet of Things (IoT) has been playing an important role in many sectors, such as smart cities, smart agriculture, smart healthcare, and smart manufacturing. However, IoT devices are highly vulnerable to cyber-attacks, which may result in security breaches and data leakage. To effectively prevent these attacks, a variety of machine learning-based network intrusion detection methods for IoT networks have been developed, which often rely on either feature extraction or feature selection techniques to reduce the dimension of the input data before it is fed into the machine learning models. This aims to make the detection complexity low enough for real-time operation, which is particularly vital in any intrusion detection system. This paper provides a comprehensive comparison between these two feature reduction methods for intrusion detection in terms of various performance metrics, namely, precision, recall, detection accuracy, and runtime complexity, on the modern UNSW-NB15 dataset for both binary and multiclass classification. In general, the feature selection method not only provides better detection performance but also lower training and inference time than its feature extraction counterpart, especially as the number of reduced features K increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when K is very small, such as K = 4. Additionally, feature extraction is less sensitive than feature selection to changes in the number of reduced features K, and this holds true for both binary and multiclass classification. Based on this comparison, we provide a useful guideline for selecting a suitable intrusion detection type for each specific scenario, as detailed in Tab. 14 at the end of Section IV.
1003.3386
Hakan Ozadam
Edgar Martinez-Moro, Hakan Ozadam, Ferruh Ozbudak, Steve Szabo
Monomial-like codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a generalization of cyclic codes of length p^s over F_{p^a}, we study n-dimensional cyclic codes of length p^{s_1} X ... X p^{s_n} over F_{p^a} generated by a single "monomial". Namely, we study multi-variable cyclic codes of the form <(x_1 - 1)^{i_1} ... (x_n - 1)^{i_n}> in F_{p^a}[x_1...x_n] / < x_1^{p^{s_1}}-1, ..., x_n^{p^{s_n}}-1 >. We call such codes monomial-like codes. We show that these codes arise from the product of certain single-variable codes and we determine their minimum Hamming distance. We determine the dual of monomial-like codes, yielding a parity check matrix. We also present an alternative way of constructing a parity check matrix using the Hasse derivative. We study the weight hierarchy of certain monomial-like codes. We simplify an expression that gives us the weight hierarchy of these codes.
[ { "created": "Wed, 17 Mar 2010 15:03:14 GMT", "version": "v1" } ]
2010-03-18
[ [ "Martinez-Moro", "Edgar", "" ], [ "Ozadam", "Hakan", "" ], [ "Ozbudak", "Ferruh", "" ], [ "Szabo", "Steve", "" ] ]
As a generalization of cyclic codes of length p^s over F_{p^a}, we study n-dimensional cyclic codes of length p^{s_1} X ... X p^{s_n} over F_{p^a} generated by a single "monomial". Namely, we study multi-variable cyclic codes of the form <(x_1 - 1)^{i_1} ... (x_n - 1)^{i_n}> in F_{p^a}[x_1...x_n] / < x_1^{p^{s_1}}-1, ..., x_n^{p^{s_n}}-1 >. We call such codes monomial-like codes. We show that these codes arise from the product of certain single-variable codes and we determine their minimum Hamming distance. We determine the dual of monomial-like codes, yielding a parity check matrix. We also present an alternative way of constructing a parity check matrix using the Hasse derivative. We study the weight hierarchy of certain monomial-like codes. We simplify an expression that gives us the weight hierarchy of these codes.
2204.12261
Djordje Jevdjic
Dehui Lin, Yasamin Tabatabaee, Yash Pote, and Djordje Jevdjic
Managing Reliability Skew in DNA Storage
In Proceedings of the International Symposium on Computer Architecture (ISCA 2022)
null
10.1145/3470496.3527441
null
cs.ET cs.AR cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
DNA is emerging as an increasingly attractive medium for data storage due to a number of important and unique advantages it offers, most notably its unprecedented durability and density. While the technology is evolving rapidly, the prohibitive cost of reads and writes, and the high frequency and peculiar nature of errors occurring in DNA storage, pose a significant challenge to its adoption. In this work we make a novel observation that the probability of successfully recovering a given bit from any type of DNA-based storage system highly depends on its physical location within the DNA molecule. In other words, when used as a storage medium, some parts of DNA molecules appear significantly more reliable than others. We show that large differences in reliability between different parts of DNA molecules lead to highly inefficient use of error-correction resources, and that commonly used techniques such as unequal error-correction cannot be used to bridge the reliability gap between different locations in the context of DNA storage. We then propose two approaches to address the problem. The first approach is general and applies to any type of data; it stripes the data and ECC codewords across DNA molecules in a particular fashion such that the effects of errors are spread out evenly across different codewords and molecules, effectively de-biasing the underlying storage medium. The second approach is application-specific, and seeks to leverage the underlying reliability bias by using application-aware mapping of data onto DNA molecules, such that data that requires higher reliability is stored in more reliable locations, whereas data that needs lower reliability is stored in less reliable parts of DNA molecules. We show that the proposed data mapping can be used to achieve graceful degradation in the presence of high error rates, or to implement the concept of approximate storage in DNA.
[ { "created": "Tue, 26 Apr 2022 12:34:46 GMT", "version": "v1" }, { "created": "Fri, 29 Apr 2022 22:09:56 GMT", "version": "v2" } ]
2022-05-03
[ [ "Lin", "Dehui", "" ], [ "Tabatabaee", "Yasamin", "" ], [ "Pote", "Yash", "" ], [ "Jevdjic", "Djordje", "" ] ]
DNA is emerging as an increasingly attractive medium for data storage due to a number of important and unique advantages it offers, most notably its unprecedented durability and density. While the technology is evolving rapidly, the prohibitive cost of reads and writes, and the high frequency and peculiar nature of errors occurring in DNA storage, pose a significant challenge to its adoption. In this work we make a novel observation that the probability of successfully recovering a given bit from any type of DNA-based storage system highly depends on its physical location within the DNA molecule. In other words, when used as a storage medium, some parts of DNA molecules appear significantly more reliable than others. We show that large differences in reliability between different parts of DNA molecules lead to highly inefficient use of error-correction resources, and that commonly used techniques such as unequal error-correction cannot be used to bridge the reliability gap between different locations in the context of DNA storage. We then propose two approaches to address the problem. The first approach is general and applies to any type of data; it stripes the data and ECC codewords across DNA molecules in a particular fashion such that the effects of errors are spread out evenly across different codewords and molecules, effectively de-biasing the underlying storage medium. The second approach is application-specific, and seeks to leverage the underlying reliability bias by using application-aware mapping of data onto DNA molecules, such that data that requires higher reliability is stored in more reliable locations, whereas data that needs lower reliability is stored in less reliable parts of DNA molecules. We show that the proposed data mapping can be used to achieve graceful degradation in the presence of high error rates, or to implement the concept of approximate storage in DNA.
1509.02995
Gene Cheung
Wei Dai, Gene Cheung, Ngai-Man Cheung, Antonio Ortega, Oscar C. Au
Merge Frame Design for Video Stream Switching using Piecewise Constant Functions
null
null
10.1109/TIP.2016.2571564
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to efficiently switch from one pre-encoded video stream to another (e.g., for bitrate adaptation or view switching) is important for many interactive streaming applications. Recently, stream-switching mechanisms based on distributed source coding (DSC) have been proposed. In order to reduce the overall transmission rate, these approaches provide a "merge" mechanism, where information is sent to the decoder such that the exact same frame can be reconstructed given that any one of a known set of side information (SI) frames is available at the decoder (e.g., each SI frame may correspond to a different stream from which we are switching). However, the use of bit-plane coding and channel coding in many DSC approaches leads to complex coding and decoding. In this paper, we propose an alternative approach for merging multiple SI frames, using a piecewise constant (PWC) function as the merge operator. In our approach, for each block to be reconstructed, a series of parameters of these PWC merge functions are transmitted in order to guarantee identical reconstruction given the known side information blocks. We consider two different scenarios. In the first case, a target frame is first given, and then merge parameters are chosen so that this frame can be reconstructed exactly at the decoder. In contrast, in the second scenario, the reconstructed frame and merge parameters are jointly optimized to meet a rate-distortion criterion. Experiments show that for both scenarios, our proposed merge techniques can outperform both a recent approach based on DSC and the SP-frame approach in H.264, in terms of compression efficiency and decoder complexity.
[ { "created": "Thu, 10 Sep 2015 03:27:33 GMT", "version": "v1" } ]
2016-06-29
[ [ "Dai", "Wei", "" ], [ "Cheung", "Gene", "" ], [ "Cheung", "Ngai-Man", "" ], [ "Ortega", "Antonio", "" ], [ "Au", "Oscar C.", "" ] ]
The ability to efficiently switch from one pre-encoded video stream to another (e.g., for bitrate adaptation or view switching) is important for many interactive streaming applications. Recently, stream-switching mechanisms based on distributed source coding (DSC) have been proposed. In order to reduce the overall transmission rate, these approaches provide a "merge" mechanism, where information is sent to the decoder such that the exact same frame can be reconstructed given that any one of a known set of side information (SI) frames is available at the decoder (e.g., each SI frame may correspond to a different stream from which we are switching). However, the use of bit-plane coding and channel coding in many DSC approaches leads to complex coding and decoding. In this paper, we propose an alternative approach for merging multiple SI frames, using a piecewise constant (PWC) function as the merge operator. In our approach, for each block to be reconstructed, a series of parameters of these PWC merge functions are transmitted in order to guarantee identical reconstruction given the known side information blocks. We consider two different scenarios. In the first case, a target frame is first given, and then merge parameters are chosen so that this frame can be reconstructed exactly at the decoder. In contrast, in the second scenario, the reconstructed frame and merge parameters are jointly optimized to meet a rate-distortion criterion. Experiments show that for both scenarios, our proposed merge techniques can outperform both a recent approach based on DSC and the SP-frame approach in H.264, in terms of compression efficiency and decoder complexity.
1704.03647
Sleiman Mhanna Dr.
Sleiman Mhanna, Gregor Verbic and Archie Chapman
A Component-Based Dual Decomposition Method for the OPF Problem
null
null
null
null
cs.DC math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a component-based dual decomposition of the nonconvex AC optimal power flow (OPF) problem, where the modified dual function is solved in a distributed fashion. The main contribution of this work is that it demonstrates that a distributed method with carefully tuned parameters can converge to globally optimal solutions despite the inherent nonconvexity of the problem and the absence of theoretical guarantees of convergence. This paper is the first to conduct extensive numerical analysis resulting in the identification and tabulation of the algorithmic parameter settings that are crucial for the convergence of the method on 72 AC OPF test instances. Moreover, this work provides a deeper insight into the geometry of the modified Lagrange dual function of the OPF problem and highlights the conditions that make this function differentiable. This numerical demonstration of convergence coupled with the scalability and the privacy preserving nature of the proposed method makes it well suited for smart grid applications such as multi-period OPF with demand response (DR) and security constrained unit commitment (SCUC) with contingency constraints and multiple transmission system operators (TSOs).
[ { "created": "Wed, 12 Apr 2017 07:30:53 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2017 11:49:25 GMT", "version": "v2" }, { "created": "Wed, 3 May 2017 02:25:00 GMT", "version": "v3" }, { "created": "Sun, 18 Jun 2017 06:08:14 GMT", "version": "v4" }, { "created": "Thu, 29 Jun 2017 14:32:37 GMT", "version": "v5" }, { "created": "Fri, 30 Jun 2017 02:48:58 GMT", "version": "v6" }, { "created": "Fri, 7 Jul 2017 00:51:17 GMT", "version": "v7" }, { "created": "Tue, 22 Aug 2017 04:41:46 GMT", "version": "v8" } ]
2017-08-23
[ [ "Mhanna", "Sleiman", "" ], [ "Verbic", "Gregor", "" ], [ "Chapman", "Archie", "" ] ]
This paper proposes a component-based dual decomposition of the nonconvex AC optimal power flow (OPF) problem, where the modified dual function is solved in a distributed fashion. The main contribution of this work is that it demonstrates that a distributed method with carefully tuned parameters can converge to globally optimal solutions despite the inherent nonconvexity of the problem and the absence of theoretical guarantees of convergence. This paper is the first to conduct extensive numerical analysis resulting in the identification and tabulation of the algorithmic parameter settings that are crucial for the convergence of the method on 72 AC OPF test instances. Moreover, this work provides a deeper insight into the geometry of the modified Lagrange dual function of the OPF problem and highlights the conditions that make this function differentiable. This numerical demonstration of convergence coupled with the scalability and the privacy preserving nature of the proposed method makes it well suited for smart grid applications such as multi-period OPF with demand response (DR) and security constrained unit commitment (SCUC) with contingency constraints and multiple transmission system operators (TSOs).
2307.08504
Chaoya Jiang
Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Fei Huang, Songfang Huang
BUS:Efficient and Effective Vision-language Pre-training with Bottom-Up Patch Summarization
Accepted on ICCV2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Vision Transformer (ViT) based Vision-Language Pre-training (VLP) models have demonstrated impressive performance in various tasks. However, the lengthy visual token sequences fed into ViT can lead to training inefficiency and ineffectiveness. Existing efforts address the challenge by either bottom-level patch extraction in the ViT backbone or top-level patch abstraction outside, not balancing training efficiency and effectiveness well. Inspired by text summarization in natural language processing, we propose a Bottom-Up Patch Summarization approach named BUS, coordinating bottom-level extraction and top-level abstraction to learn a concise summary of lengthy visual token sequences efficiently. Specifically, we incorporate a Text-Semantics-Aware Patch Selector (TSPS) into the ViT backbone to perform a coarse-grained visual token extraction and then attach a flexible Transformer-based Patch Abstraction Decoder (PAD) upon the backbone for top-level visual abstraction. This bottom-up collaboration enables our BUS to yield high training efficiency while maintaining or even improving effectiveness. We evaluate our approach on various visual-language understanding and generation tasks and show competitive downstream task performance while boosting the training efficiency by 50\%. Additionally, our model achieves state-of-the-art performance on many downstream tasks by increasing input image resolution without increasing computational costs over baselines.
[ { "created": "Mon, 17 Jul 2023 14:08:17 GMT", "version": "v1" }, { "created": "Sat, 24 Feb 2024 03:54:37 GMT", "version": "v2" } ]
2024-02-27
[ [ "Jiang", "Chaoya", "" ], [ "Xu", "Haiyang", "" ], [ "Ye", "Wei", "" ], [ "Ye", "Qinghao", "" ], [ "Li", "Chenliang", "" ], [ "Yan", "Ming", "" ], [ "Bi", "Bin", "" ], [ "Zhang", "Shikun", "" ], [ "Huang", "Fei", "" ], [ "Huang", "Songfang", "" ] ]
Vision Transformer (ViT) based Vision-Language Pre-training (VLP) models have demonstrated impressive performance in various tasks. However, the lengthy visual token sequences fed into ViT can lead to training inefficiency and ineffectiveness. Existing efforts address the challenge by either bottom-level patch extraction in the ViT backbone or top-level patch abstraction outside, not balancing training efficiency and effectiveness well. Inspired by text summarization in natural language processing, we propose a Bottom-Up Patch Summarization approach named BUS, coordinating bottom-level extraction and top-level abstraction to learn a concise summary of lengthy visual token sequences efficiently. Specifically, we incorporate a Text-Semantics-Aware Patch Selector (TSPS) into the ViT backbone to perform a coarse-grained visual token extraction and then attach a flexible Transformer-based Patch Abstraction Decoder (PAD) upon the backbone for top-level visual abstraction. This bottom-up collaboration enables our BUS to yield high training efficiency while maintaining or even improving effectiveness. We evaluate our approach on various visual-language understanding and generation tasks and show competitive downstream task performance while boosting the training efficiency by 50\%. Additionally, our model achieves state-of-the-art performance on many downstream tasks by increasing input image resolution without increasing computational costs over baselines.
2101.02691
Zaiwei Zhang
Zaiwei Zhang, Rohit Girdhar, Armand Joulin, Ishan Misra
Self-Supervised Pretraining of 3D Features on any Point-Cloud
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretraining on large labeled datasets is a prerequisite to achieve good performance in many computer vision tasks like 2D object recognition, video classification etc. However, pretraining is not widely used for 3D recognition tasks where state-of-the-art methods train models from scratch. A primary reason is the lack of large annotated datasets because 3D data is both difficult to acquire and time consuming to label. We present a simple self-supervised pretraining method that can work with any 3D data - single or multiview, indoor or outdoor, acquired by varied sensors, without 3D registration. We pretrain standard point cloud and voxel based model architectures, and show that joint pretraining further improves performance. We evaluate our models on 9 benchmarks for object detection, semantic segmentation, and object classification, where they achieve state-of-the-art results and can outperform supervised pretraining. We set a new state-of-the-art for object detection on ScanNet (69.0% mAP) and SUNRGBD (63.5% mAP). Our pretrained models are label efficient and improve performance for classes with few examples.
[ { "created": "Thu, 7 Jan 2021 18:55:21 GMT", "version": "v1" } ]
2021-01-08
[ [ "Zhang", "Zaiwei", "" ], [ "Girdhar", "Rohit", "" ], [ "Joulin", "Armand", "" ], [ "Misra", "Ishan", "" ] ]
Pretraining on large labeled datasets is a prerequisite to achieve good performance in many computer vision tasks like 2D object recognition, video classification etc. However, pretraining is not widely used for 3D recognition tasks where state-of-the-art methods train models from scratch. A primary reason is the lack of large annotated datasets because 3D data is both difficult to acquire and time consuming to label. We present a simple self-supervised pretraining method that can work with any 3D data - single or multiview, indoor or outdoor, acquired by varied sensors, without 3D registration. We pretrain standard point cloud and voxel based model architectures, and show that joint pretraining further improves performance. We evaluate our models on 9 benchmarks for object detection, semantic segmentation, and object classification, where they achieve state-of-the-art results and can outperform supervised pretraining. We set a new state-of-the-art for object detection on ScanNet (69.0% mAP) and SUNRGBD (63.5% mAP). Our pretrained models are label efficient and improve performance for classes with few examples.
2210.05404
Zifan Jiang
Zifan Jiang, Amit Moryossef, Mathias M\"uller, Sarah Ebling
Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting
Accepted at EACL 2023 (Findings)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system. Our work seeks to address the lack of out-of-the-box support for signed languages in current MT systems and is based on the SignBank dataset, which contains pairs of spoken language text and SignWriting content. We introduce novel methods to parse, factorize, decode, and evaluate SignWriting, leveraging ideas from neural factored MT. In a bilingual setup--translating from American Sign Language to (American) English--our method achieves over 30 BLEU, while in two multilingual setups--translating in both directions between spoken languages and signed languages--we achieve over 20 BLEU. We find that common MT techniques used to improve spoken language translation similarly affect the performance of sign language translation. These findings validate our use of an intermediate text representation for signed languages to include them in natural language processing research.
[ { "created": "Tue, 11 Oct 2022 12:28:06 GMT", "version": "v1" }, { "created": "Thu, 23 Feb 2023 10:08:01 GMT", "version": "v2" } ]
2023-02-24
[ [ "Jiang", "Zifan", "" ], [ "Moryossef", "Amit", "" ], [ "Müller", "Mathias", "" ], [ "Ebling", "Sarah", "" ] ]
This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system. Our work seeks to address the lack of out-of-the-box support for signed languages in current MT systems and is based on the SignBank dataset, which contains pairs of spoken language text and SignWriting content. We introduce novel methods to parse, factorize, decode, and evaluate SignWriting, leveraging ideas from neural factored MT. In a bilingual setup--translating from American Sign Language to (American) English--our method achieves over 30 BLEU, while in two multilingual setups--translating in both directions between spoken languages and signed languages--we achieve over 20 BLEU. We find that common MT techniques used to improve spoken language translation similarly affect the performance of sign language translation. These findings validate our use of an intermediate text representation for signed languages to include them in natural language processing research.
2404.01878
Shahzeb Naeem
Shahzeb Naeem, Ramzi Al-Sharawi, Muhammad Riyyan Khan, Usman Tariq, Abhinav Dhall and Hasan Al-Nashash
Real, fake and synthetic faces -- does the coin have three sides?
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
With the ever-growing power of generative artificial intelligence, deepfake and artificially generated (synthetic) media have continued to spread online, which creates various ethical and moral concerns regarding their usage. To tackle this, we thus present a novel exploration of the trends and patterns observed in real, deepfake and synthetic facial images. The proposed analysis is done in two parts: firstly, we incorporate eight deep learning models and analyze their performances in distinguishing between the three classes of images. Next, we look to further delve into the similarities and differences between these three sets of images by investigating their image properties both in the context of the entire image as well as in the context of specific regions within the image. An ANOVA test was also performed and provided further clarity on the patterns associated with the images of the three classes. From our findings, we observe that the investigated deep learning models found it easier to detect synthetic facial images, with the ViT Patch-16 model performing best on this task with a class-averaged sensitivity, specificity, precision, and accuracy of 97.37%, 98.69%, 97.48%, and 98.25%, respectively. This observation was supported by further analysis of various image properties. We saw noticeable differences across the three categories of images. This analysis can help us build better algorithms for facial image generation, and also shows that synthetic, deepfake and real face images are indeed three different classes.
[ { "created": "Tue, 2 Apr 2024 12:08:26 GMT", "version": "v1" } ]
2024-04-03
[ [ "Naeem", "Shahzeb", "" ], [ "Al-Sharawi", "Ramzi", "" ], [ "Khan", "Muhammad Riyyan", "" ], [ "Tariq", "Usman", "" ], [ "Dhall", "Abhinav", "" ], [ "Al-Nashash", "Hasan", "" ] ]
With the ever-growing power of generative artificial intelligence, deepfake and artificially generated (synthetic) media have continued to spread online, which creates various ethical and moral concerns regarding their usage. To tackle this, we thus present a novel exploration of the trends and patterns observed in real, deepfake and synthetic facial images. The proposed analysis is done in two parts: firstly, we incorporate eight deep learning models and analyze their performances in distinguishing between the three classes of images. Next, we look to further delve into the similarities and differences between these three sets of images by investigating their image properties both in the context of the entire image as well as in the context of specific regions within the image. An ANOVA test was also performed and provided further clarity on the patterns associated with the images of the three classes. From our findings, we observe that the investigated deep learning models found it easier to detect synthetic facial images, with the ViT Patch-16 model performing best on this task with a class-averaged sensitivity, specificity, precision, and accuracy of 97.37%, 98.69%, 97.48%, and 98.25%, respectively. This observation was supported by further analysis of various image properties. We saw noticeable differences across the three categories of images. This analysis can help us build better algorithms for facial image generation, and also shows that synthetic, deepfake and real face images are indeed three different classes.
1604.02253
Erlend Magnus Viggen
Tor Arne Reinen, Arne Lie, Finn Tore Knudsen
SensIs - Underwater acoustic network for ice-monitoring
10 pages, 5 figures, 2 tables; part of the Proceedings of the 39th Scandinavian Symposium on Physical Acoustics (arXiv:1604.01763)
null
null
null
cs.NI physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Routing for low-latency underwater acoustic network communication is investigated. The application is monitoring of ice threats to offshore operations in the Arctic - to provide warnings that enable operators to react to such threats. The scenario produces relatively high traffic load, and the network should favour low delay and adequate reliability rather than energy usage minimization. The ICRP (Information-Carrying based Routing Protocol), originally proposed by Wei Liang et al. in 2007, is chosen as the basis. ICRP obtains unicast routing paths by sending data payload as broadcast packets when no route information is available. Thus, data can be delivered without the cost of reactive signalling latency. In this paper we explore the capabilities of a slightly enhanced/adapted ICRP, tailored to the ice monitoring application. By simulations and experiments at sea it is demonstrated that the protocol performs well and can manage the application's high traffic load, provided that the point-to-point links provide sufficient bit rates and capacity headroom.
[ { "created": "Fri, 8 Apr 2016 07:25:47 GMT", "version": "v1" } ]
2016-04-11
[ [ "Reinen", "Tor Arne", "" ], [ "Lie", "Arne", "" ], [ "Knudsen", "Finn Tore", "" ] ]
Routing for low-latency underwater acoustic network communication is investigated. The application is monitoring of ice threats to offshore operations in the Arctic - to provide warnings that enable operators to react to such threats. The scenario produces relatively high traffic load, and the network should favour low delay and adequate reliability rather than energy usage minimization. The ICRP (Information-Carrying based Routing Protocol), originally proposed by Wei Liang et al. in 2007, is chosen as the basis. ICRP obtains unicast routing paths by sending data payload as broadcast packets when no route information is available. Thus, data can be delivered without the cost of reactive signalling latency. In this paper we explore the capabilities of a slightly enhanced/adapted ICRP, tailored to the ice monitoring application. By simulations and experiments at sea it is demonstrated that the protocol performs well and can manage the application's high traffic load, provided that the point-to-point links provide sufficient bit rates and capacity headroom.
2405.06306
Javier Coronado-Bl\'azquez
Javier Coronado-Bl\'azquez
A NLP Approach to "Review Bombing" in Metacritic PC Videogames User Ratings
11 pages, 4 figures. Accepted by Discover Artificial Intelligence but withdrawn due to APC
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Many videogames suffer "review bombing" - a large volume of unusually low scores that in many cases do not reflect the real quality of the product - when rated by users. By taking Metacritic's 50,000+ user score aggregations for PC games in the English language, we use a Natural Language Processing (NLP) approach to try to understand the main words and concepts appearing in such cases, reaching a 0.88 accuracy on a validation set when distinguishing between just bad ratings and review bombings. By uncovering and analyzing the patterns driving this phenomenon, these results could be used to further mitigate these situations.
[ { "created": "Fri, 10 May 2024 08:31:04 GMT", "version": "v1" } ]
2024-05-13
[ [ "Coronado-Blázquez", "Javier", "" ] ]
Many videogames suffer "review bombing" - a large volume of unusually low scores that in many cases do not reflect the real quality of the product - when rated by users. By taking Metacritic's 50,000+ user score aggregations for PC games in the English language, we use a Natural Language Processing (NLP) approach to try to understand the main words and concepts appearing in such cases, reaching a 0.88 accuracy on a validation set when distinguishing between just bad ratings and review bombings. By uncovering and analyzing the patterns driving this phenomenon, these results could be used to further mitigate these situations.
2407.19610
Mohammed Al-Maamari
Mohammed Al-Maamari, Mehdi Ben Amor, Michael Granitzer
Mixture of Modular Experts: Distilling Knowledge from a Multilingual Teacher into Specialized Modular Language Models
Preprint
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This research combines Knowledge Distillation (KD) and Mixture of Experts (MoE) to develop modular, efficient multilingual language models. Key objectives include evaluating adaptive versus fixed alpha methods in KD and comparing modular MoE architectures for handling multi-domain inputs and preventing catastrophic forgetting. KD compresses large language models (LLMs) into smaller, efficient models, while MoE enhances modularity with specialized tasks. Experiments showed similar performance for both KD methods, with marginal improvements from adaptive alpha. A combined loss approach provided more stable learning. The router, trained to classify input sequences into English, French, German, or Python, achieved 99.95% precision, recall, and F1 score, with Logistic Regression being the most effective classifier. Evaluations of modular MoE architectures revealed that Pre-trained Language Experts (PLE) and Joint Expert Embedding Training (JEET) performed similarly, while the MoE with Common Expert (MoE-CE) setup showed slightly lower performance. Including a common expert in MoE-CE improved its performance. Studies on catastrophic forgetting indicated that sequential training led to significant forgetting, while single-session training with balanced batches and the MoE approach mitigated this issue. The MoE architecture preserved knowledge across multiple languages effectively. The research contributes open-sourced resources including the dataset (https://zenodo.org/doi/10.5281/zenodo.12677631), a balanced dataset creation tool (https://github.com/padas-lab-de/multi-language-dataset-creator), and the research codebase (https://github.com/ModMaamari/mixture-modular-experts).
[ { "created": "Sun, 28 Jul 2024 23:42:09 GMT", "version": "v1" } ]
2024-07-30
[ [ "Al-Maamari", "Mohammed", "" ], [ "Amor", "Mehdi Ben", "" ], [ "Granitzer", "Michael", "" ] ]
This research combines Knowledge Distillation (KD) and Mixture of Experts (MoE) to develop modular, efficient multilingual language models. Key objectives include evaluating adaptive versus fixed alpha methods in KD and comparing modular MoE architectures for handling multi-domain inputs and preventing catastrophic forgetting. KD compresses large language models (LLMs) into smaller, efficient models, while MoE enhances modularity with specialized tasks. Experiments showed similar performance for both KD methods, with marginal improvements from adaptive alpha. A combined loss approach provided more stable learning. The router, trained to classify input sequences into English, French, German, or Python, achieved 99.95% precision, recall, and F1 score, with Logistic Regression being the most effective classifier. Evaluations of modular MoE architectures revealed that Pre-trained Language Experts (PLE) and Joint Expert Embedding Training (JEET) performed similarly, while the MoE with Common Expert (MoE-CE) setup showed slightly lower performance. Including a common expert in MoE-CE improved its performance. Studies on catastrophic forgetting indicated that sequential training led to significant forgetting, while single-session training with balanced batches and the MoE approach mitigated this issue. The MoE architecture preserved knowledge across multiple languages effectively. The research contributes open-sourced resources including the dataset (https://zenodo.org/doi/10.5281/zenodo.12677631), a balanced dataset creation tool (https://github.com/padas-lab-de/multi-language-dataset-creator), and the research codebase (https://github.com/ModMaamari/mixture-modular-experts).
2008.06254
Ye Liu
Ye Liu, Junsong Yuan, Chang Wen Chen
ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection
Accepted to Proceedings of the 28th ACM International Conference on Multimedia (MM 2020)
null
10.1145/3394171.3413600
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images. Most existing works treat HOIs as individual interaction categories, thus cannot handle the problem of long-tail distribution and polysemy of action labels. We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs. Leveraging the compositional and relational peculiarities of HOI labels, we propose ConsNet, a knowledge-aware framework that explicitly encodes the relations among objects, actions and interactions into an undirected graph called consistency graph, and exploits Graph Attention Networks (GATs) to propagate knowledge among HOI categories as well as their constituents. Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities. We extensively evaluate our model on the challenging V-COCO and HICO-DET datasets, and results validate that our approach outperforms state-of-the-art methods under both fully-supervised and zero-shot settings. Code is available at https://github.com/yeliudev/ConsNet.
[ { "created": "Fri, 14 Aug 2020 09:11:18 GMT", "version": "v1" }, { "created": "Tue, 15 Sep 2020 05:35:03 GMT", "version": "v2" }, { "created": "Wed, 7 Apr 2021 20:38:46 GMT", "version": "v3" }, { "created": "Sun, 27 Mar 2022 07:49:43 GMT", "version": "v4" } ]
2022-03-29
[ [ "Liu", "Ye", "" ], [ "Yuan", "Junsong", "" ], [ "Chen", "Chang Wen", "" ] ]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images. Most existing works treat HOIs as individual interaction categories, thus cannot handle the problem of long-tail distribution and polysemy of action labels. We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs. Leveraging the compositional and relational peculiarities of HOI labels, we propose ConsNet, a knowledge-aware framework that explicitly encodes the relations among objects, actions and interactions into an undirected graph called consistency graph, and exploits Graph Attention Networks (GATs) to propagate knowledge among HOI categories as well as their constituents. Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities. We extensively evaluate our model on the challenging V-COCO and HICO-DET datasets, and results validate that our approach outperforms state-of-the-art methods under both fully-supervised and zero-shot settings. Code is available at https://github.com/yeliudev/ConsNet.
1804.10795
Menghan Wang
Menghan Wang, Xiaolin Zheng, Kun Zhang
User-Sensitive Recommendation Ensemble with Clustered Multi-Task Learning
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers recommendation algorithm ensembles in a user-sensitive manner. Recently, researchers have proposed various effective recommendation algorithms, which utilize different aspects of the data and different techniques. However, the "user skewed prediction" problem may exist for almost all recommendation algorithms -- algorithms with the best average predictive accuracy may cover up the fact that they perform poorly for some users, which will lead to biased services in real scenarios. In this paper, we propose a user-sensitive ensemble method named "UREC" to address this issue. We first cluster users based on the recommendation predictions, then we use multi-task learning to learn the user-sensitive ensemble function for the users. In addition, to alleviate the negative effects of the new-user problem on clustering users, we propose an approximate approach based on a spectral relaxation. Experiments on real-world datasets demonstrate the superiority of our methods.
[ { "created": "Sat, 28 Apr 2018 12:35:27 GMT", "version": "v1" } ]
2018-05-01
[ [ "Wang", "Menghan", "" ], [ "Zheng", "Xiaolin", "" ], [ "Zhang", "Kun", "" ] ]
This paper considers recommendation algorithm ensembles in a user-sensitive manner. Recently, researchers have proposed various effective recommendation algorithms, which utilize different aspects of the data and different techniques. However, the "user skewed prediction" problem may exist for almost all recommendation algorithms -- algorithms with the best average predictive accuracy may cover up the fact that they perform poorly for some users, which will lead to biased services in real scenarios. In this paper, we propose a user-sensitive ensemble method named "UREC" to address this issue. We first cluster users based on the recommendation predictions, then we use multi-task learning to learn the user-sensitive ensemble function for the users. In addition, to alleviate the negative effects of the new-user problem on clustering users, we propose an approximate approach based on a spectral relaxation. Experiments on real-world datasets demonstrate the superiority of our methods.
1707.01898
Shiyu Ji
Shiyu Ji, Kun Wan
Adaptive Modular Exponentiation Methods v.s. Python's Power Function
4 pages
null
null
null
cs.DS cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we use Python to implement two efficient modular exponentiation methods: the adaptive m-ary method and the adaptive sliding-window method of window size k, where both m's are adaptively chosen based on the length of the exponent. We also benchmark both methods. Evaluation results show that compared to the industry-standard efficient implementations of the modular power function in CPython and PyPy, our algorithms can reduce computing time by 1-5% for exponents with more than 3072 bits.
[ { "created": "Thu, 6 Jul 2017 04:12:25 GMT", "version": "v1" } ]
2017-07-10
[ [ "Ji", "Shiyu", "" ], [ "Wan", "Kun", "" ] ]
In this paper we use Python to implement two efficient modular exponentiation methods: the adaptive m-ary method and the adaptive sliding-window method of window size k, where both m's are adaptively chosen based on the length of the exponent. We also benchmark both methods. Evaluation results show that compared to the industry-standard efficient implementations of the modular power function in CPython and PyPy, our algorithms can reduce computing time by 1-5% for exponents with more than 3072 bits.
2209.11083
Tobias Schr\"ader
Tobias Schr\"ader, Robert Graubohm, Nayel Fabian Salem, and Markus Maurer
Designing an Automated Vehicle: Strategies for Handling Tasks of a Previously Required Accompanying Person
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
When using a conventional passenger car, several groups of people are reliant on the assistance of an accompanying person, for example when getting in and out of the car. For the independent use of an automatically driving vehicle by those groups, the absence of a previously required accompanying person needs to be compensated for. During the design process of an autonomous family vehicle, we found that a low-barrier vehicle design can only partly compensate for the absence of a required human companion. In this paper, we present four strategies we identified for handling the tasks of a previously required accompanying individual. The presented top-down approach supports developers in identifying unresolved problems, in finding, structuring, and selecting solutions as well as in uncovering upcoming problems at an early stage in the development of novel concepts for driverless vehicles. As an example, we consider the hypothetical exit of persons in need of assistance. The application of the four strategies in this example demonstrates the far-reaching impact of consistently considering users in need of support in the development of automated vehicles.
[ { "created": "Thu, 22 Sep 2022 15:15:10 GMT", "version": "v1" } ]
2022-09-23
[ [ "Schräder", "Tobias", "" ], [ "Graubohm", "Robert", "" ], [ "Salem", "Nayel Fabian", "" ], [ "Maurer", "Markus", "" ] ]
When using a conventional passenger car, several groups of people are reliant on the assistance of an accompanying person, for example when getting in and out of the car. For the independent use of an automatically driving vehicle by those groups, the absence of a previously required accompanying person needs to be compensated for. During the design process of an autonomous family vehicle, we found that a low-barrier vehicle design can only partly compensate for the absence of a required human companion. In this paper, we present four strategies we identified for handling the tasks of a previously required accompanying person. The presented top-down approach supports developers in identifying unresolved problems, in finding, structuring, and selecting solutions, as well as in uncovering upcoming problems at an early stage in the development of novel concepts for driverless vehicles. As an example, we consider the hypothetical exit of persons in need of assistance. The application of the four strategies in this example demonstrates the far-reaching impact of consistently considering users in need of support in the development of automated vehicles.
2112.03553
Binh Le M.
Binh M. Le and Simon S. Woo
ADD: Frequency Attention and Multi-View based Knowledge Distillation to Detect Low-Quality Compressed Deepfake Images
null
Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant advancements in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation with low-quality compressed deepfake images. Because of the limited information in low-quality images, detecting low-quality deepfakes remains an important challenge. In this work, we apply frequency domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore transfer learning capability in KD to enable a student network to learn discriminative features from low-quality images effectively. In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation that effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation that creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art baselines in detecting low-quality compressed deepfake images.
[ { "created": "Tue, 7 Dec 2021 07:58:28 GMT", "version": "v1" } ]
2021-12-08
[ [ "Le", "Binh M.", "" ], [ "Woo", "Simon S.", "" ] ]
Despite significant advancements in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation with low-quality compressed deepfake images. Because of the limited information in low-quality images, detecting low-quality deepfakes remains an important challenge. In this work, we apply frequency domain learning and optimal transport theory in knowledge distillation (KD) to specifically improve the detection of low-quality compressed deepfake images. We explore transfer learning capability in KD to enable a student network to learn discriminative features from low-quality images effectively. In particular, we propose the Attention-based Deepfake detection Distiller (ADD), which consists of two novel distillations: 1) frequency attention distillation that effectively retrieves the removed high-frequency components in the student network, and 2) multi-view attention distillation that creates multiple attention vectors by slicing the teacher's and student's tensors under different views to transfer the teacher tensor's distribution to the student more efficiently. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art baselines in detecting low-quality compressed deepfake images.
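The "multi-view" idea of collapsing a feature tensor into attention vectors along different axes can be sketched with NumPy. This is a hedged illustration of the concept only (mean of squared activations per view, L1 matching loss); the paper's exact slicing scheme and optimal-transport formulation differ:

```python
import numpy as np

def multi_view_attention(t):
    """Collapse a (C, H, W) feature tensor into one attention vector per
    'view' by averaging squared activations over the other two axes.
    A sketch of the multi-view idea, not the paper's exact slicing."""
    sq = t ** 2
    views = {
        "channel": sq.mean(axis=(1, 2)),  # length C
        "height":  sq.mean(axis=(0, 2)),  # length H
        "width":   sq.mean(axis=(0, 1)),  # length W
    }
    # L2-normalize each attention vector so scales are comparable.
    return {k: v / (np.linalg.norm(v) + 1e-12) for k, v in views.items()}

def multi_view_distill_loss(teacher, student):
    """Sum of L1 distances between matching teacher/student attention vectors."""
    tv, sv = multi_view_attention(teacher), multi_view_attention(student)
    return float(sum(np.abs(tv[k] - sv[k]).sum() for k in tv))
```

Each view exposes a different marginal of the teacher tensor's energy distribution, which is what the student is pushed to match.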
1911.08568
Michael Diodato
Michael Diodato, Yu Li, Antonia Lovjer, Minsu Yeom, Albert Song, Yiyang Zeng, Abhay Khosla, Benedikt Schifferer, Manik Goyal, Iddo Drori
Accurate Trajectory Prediction for Autonomous Vehicles
arXiv admin note: text overlap with arXiv:1910.10318, arXiv:1910.10317
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting vehicle trajectories, angle and speed is important for safe and comfortable driving. We demonstrate the best predicted angle, the best predicted speed, and the best overall performance, winning the top three places of the ICCV 2019 Learning to Drive challenge. Our key contributions are (i) a general neural network system architecture that embeds and fuses together multiple inputs by encoding, and decodes multiple outputs using neural networks, (ii) using pre-trained neural networks for augmenting the given input data with segmentation maps and semantic information, and (iii) leveraging the form and distribution of the expected output in the model.
[ { "created": "Mon, 18 Nov 2019 06:38:33 GMT", "version": "v1" } ]
2019-11-21
[ [ "Diodato", "Michael", "" ], [ "Li", "Yu", "" ], [ "Lovjer", "Antonia", "" ], [ "Yeom", "Minsu", "" ], [ "Song", "Albert", "" ], [ "Zeng", "Yiyang", "" ], [ "Khosla", "Abhay", "" ], [ "Schifferer", "Benedikt", "" ], [ "Goyal", "Manik", "" ], [ "Drori", "Iddo", "" ] ]
Predicting vehicle trajectories, angle and speed is important for safe and comfortable driving. We demonstrate the best predicted angle, the best predicted speed, and the best overall performance, winning the top three places of the ICCV 2019 Learning to Drive challenge. Our key contributions are (i) a general neural network system architecture that embeds and fuses together multiple inputs by encoding, and decodes multiple outputs using neural networks, (ii) using pre-trained neural networks for augmenting the given input data with segmentation maps and semantic information, and (iii) leveraging the form and distribution of the expected output in the model.
1912.10211
Qiuqiang Kong
Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, Mark D. Plumbley
PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition
14 pages
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Audio pattern recognition is an important research topic in the machine learning area, and includes several tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification and sound event detection. Recently, neural networks have been applied to tackle audio pattern recognition problems. However, previous systems are built on specific datasets with limited durations. Recently, in computer vision and natural language processing, systems pretrained on large-scale datasets have generalized well to several tasks. However, there is limited research on pretraining systems on large-scale datasets for audio pattern recognition. In this paper, we propose pretrained audio neural networks (PANNs) trained on the large-scale AudioSet dataset. These PANNs are transferred to other audio related tasks. We investigate the performance and computational complexity of PANNs modeled by a variety of convolutional neural networks. We propose an architecture called Wavegram-Logmel-CNN using both log-mel spectrogram and waveform as input feature. Our best PANN system achieves a state-of-the-art mean average precision (mAP) of 0.439 on AudioSet tagging, outperforming the best previous system of 0.392. We transfer PANNs to six audio pattern recognition tasks, and demonstrate state-of-the-art performance in several of those tasks. We have released the source code and pretrained models of PANNs: https://github.com/qiuqiangkong/audioset_tagging_cnn.
[ { "created": "Sat, 21 Dec 2019 06:53:14 GMT", "version": "v1" }, { "created": "Sat, 4 Jan 2020 08:37:04 GMT", "version": "v2" }, { "created": "Mon, 13 Jan 2020 06:29:17 GMT", "version": "v3" }, { "created": "Thu, 9 Jul 2020 01:44:14 GMT", "version": "v4" }, { "created": "Sun, 23 Aug 2020 11:35:41 GMT", "version": "v5" } ]
2020-08-25
[ [ "Kong", "Qiuqiang", "" ], [ "Cao", "Yin", "" ], [ "Iqbal", "Turab", "" ], [ "Wang", "Yuxuan", "" ], [ "Wang", "Wenwu", "" ], [ "Plumbley", "Mark D.", "" ] ]
Audio pattern recognition is an important research topic in the machine learning area, and includes several tasks such as audio tagging, acoustic scene classification, music classification, speech emotion classification and sound event detection. Recently, neural networks have been applied to tackle audio pattern recognition problems. However, previous systems are built on specific datasets with limited durations. Recently, in computer vision and natural language processing, systems pretrained on large-scale datasets have generalized well to several tasks. However, there is limited research on pretraining systems on large-scale datasets for audio pattern recognition. In this paper, we propose pretrained audio neural networks (PANNs) trained on the large-scale AudioSet dataset. These PANNs are transferred to other audio related tasks. We investigate the performance and computational complexity of PANNs modeled by a variety of convolutional neural networks. We propose an architecture called Wavegram-Logmel-CNN using both log-mel spectrogram and waveform as input feature. Our best PANN system achieves a state-of-the-art mean average precision (mAP) of 0.439 on AudioSet tagging, outperforming the best previous system of 0.392. We transfer PANNs to six audio pattern recognition tasks, and demonstrate state-of-the-art performance in several of those tasks. We have released the source code and pretrained models of PANNs: https://github.com/qiuqiangkong/audioset_tagging_cnn.
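As a concrete picture of the log-mel input feature such systems consume, here is a minimal NumPy front end (Hann-windowed magnitude STFT followed by a triangular mel filterbank). The parameters below are illustrative assumptions; the paper's systems use standard library implementations with different defaults:

```python
import numpy as np

def log_mel_spectrogram(wave, sr=16000, n_fft=512, hop=256, n_mels=16):
    """Minimal log-mel front end (a sketch; parameters are illustrative)."""
    # Frame the signal and take the one-sided magnitude STFT.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))        # (frames, n_fft//2 + 1)

    # Triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return np.log(fb @ mag.T + 1e-10)                # (n_mels, frames)
```

The Wavegram branch of the proposed architecture learns a comparable time-frequency representation directly from the raw waveform instead of fixing it like this.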
1712.01877
Hadi Sarieddeen Mr.
H. Sarieddeen, M. M. Mansour, and A. Chehab
Large MIMO Detection Schemes Based on Channel Puncturing: Performance and Complexity Analysis
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A family of low-complexity detection schemes based on channel matrix puncturing targeted for large multiple-input multiple-output (MIMO) systems is proposed. It is well-known that the computational cost of MIMO detection based on QR decomposition is directly proportional to the number of non-zero entries involved in back-substitution and slicing operations in the triangularized channel matrix, which can be too high for low-latency applications involving large MIMO dimensions. By systematically puncturing the channel to have a specific structure, it is demonstrated that the detection process can be accelerated by employing standard schemes such as chase detection, list detection, nulling-and-cancellation detection, and sub-space detection on the transformed matrix. The performance of these schemes is characterized and analyzed mathematically, and bounds on the achievable diversity gain and probability of bit error are derived. Surprisingly, it is shown that puncturing does not negatively impact the receive diversity gain in hard-output detectors. The analysis is extended to soft-output detection when computing per-layer bit log-likelihood ratios; it is shown that significant performance gains are attainable by ordering the layer of interest to be at the root when puncturing the channel. Simulations of coded and uncoded scenarios certify that the proposed schemes scale up efficiently both in the number of antennas and constellation size, as well as in the presence of correlated channels. In particular, soft-output per-layer sub-space detection is shown to achieve a 2.5dB SNR gain at $10^{-4}$ bit error rate in $256$-QAM $16\!\times\!16$ MIMO, while saving $77\%$ of nulling-and-cancellation computations.
[ { "created": "Tue, 5 Dec 2017 19:27:17 GMT", "version": "v1" } ]
2017-12-07
[ [ "Sarieddeen", "H.", "" ], [ "Mansour", "M. M.", "" ], [ "Chehab", "A.", "" ] ]
A family of low-complexity detection schemes based on channel matrix puncturing targeted for large multiple-input multiple-output (MIMO) systems is proposed. It is well-known that the computational cost of MIMO detection based on QR decomposition is directly proportional to the number of non-zero entries involved in back-substitution and slicing operations in the triangularized channel matrix, which can be too high for low-latency applications involving large MIMO dimensions. By systematically puncturing the channel to have a specific structure, it is demonstrated that the detection process can be accelerated by employing standard schemes such as chase detection, list detection, nulling-and-cancellation detection, and sub-space detection on the transformed matrix. The performance of these schemes is characterized and analyzed mathematically, and bounds on the achievable diversity gain and probability of bit error are derived. Surprisingly, it is shown that puncturing does not negatively impact the receive diversity gain in hard-output detectors. The analysis is extended to soft-output detection when computing per-layer bit log-likelihood ratios; it is shown that significant performance gains are attainable by ordering the layer of interest to be at the root when puncturing the channel. Simulations of coded and uncoded scenarios certify that the proposed schemes scale up efficiently both in the number of antennas and constellation size, as well as in the presence of correlated channels. In particular, soft-output per-layer sub-space detection is shown to achieve a 2.5dB SNR gain at $10^{-4}$ bit error rate in $256$-QAM $16\!\times\!16$ MIMO, while saving $77\%$ of nulling-and-cancellation computations.
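The cost argument is easy to make concrete: back-substitution work scales with the number of non-zero upper-triangular entries. The sketch below counts multiply-accumulates for a full triangular matrix versus a hypothetical punctured pattern (diagonal plus last column); the pattern is an illustration of the kind of structure exploited here, not the paper's exact WR transformation:

```python
import numpy as np

def back_substitution(R, y):
    """Solve R x = y for upper-triangular R, skipping zero entries and
    counting the multiply-accumulate operations actually performed."""
    n = len(y)
    x = np.zeros(n)
    ops = 0
    for i in range(n - 1, -1, -1):
        acc = y[i]
        for j in range(i + 1, n):
            if R[i, j] != 0:
                acc -= R[i, j] * x[j]
                ops += 1
        x[i] = acc / R[i, i]
    return x, ops

def puncture(R):
    """Hypothetical puncturing pattern: keep the diagonal and the last
    column, zero every other upper-triangular entry. An illustration of
    the kind of structure exploited, not the paper's transformation."""
    P = np.zeros_like(R)
    np.fill_diagonal(P, np.diag(R))
    P[:, -1] = R[:, -1]
    return P
```

For n = 16 the full matrix needs n(n-1)/2 = 120 multiply-accumulates while the punctured one needs only n-1 = 15, which is the source of the savings in this simplified accounting.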
2111.15546
Byol Kim
Byol Kim and Rina Foygel Barber
Black-box tests for algorithmic stability
37 pages. Minor edits to match the journal-submitted version
null
null
null
cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmic stability is a concept from learning theory that expresses the degree to which changes to the input data (e.g., removal of a single data point) may affect the outputs of a regression algorithm. Knowing an algorithm's stability properties is often useful for many downstream applications -- for example, stability is known to lead to desirable generalization properties and predictive inference guarantees. However, many modern algorithms currently used in practice are too complex for a theoretical analysis of their stability properties, and thus we can only attempt to establish these properties through an empirical exploration of the algorithm's behavior on various data sets. In this work, we lay out a formal statistical framework for this kind of "black-box testing" without any assumptions on the algorithm or the data distribution and establish fundamental bounds on the ability of any black-box test to identify algorithmic stability.
[ { "created": "Tue, 30 Nov 2021 16:36:58 GMT", "version": "v1" }, { "created": "Wed, 25 May 2022 16:18:07 GMT", "version": "v2" }, { "created": "Thu, 25 Aug 2022 04:39:06 GMT", "version": "v3" }, { "created": "Sat, 10 Sep 2022 17:08:10 GMT", "version": "v4" }, { "created": "Fri, 2 Dec 2022 18:21:44 GMT", "version": "v5" }, { "created": "Wed, 21 Dec 2022 19:51:33 GMT", "version": "v6" } ]
2022-12-23
[ [ "Kim", "Byol", "" ], [ "Barber", "Rina Foygel", "" ] ]
Algorithmic stability is a concept from learning theory that expresses the degree to which changes to the input data (e.g., removal of a single data point) may affect the outputs of a regression algorithm. Knowing an algorithm's stability properties is often useful for many downstream applications -- for example, stability is known to lead to desirable generalization properties and predictive inference guarantees. However, many modern algorithms currently used in practice are too complex for a theoretical analysis of their stability properties, and thus we can only attempt to establish these properties through an empirical exploration of the algorithm's behavior on various data sets. In this work, we lay out a formal statistical framework for this kind of "black-box testing" without any assumptions on the algorithm or the data distribution and establish fundamental bounds on the ability of any black-box test to identify algorithmic stability.
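The black-box setting can be illustrated with a tiny Monte-Carlo leave-one-out probe. This is only a sketch of the kind of empirical exploration the paper formalizes; the function name and the ε-threshold notion of stability below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def empirical_stability(fit_predict, X, y, x_test, eps, trials=200, seed=0):
    """Estimate P(|f_full(x) - f_loo(x)| > eps) by repeatedly removing a
    random training point and comparing predictions at a test point.
    A sketch of the black-box idea; the paper derives bounds on what any
    such test can certify from finitely many algorithm calls."""
    rng = np.random.default_rng(seed)
    full = fit_predict(X, y, x_test)
    exceed = 0
    for _ in range(trials):
        i = rng.integers(len(y))
        mask = np.arange(len(y)) != i
        loo = fit_predict(X[mask], y[mask], x_test)
        exceed += abs(full - loo) > eps
    return exceed / trials

# Hypothetical example algorithm: predict the sample mean of y
# (ignores X and x_test), a deliberately very stable rule.
mean_rule = lambda X, y, x: float(np.mean(y))
```

The point of the paper is precisely that such a probe, however many trials it runs, can only certify stability up to fundamental limits when the algorithm and distribution are unknown.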
2310.05723
Trevor McInroe
Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey
Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
10 pages, 17 figures, published at RLC 2024
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but we show this can unnecessarily limit policy performance if the behavior policy is far from optimal. Instead, we forgo constraints and frame OtO RL as an exploration problem that aims to maximize the benefit of online data-collection. We first study the major online RL exploration methods based on intrinsic rewards and UCB in the OtO setting, showing that intrinsic rewards add training instability through reward-function modification, and UCB methods are myopic and it is unclear which learned-component's ensemble to use for action selection. We then introduce an algorithm for planning to go out-of-distribution (PTGOOD) that avoids these issues. PTGOOD uses a non-myopic planning procedure that targets exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy without altering rewards. We show empirically in several continuous control tasks that PTGOOD significantly improves agent returns during online fine-tuning and avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
[ { "created": "Mon, 9 Oct 2023 13:47:05 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2024 09:48:34 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2024 13:13:15 GMT", "version": "v3" } ]
2024-06-24
[ [ "McInroe", "Trevor", "" ], [ "Jelley", "Adam", "" ], [ "Albrecht", "Stefano V.", "" ], [ "Storkey", "Amos", "" ] ]
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but we show this can unnecessarily limit policy performance if the behavior policy is far from optimal. Instead, we forgo constraints and frame OtO RL as an exploration problem that aims to maximize the benefit of online data-collection. We first study the major online RL exploration methods based on intrinsic rewards and UCB in the OtO setting, showing that intrinsic rewards add training instability through reward-function modification, and UCB methods are myopic and it is unclear which learned-component's ensemble to use for action selection. We then introduce an algorithm for planning to go out-of-distribution (PTGOOD) that avoids these issues. PTGOOD uses a non-myopic planning procedure that targets exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy without altering rewards. We show empirically in several continuous control tasks that PTGOOD significantly improves agent returns during online fine-tuning and avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
1402.2440
Regina Ammer
Regina Ammer, Matthias Markl, Vera J\"uchter, Carolin K\"orner, Ulrich R\"ude
Validation Experiments for LBM Simulations of Electron Beam Melting
submitted to "International Journal of Modern Physics C"
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper validates 3D simulation results of electron beam melting (EBM) processes by comparing experimental and numerical data. The physical setup is presented, which is discretized by a three-dimensional (3D) thermal lattice Boltzmann method (LBM). An experimental process window is used for the validation, depending on the line energy injected into the metal powder bed and the scan velocity of the electron beam. In the process window the EBM products are classified into the categories porous, good, and swelling, depending on the quality of the surface. The same parameter sets are used to generate a numerical process window. A comparison of numerical and experimental process windows shows good agreement. This validates the EBM model and justifies simulations for future improvements of EBM processes. In particular, numerical simulations can be used to explain future process window scenarios and find the best parameter set for a good surface quality and dense products.
[ { "created": "Tue, 11 Feb 2014 10:59:00 GMT", "version": "v1" } ]
2014-02-12
[ [ "Ammer", "Regina", "" ], [ "Markl", "Matthias", "" ], [ "Jüchter", "Vera", "" ], [ "Körner", "Carolin", "" ], [ "Rüde", "Ulrich", "" ] ]
This paper validates 3D simulation results of electron beam melting (EBM) processes by comparing experimental and numerical data. The physical setup is presented, which is discretized by a three-dimensional (3D) thermal lattice Boltzmann method (LBM). An experimental process window is used for the validation, depending on the line energy injected into the metal powder bed and the scan velocity of the electron beam. In the process window the EBM products are classified into the categories porous, good, and swelling, depending on the quality of the surface. The same parameter sets are used to generate a numerical process window. A comparison of numerical and experimental process windows shows good agreement. This validates the EBM model and justifies simulations for future improvements of EBM processes. In particular, numerical simulations can be used to explain future process window scenarios and find the best parameter set for a good surface quality and dense products.
1903.00802
Aviral Kumar
Aviral Kumar, Sunita Sarawagi
Calibration of Encoder Decoder Models for Neural Machine Translation
12 Pages
null
null
null
cs.LG cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the calibration of several state-of-the-art neural machine translation (NMT) systems built on attention-based encoder-decoder models. For structured outputs like those in NMT, calibration is important not just for reliable confidence in predictions, but also for the proper functioning of beam-search inference. We show that most modern NMT models are surprisingly miscalibrated even when conditioned on the true previous tokens. Our investigation points to two main causes: severe miscalibration of EOS (the end-of-sequence marker) and suppression of attention uncertainty. We design recalibration methods based on these signals and demonstrate improved accuracy, better sequence-level calibration, and more intuitive results from beam search.
[ { "created": "Sun, 3 Mar 2019 01:08:47 GMT", "version": "v1" } ]
2019-03-06
[ [ "Kumar", "Aviral", "" ], [ "Sarawagi", "Sunita", "" ] ]
We study the calibration of several state-of-the-art neural machine translation (NMT) systems built on attention-based encoder-decoder models. For structured outputs like those in NMT, calibration is important not just for reliable confidence in predictions, but also for the proper functioning of beam-search inference. We show that most modern NMT models are surprisingly miscalibrated even when conditioned on the true previous tokens. Our investigation points to two main causes: severe miscalibration of EOS (the end-of-sequence marker) and suppression of attention uncertainty. We design recalibration methods based on these signals and demonstrate improved accuracy, better sequence-level calibration, and more intuitive results from beam search.
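A standard diagnostic behind such calibration studies is the binned expected calibration error (ECE). The sketch below is a generic token-level ECE in NumPy, not the paper's sequence-level procedure:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: per-bin |accuracy - mean confidence|, weighted by the
    fraction of predictions falling in the bin. A generic diagnostic,
    not the paper's sequence-level calibration measure."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model has ECE 0: within every confidence bin, the empirical accuracy equals the average stated confidence.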
2010.14864
Constantinos Daskalakis
Constantinos Daskalakis and Qinxuan Pan
Sample-Optimal and Efficient Learning of Tree Ising models
null
null
null
null
cs.LG cs.DS cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that $n$-variable tree-structured Ising models can be learned computationally-efficiently to within total variation distance $\epsilon$ from an optimal $O(n \ln n/\epsilon^2)$ samples, where $O(\cdot)$ hides an absolute constant which, importantly, does not depend on the model being learned - neither its tree nor the magnitude of its edge strengths, on which we place no assumptions. Our guarantees hold, in fact, for the celebrated Chow-Liu [1968] algorithm, using the plug-in estimator for estimating mutual information. While this (or any other) algorithm may fail to identify the structure of the underlying model correctly from a finite sample, we show that it will still learn a tree-structured model that is $\epsilon$-close to the true one in total variation distance, a guarantee called "proper learning." Our guarantees do not follow from known results for the Chow-Liu algorithm and the ensuing literature on learning graphical models, including a recent renaissance of algorithms on this learning challenge, which only yield asymptotic consistency results, or sample-inefficient and/or time-inefficient algorithms, unless further assumptions are placed on the graphical model, such as bounds on the "strengths" of the model's edges/hyperedges. While we establish guarantees for a widely known and simple algorithm, the analysis that this algorithm succeeds and is sample-optimal is quite complex, requiring a hierarchical classification of the edges into layers with different reconstruction guarantees, depending on their strength, combined with delicate uses of the subadditivity of the squared Hellinger distance over graphical models to control the error accumulation.
[ { "created": "Wed, 28 Oct 2020 10:17:48 GMT", "version": "v1" }, { "created": "Sun, 29 Nov 2020 22:50:21 GMT", "version": "v2" } ]
2020-12-01
[ [ "Daskalakis", "Constantinos", "" ], [ "Pan", "Qinxuan", "" ] ]
We show that $n$-variable tree-structured Ising models can be learned computationally-efficiently to within total variation distance $\epsilon$ from an optimal $O(n \ln n/\epsilon^2)$ samples, where $O(\cdot)$ hides an absolute constant which, importantly, does not depend on the model being learned - neither its tree nor the magnitude of its edge strengths, on which we place no assumptions. Our guarantees hold, in fact, for the celebrated Chow-Liu [1968] algorithm, using the plug-in estimator for estimating mutual information. While this (or any other) algorithm may fail to identify the structure of the underlying model correctly from a finite sample, we show that it will still learn a tree-structured model that is $\epsilon$-close to the true one in total variation distance, a guarantee called "proper learning." Our guarantees do not follow from known results for the Chow-Liu algorithm and the ensuing literature on learning graphical models, including a recent renaissance of algorithms on this learning challenge, which only yield asymptotic consistency results, or sample-inefficient and/or time-inefficient algorithms, unless further assumptions are placed on the graphical model, such as bounds on the "strengths" of the model's edges/hyperedges. While we establish guarantees for a widely known and simple algorithm, the analysis that this algorithm succeeds and is sample-optimal is quite complex, requiring a hierarchical classification of the edges into layers with different reconstruction guarantees, depending on their strength, combined with delicate uses of the subadditivity of the squared Hellinger distance over graphical models to control the error accumulation.
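The structure-learning step the guarantee applies to is short enough to sketch: estimate every pairwise mutual information with the plug-in estimator, then take a maximum-weight spanning tree. A minimal Python version for ±1 samples (Kruskal with union-find):

```python
import numpy as np
from itertools import combinations

def plug_in_mi(a, b):
    """Plug-in estimate of I(A;B) in nats from two discrete sample vectors."""
    mi = 0.0
    for x in np.unique(a):
        for y in np.unique(b):
            pxy = np.mean((a == x) & (b == y))
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(samples):
    """Maximum-weight spanning tree on pairwise plug-in mutual
    information: the classical Chow-Liu structure step."""
    n_vars = samples.shape[1]
    edges = sorted(
        ((plug_in_mi(samples[:, i], samples[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):                      # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:             # Kruskal: greedily add acyclic edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The paper's point is that even when the returned tree differs from the true one, the fitted tree model is still ε-close in total variation from O(n ln n / ε²) samples.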
2105.02282
Matheo Zihao Wang Dr.
Zihao Wang, Herv\'e Delingette
Attention for Image Registration (AiR): an unsupervised Transformer approach
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image registration is a crucial task in signal processing, but it often encounters issues with stability and efficiency. Non-learning registration approaches rely on optimizing similarity metrics between fixed and moving images, which can be expensive in terms of time and space complexity. This problem can be exacerbated when the images are large or there are significant deformations between them. Recently, deep learning, specifically convolutional neural network (CNN)-based methods, have been explored as an effective solution to the weaknesses of non-learning approaches. To further advance learning approaches in image registration, we introduce an attention mechanism in the deformable image registration problem. Our proposed approach is based on a Transformer framework called AiR, which can be efficiently trained on GPGPU devices. We treat the image registration problem as a language translation task and use the Transformer to learn the deformation field. The method learns an unsupervised generated deformation map and is tested on two benchmark datasets. In summary, our approach shows promising effectiveness in addressing stability and efficiency issues in image registration tasks. The source code of AiR is available on Github.
[ { "created": "Wed, 5 May 2021 18:49:32 GMT", "version": "v1" }, { "created": "Fri, 24 Mar 2023 19:39:50 GMT", "version": "v2" } ]
2023-03-28
[ [ "Wang", "Zihao", "" ], [ "Delingette", "Hervé", "" ] ]
Image registration is a crucial task in signal processing, but it often encounters issues with stability and efficiency. Non-learning registration approaches rely on optimizing similarity metrics between fixed and moving images, which can be expensive in terms of time and space complexity. This problem can be exacerbated when the images are large or there are significant deformations between them. Recently, deep learning, specifically convolutional neural network (CNN)-based methods, have been explored as an effective solution to the weaknesses of non-learning approaches. To further advance learning approaches in image registration, we introduce an attention mechanism in the deformable image registration problem. Our proposed approach is based on a Transformer framework called AiR, which can be efficiently trained on GPGPU devices. We treat the image registration problem as a language translation task and use the Transformer to learn the deformation field. The method learns an unsupervised generated deformation map and is tested on two benchmark datasets. In summary, our approach shows promising effectiveness in addressing stability and efficiency issues in image registration tasks. The source code of AiR is available on Github.
1709.04752
Igor Sabo
I.I. Sabo, H.R. Lagoda
The wave method of building color palette and its application in computer graphics
11 pages, 5 figures
null
null
null
cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
This article describes a method for obtaining a harmonious combination of colors, which we developed on the basis of the relationship between color and acoustic waves. It draws a parallel between harmoniously matched colors and the concept of harmony (consonance) in music theory, and describes a physical assumption about the essence of the phenomenon of harmony (consonance). The article also provides an algorithm implementing the wave method for the sRGB color model.
[ { "created": "Sun, 10 Sep 2017 11:24:08 GMT", "version": "v1" } ]
2017-09-15
[ [ "Sabo", "I. I.", "" ], [ "Lagoda", "H. R.", "" ] ]
This article describes a method for obtaining a harmonious combination of colors, which we developed on the basis of the relationship between color and acoustic waves. It draws a parallel between harmoniously matched colors and the concept of harmony (consonance) in music theory, and describes a physical assumption about the essence of the phenomenon of harmony (consonance). The article also provides an algorithm implementing the wave method for the sRGB color model.
2312.09658
Alexander Shukhman
Leonid Legashev, Alexander Shukhman, Vadim Badikov
Algorithms for automatic intents extraction and utterances classification for goal-oriented dialogue systems
in Russian language This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern machine learning techniques in the natural language processing domain can be used to automatically generate scripts for goal-oriented dialogue systems. The current article presents a general framework for studying the automatic generation of scripts for goal-oriented dialogue systems. A method for preprocessing dialog data sets in JSON format is described. A comparison is made of two methods for extracting user intent based on BERTopic and latent Dirichlet allocation. A comparison has been made of two implemented algorithms for classifying statements of users of a goal-oriented dialogue system based on logistic regression and BERT transformer models. The BERT transformer approach using the bert-base-uncased model showed better results for the three metrics Precision (0.80), F1-score (0.78) and Matthews correlation coefficient (0.74) in comparison with other methods.
[ { "created": "Fri, 15 Dec 2023 10:12:43 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2024 15:53:27 GMT", "version": "v2" } ]
2024-04-30
[ [ "Legashev", "Leonid", "" ], [ "Shukhman", "Alexander", "" ], [ "Badikov", "Vadim", "" ] ]
Modern machine learning techniques in the natural language processing domain can be used to automatically generate scripts for goal-oriented dialogue systems. The current article presents a general framework for studying the automatic generation of scripts for goal-oriented dialogue systems. A method for preprocessing dialog data sets in JSON format is described. A comparison is made of two methods for extracting user intent based on BERTopic and latent Dirichlet allocation. A comparison has been made of two implemented algorithms for classifying statements of users of a goal-oriented dialogue system based on logistic regression and BERT transformer models. The BERT transformer approach using the bert-base-uncased model showed better results for the three metrics Precision (0.80), F1-score (0.78) and Matthews correlation coefficient (0.74) in comparison with other methods.
2112.04213
Liran Szlak
Liran Szlak, Ohad Shamir
Convergence Results For Q-Learning With Experience Replay
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
A commonly used heuristic in RL is experience replay (e.g.~\citet{lin1993reinforcement, mnih2015human}), in which a learner stores and re-uses past trajectories as if they were sampled online. In this work, we initiate a rigorous study of this heuristic in the setting of tabular Q-learning. We provide a convergence rate guarantee, and discuss how it compares to the convergence of Q-learning depending on important parameters such as the frequency and number of replay iterations. We also provide theoretical evidence showing when we might expect this heuristic to strictly improve performance, by introducing and analyzing a simple class of MDPs. Finally, we provide some experiments to support our theoretical findings.
[ { "created": "Wed, 8 Dec 2021 10:22:49 GMT", "version": "v1" } ]
2021-12-09
[ [ "Szlak", "Liran", "" ], [ "Shamir", "Ohad", "" ] ]
A commonly used heuristic in RL is experience replay (e.g.~\citet{lin1993reinforcement, mnih2015human}), in which a learner stores and re-uses past trajectories as if they were sampled online. In this work, we initiate a rigorous study of this heuristic in the setting of tabular Q-learning. We provide a convergence rate guarantee, and discuss how it compares to the convergence of Q-learning depending on important parameters such as the frequency and number of replay iterations. We also provide theoretical evidence showing when we might expect this heuristic to strictly improve performance, by introducing and analyzing a simple class of MDPs. Finally, we provide some experiments to support our theoretical findings.
2002.12620
Yiming Cui
Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing
To appear at ACL 2020 Demo Session
null
10.18653/v1/2020.acl-demos.2
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing. It works with different neural network models and supports various kinds of supervised learning tasks, such as text classification, reading comprehension, and sequence labeling. TextBrewer provides a simple and uniform workflow that enables quick setting up of distillation experiments with highly flexible configurations. It offers a set of predefined distillation methods and can be extended with custom code. As a case study, we use TextBrewer to distill BERT on several typical NLP tasks. With simple configurations, we achieve results that are comparable with or even higher than the public distilled BERT models with similar numbers of parameters. Our toolkit is available through: http://textbrewer.hfl-rc.com
[ { "created": "Fri, 28 Feb 2020 09:44:07 GMT", "version": "v1" }, { "created": "Tue, 28 Apr 2020 02:34:38 GMT", "version": "v2" } ]
2020-12-14
[ [ "Yang", "Ziqing", "" ], [ "Cui", "Yiming", "" ], [ "Chen", "Zhipeng", "" ], [ "Che", "Wanxiang", "" ], [ "Liu", "Ting", "" ], [ "Wang", "Shijin", "" ], [ "Hu", "Guoping", "" ] ]
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing. It works with different neural network models and supports various kinds of supervised learning tasks, such as text classification, reading comprehension, and sequence labeling. TextBrewer provides a simple and uniform workflow that enables quick setting up of distillation experiments with highly flexible configurations. It offers a set of predefined distillation methods and can be extended with custom code. As a case study, we use TextBrewer to distill BERT on several typical NLP tasks. With simple configurations, we achieve results that are comparable with or even higher than the public distilled BERT models with similar numbers of parameters. Our toolkit is available through: http://textbrewer.hfl-rc.com
1711.00529
Angus Forbes
Angus G. Forbes, Kristine Lee, Gus Hahn-Powell, Marco A. Valenzuela-Esc\'arcega, Mihai Surdeanu
Text Annotation Graphs: Annotating Complex Natural Language Phenomena
Accepted to LREC'18, http://lrec2018.lrec-conf.org/en/conference-programme/accepted-papers/
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper introduces a new web-based software tool for annotating text, Text Annotation Graphs, or TAG. It provides functionality for representing complex relationships between words and word phrases that are not available in other software tools, including the ability to define and visualize relationships between the relationships themselves (semantic hypergraphs). Additionally, we include an approach to representing text annotations in which annotation subgraphs, or semantic summaries, are used to show relationships outside of the sequential context of the text itself. Users can use these subgraphs to quickly find similar structures within the current document or external annotated documents. Initially, TAG was developed to support information extraction tasks on a large database of biomedical articles. However, our software is flexible enough to support a wide range of annotation tasks for any domain. Examples are provided that showcase TAG's capabilities on morphological parsing and event extraction tasks. The TAG software is available at: https://github.com/CreativeCodingLab/TextAnnotationGraphs.
[ { "created": "Wed, 1 Nov 2017 20:24:39 GMT", "version": "v1" }, { "created": "Thu, 1 Mar 2018 18:33:54 GMT", "version": "v2" } ]
2018-03-02
[ [ "Forbes", "Angus G.", "" ], [ "Lee", "Kristine", "" ], [ "Hahn-Powell", "Gus", "" ], [ "Valenzuela-Escárcega", "Marco A.", "" ], [ "Surdeanu", "Mihai", "" ] ]
This paper introduces a new web-based software tool for annotating text, Text Annotation Graphs, or TAG. It provides functionality for representing complex relationships between words and word phrases that are not available in other software tools, including the ability to define and visualize relationships between the relationships themselves (semantic hypergraphs). Additionally, we include an approach to representing text annotations in which annotation subgraphs, or semantic summaries, are used to show relationships outside of the sequential context of the text itself. Users can use these subgraphs to quickly find similar structures within the current document or external annotated documents. Initially, TAG was developed to support information extraction tasks on a large database of biomedical articles. However, our software is flexible enough to support a wide range of annotation tasks for any domain. Examples are provided that showcase TAG's capabilities on morphological parsing and event extraction tasks. The TAG software is available at: https://github.com/CreativeCodingLab/TextAnnotationGraphs.
2205.06970
Peng Xu
Peng Xu, Hu Cheng, Jiankun Wang, Max Q.-H. Meng
Learning to Reorient Objects with Stable Placements Afforded by Extrinsic Supports
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reorienting objects by using supports is a practical yet challenging manipulation task. Owing to the intricate geometry of objects and the constrained feasible motions of the robot, multiple manipulation steps are required for object reorientation. In this work, we propose a pipeline for predicting various object placements from point clouds. This pipeline comprises three stages: a pose generation stage, followed by a pose refinement stage, and culminating in a placement classification stage. We also propose an algorithm to construct manipulation graphs based on point clouds. Feasible manipulation sequences are determined for the robot to transfer object placements. Both simulated and real-world experiments demonstrate that our approach is effective. The simulation results underscore our pipeline's capacity to generalize to novel objects in random start poses. Our predicted placements exhibit a 20% enhancement in accuracy compared to the state-of-the-art baseline. Furthermore, the robot finds feasible sequential steps in the manipulation graphs constructed by our algorithm to accomplish object reorientation manipulation.
[ { "created": "Sat, 14 May 2022 05:08:23 GMT", "version": "v1" }, { "created": "Mon, 19 Dec 2022 08:23:12 GMT", "version": "v2" }, { "created": "Mon, 22 May 2023 06:03:11 GMT", "version": "v3" }, { "created": "Tue, 29 Aug 2023 06:16:08 GMT", "version": "v4" } ]
2023-08-30
[ [ "Xu", "Peng", "" ], [ "Cheng", "Hu", "" ], [ "Wang", "Jiankun", "" ], [ "Meng", "Max Q. -H.", "" ] ]
Reorienting objects by using supports is a practical yet challenging manipulation task. Owing to the intricate geometry of objects and the constrained feasible motions of the robot, multiple manipulation steps are required for object reorientation. In this work, we propose a pipeline for predicting various object placements from point clouds. This pipeline comprises three stages: a pose generation stage, followed by a pose refinement stage, and culminating in a placement classification stage. We also propose an algorithm to construct manipulation graphs based on point clouds. Feasible manipulation sequences are determined for the robot to transfer object placements. Both simulated and real-world experiments demonstrate that our approach is effective. The simulation results underscore our pipeline's capacity to generalize to novel objects in random start poses. Our predicted placements exhibit a 20% enhancement in accuracy compared to the state-of-the-art baseline. Furthermore, the robot finds feasible sequential steps in the manipulation graphs constructed by our algorithm to accomplish object reorientation manipulation.
2210.06138
Hongxiao Zhang
Hongxiao Zhang, Siyu Lai, Songming Zhang, Hui Huang, Yufeng Chen, Jinan Xu, Jian Liu
Improved Data Augmentation for Translation Suggestion
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translation suggestion (TS) models are used to automatically provide alternative suggestions for incorrect spans in sentences generated by machine translation. This paper introduces the system used in our submission to the WMT'22 Translation Suggestion shared task. Our system is based on the ensemble of different translation architectures, including Transformer, SA-Transformer, and DynamicConv. We use three strategies to construct synthetic data from parallel corpora to compensate for the lack of supervised data. In addition, we introduce a multi-phase pre-training strategy, adding an additional pre-training phase with in-domain data. We rank second and third on the English-German and English-Chinese bidirectional tasks, respectively.
[ { "created": "Wed, 12 Oct 2022 12:46:43 GMT", "version": "v1" } ]
2022-10-13
[ [ "Zhang", "Hongxiao", "" ], [ "Lai", "Siyu", "" ], [ "Zhang", "Songming", "" ], [ "Huang", "Hui", "" ], [ "Chen", "Yufeng", "" ], [ "Xu", "Jinan", "" ], [ "Liu", "Jian", "" ] ]
Translation suggestion (TS) models are used to automatically provide alternative suggestions for incorrect spans in sentences generated by machine translation. This paper introduces the system used in our submission to the WMT'22 Translation Suggestion shared task. Our system is based on the ensemble of different translation architectures, including Transformer, SA-Transformer, and DynamicConv. We use three strategies to construct synthetic data from parallel corpora to compensate for the lack of supervised data. In addition, we introduce a multi-phase pre-training strategy, adding an additional pre-training phase with in-domain data. We rank second and third on the English-German and English-Chinese bidirectional tasks, respectively.
1808.09267
Kristopher Fair Mr
Kristopher M. Fair, Cameron Zachreson and Mikhail Prokopenko
Creating a surrogate commuter network from Australian Bureau of Statistics census data
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Between the 2011 and 2016 national censuses, the Australian Bureau of Statistics changed its anonymity policy compliance system for the distribution of census data. The new method has resulted in dramatic inconsistencies when comparing low-resolution data to aggregated high-resolution data. Hence, aggregated totals do not match true totals, and the mismatch gets worse as the data resolution gets finer. Here, we address several aspects of this inconsistency with respect to the 2016 usual-residence to place-of-work travel data. We introduce a re-sampling system that rectifies many of the artifacts introduced by the new ABS protocol, ensuring a higher level of consistency across partition sizes. We offer a surrogate high-resolution 2016 commuter dataset that reduces the difference between aggregated and true commuter totals from ~34% to only ~7%, which is on the order of the discrepancy across partition resolutions in data from earlier years.
[ { "created": "Mon, 27 Aug 2018 06:02:49 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2019 04:09:44 GMT", "version": "v2" } ]
2019-03-21
[ [ "Fair", "Kristopher M.", "" ], [ "Zachreson", "Cameron", "" ], [ "Prokopenko", "Mikhail", "" ] ]
Between the 2011 and 2016 national censuses, the Australian Bureau of Statistics changed its anonymity policy compliance system for the distribution of census data. The new method has resulted in dramatic inconsistencies when comparing low-resolution data to aggregated high-resolution data. Hence, aggregated totals do not match true totals, and the mismatch gets worse as the data resolution gets finer. Here, we address several aspects of this inconsistency with respect to the 2016 usual-residence to place-of-work travel data. We introduce a re-sampling system that rectifies many of the artifacts introduced by the new ABS protocol, ensuring a higher level of consistency across partition sizes. We offer a surrogate high-resolution 2016 commuter dataset that reduces the difference between aggregated and true commuter totals from ~34% to only ~7%, which is on the order of the discrepancy across partition resolutions in data from earlier years.
2308.08640
Andrea Mazzullo
Alessandro Artale, Andrea Mazzullo
Non-Rigid Designators in Epistemic and Temporal Free Description Logics (Extended Version)
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Definite descriptions, such as 'the smallest planet in the Solar System', have been recently recognised as semantically transparent devices for object identification in knowledge representation formalisms. Along with individual names, they have been introduced also in the context of description logic languages, enriching the expressivity of standard nominal constructors. Moreover, in the first-order modal logic literature, definite descriptions have been widely investigated for their non-rigid behaviour, which allows them to denote different objects at different states. In this direction, we introduce epistemic and temporal extensions of standard description logics, with nominals and the universal role, additionally equipped with definite descriptions constructors. Regarding names and descriptions, in these languages we allow for: possible lack of denotation, ensured by partial models, coming from free logic semantics as a generalisation of the classical ones; and non-rigid designation features, obtained by assigning to terms distinct values across states, as opposed to the standard rigidity condition on individual expressions. In the absence of the rigid designator assumption, we show that the satisfiability problem for epistemic free description logics is NExpTime-complete, while satisfiability for temporal free description logics over linear time structures is undecidable.
[ { "created": "Wed, 16 Aug 2023 19:27:47 GMT", "version": "v1" } ]
2023-08-21
[ [ "Artale", "Alessandro", "" ], [ "Mazzullo", "Andrea", "" ] ]
Definite descriptions, such as 'the smallest planet in the Solar System', have been recently recognised as semantically transparent devices for object identification in knowledge representation formalisms. Along with individual names, they have been introduced also in the context of description logic languages, enriching the expressivity of standard nominal constructors. Moreover, in the first-order modal logic literature, definite descriptions have been widely investigated for their non-rigid behaviour, which allows them to denote different objects at different states. In this direction, we introduce epistemic and temporal extensions of standard description logics, with nominals and the universal role, additionally equipped with definite descriptions constructors. Regarding names and descriptions, in these languages we allow for: possible lack of denotation, ensured by partial models, coming from free logic semantics as a generalisation of the classical ones; and non-rigid designation features, obtained by assigning to terms distinct values across states, as opposed to the standard rigidity condition on individual expressions. In the absence of the rigid designator assumption, we show that the satisfiability problem for epistemic free description logics is NExpTime-complete, while satisfiability for temporal free description logics over linear time structures is undecidable.
2111.00670
Aobo Yang
Aobo Yang, Nan Wang, Renqin Cai, Hongbo Deng, Hongning Wang
Comparative Explanations of Recommendations
11 pages, 4 figures
null
10.1145/3485447.3512031
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
As recommendation is essentially a comparative (or ranking) process, a good explanation should illustrate to users why an item is believed to be better than another, i.e., comparative explanations about the recommended items. Ideally, after reading the explanations, a user should reach the same ranking of items as the system's. Unfortunately, little research attention has yet been paid on such comparative explanations. In this work, we develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system. For each recommended item, we first extract one sentence from its associated reviews that best suits the desired comparison against a set of reference items. Then this extracted sentence is further articulated with respect to the target user through a generative model to better explain why the item is recommended. We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components, which avoids generation of generic content. Extensive offline evaluations on two large recommendation benchmark datasets and serious user studies against an array of state-of-the-art explainable recommendation algorithms demonstrate the necessity of comparative explanations and the effectiveness of our solution.
[ { "created": "Mon, 1 Nov 2021 02:55:56 GMT", "version": "v1" }, { "created": "Mon, 14 Feb 2022 04:19:06 GMT", "version": "v2" }, { "created": "Mon, 25 Apr 2022 15:42:58 GMT", "version": "v3" } ]
2022-04-26
[ [ "Yang", "Aobo", "" ], [ "Wang", "Nan", "" ], [ "Cai", "Renqin", "" ], [ "Deng", "Hongbo", "" ], [ "Wang", "Hongning", "" ] ]
As recommendation is essentially a comparative (or ranking) process, a good explanation should illustrate to users why an item is believed to be better than another, i.e., comparative explanations about the recommended items. Ideally, after reading the explanations, a user should reach the same ranking of items as the system's. Unfortunately, little research attention has yet been paid on such comparative explanations. In this work, we develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system. For each recommended item, we first extract one sentence from its associated reviews that best suits the desired comparison against a set of reference items. Then this extracted sentence is further articulated with respect to the target user through a generative model to better explain why the item is recommended. We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components, which avoids generation of generic content. Extensive offline evaluations on two large recommendation benchmark datasets and serious user studies against an array of state-of-the-art explainable recommendation algorithms demonstrate the necessity of comparative explanations and the effectiveness of our solution.
2305.09056
Jungang Chen
Jungang Chen, Eduardo Gildin and John E. Killough (Texas A&M University)
Physics-informed Convolutional Recurrent Surrogate Model for Reservoir Simulation with Well Controls
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel surrogate model for modeling subsurface fluid flow with well controls using a physics-informed convolutional recurrent neural network (PICRNN). The model uses a convolutional long-short term memory (ConvLSTM) to capture the spatiotemporal dependencies of the state evolution dynamics in the porous flow. The ConvLSTM is linked to the state space equations, enabling the incorporation of a discrete-time sequence of well control. The model requires initial state condition and a sequence of well controls as inputs, and predicts the state variables of the system, such as pressure, as output. By minimizing the residuals of reservoir flow state-space equations, the network is trained without the need for labeled data. The model is designed to serve as a surrogate model for predicting future reservoir states based on the initial reservoir state and input engineering controls. Boundary conditions are enforced into the state-space equations so no additional loss term is needed. Three numerical cases are studied, demonstrating the model's effectiveness in predicting reservoir dynamics based on future well/system controls. The proposed model provides a new approach for efficient and accurate prediction of subsurface fluid flow, with potential applications in optimal control design for reservoir engineering.
[ { "created": "Mon, 15 May 2023 22:43:18 GMT", "version": "v1" } ]
2023-05-17
[ [ "Chen", "Jungang", "", "Texas A&M University" ], [ "Gildin", "Eduardo", "", "Texas A&M University" ], [ "Killough", "John E.", "", "Texas A&M University" ] ]
This paper presents a novel surrogate model for modeling subsurface fluid flow with well controls using a physics-informed convolutional recurrent neural network (PICRNN). The model uses a convolutional long-short term memory (ConvLSTM) to capture the spatiotemporal dependencies of the state evolution dynamics in the porous flow. The ConvLSTM is linked to the state space equations, enabling the incorporation of a discrete-time sequence of well control. The model requires initial state condition and a sequence of well controls as inputs, and predicts the state variables of the system, such as pressure, as output. By minimizing the residuals of reservoir flow state-space equations, the network is trained without the need for labeled data. The model is designed to serve as a surrogate model for predicting future reservoir states based on the initial reservoir state and input engineering controls. Boundary conditions are enforced into the state-space equations so no additional loss term is needed. Three numerical cases are studied, demonstrating the model's effectiveness in predicting reservoir dynamics based on future well/system controls. The proposed model provides a new approach for efficient and accurate prediction of subsurface fluid flow, with potential applications in optimal control design for reservoir engineering.
2402.04599
Lei Wang
Lei Wang and Jun Liu and Liang Zheng and Tom Gedeon and Piotr Koniusz
Meet JEANIE: a Similarity Measure for 3D Skeleton Sequences via Temporal-Viewpoint Alignment
Accepted by the International Journal of Computer Vision (IJCV). An extension of our ACCV'22 paper [arXiv:2210.16820] which was distinguished by the Sang Uk Lee Best Student Paper Award
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video sequences exhibit significant nuisance variations (undesired effects) of speed of actions, temporal locations, and subjects' poses, leading to temporal-viewpoint misalignment when comparing two sets of frames or evaluating the similarity of two sequences. Thus, we propose Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE) for sequence pairs. In particular, we focus on 3D skeleton sequences whose camera and subjects' poses can be easily manipulated in 3D. We evaluate JEANIE on skeletal Few-shot Action Recognition (FSAR), where matching well temporal blocks (temporal chunks that make up a sequence) of support-query sequence pairs (by factoring out nuisance variations) is essential due to limited samples of novel classes. Given a query sequence, we create its several views by simulating several camera locations. For a support sequence, we match it with view-simulated query sequences, as in the popular Dynamic Time Warping (DTW). Specifically, each support temporal block can be matched to the query temporal block with the same or adjacent (next) temporal index, and adjacent camera views to achieve joint local temporal-viewpoint warping. JEANIE selects the smallest distance among matching paths with different temporal-viewpoint warping patterns, an advantage over DTW which only performs temporal alignment. We also propose an unsupervised FSAR akin to clustering of sequences with JEANIE as a distance measure. JEANIE achieves state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II on supervised and unsupervised FSAR, and their meta-learning inspired fusion.
[ { "created": "Wed, 7 Feb 2024 05:47:31 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2024 13:30:37 GMT", "version": "v2" } ]
2024-03-26
[ [ "Wang", "Lei", "" ], [ "Liu", "Jun", "" ], [ "Zheng", "Liang", "" ], [ "Gedeon", "Tom", "" ], [ "Koniusz", "Piotr", "" ] ]
Video sequences exhibit significant nuisance variations (undesired effects) of speed of actions, temporal locations, and subjects' poses, leading to temporal-viewpoint misalignment when comparing two sets of frames or evaluating the similarity of two sequences. Thus, we propose Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE) for sequence pairs. In particular, we focus on 3D skeleton sequences whose camera and subjects' poses can be easily manipulated in 3D. We evaluate JEANIE on skeletal Few-shot Action Recognition (FSAR), where matching well temporal blocks (temporal chunks that make up a sequence) of support-query sequence pairs (by factoring out nuisance variations) is essential due to limited samples of novel classes. Given a query sequence, we create its several views by simulating several camera locations. For a support sequence, we match it with view-simulated query sequences, as in the popular Dynamic Time Warping (DTW). Specifically, each support temporal block can be matched to the query temporal block with the same or adjacent (next) temporal index, and adjacent camera views to achieve joint local temporal-viewpoint warping. JEANIE selects the smallest distance among matching paths with different temporal-viewpoint warping patterns, an advantage over DTW which only performs temporal alignment. We also propose an unsupervised FSAR akin to clustering of sequences with JEANIE as a distance measure. JEANIE achieves state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II on supervised and unsupervised FSAR, and their meta-learning inspired fusion.
1210.2282
Miguel Areias
Miguel Areias and Ricardo Rocha
Towards Multi-Threaded Local Tabling Using a Common Table Space
To appear in Theory and Practice of Logic Programming
Theory and Practice of Logic Programming, Volume 12, Special Issue 4-5, 2012, pp 427-443
10.1017/S1471068412000117
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-threading is currently supported by several well-known Prolog systems providing a highly portable solution for applications that can benefit from concurrency. When multi-threading is combined with tabling, we can exploit the power of higher procedural control and declarative semantics. However, despite the availability of both threads and tabling in some Prolog systems, the implementation of these two features implies complex ties to each other and to the underlying engine. Until now, XSB was the only Prolog system combining multi-threading with tabling. In XSB, tables may be either private or shared between threads. While thread-private tables are easier to implement, shared tables have all the associated issues of locking, synchronization and potential deadlocks. In this paper, we propose an alternative view to XSB's approach. In our proposal, each thread views its tables as private but, at the engine level, we use a common table space where tables are shared among all threads. We present three designs for our common table space approach: No-Sharing (NS) (similar to XSB's private tables), Subgoal-Sharing (SS) and Full-Sharing (FS). The primary goal of this work was to reduce the memory usage for the table space, but our experimental results, using the YapTab tabling system with a local evaluation strategy, show that we can also achieve significant reductions on running time.
[ { "created": "Mon, 8 Oct 2012 14:00:07 GMT", "version": "v1" }, { "created": "Tue, 9 Oct 2012 22:12:00 GMT", "version": "v2" } ]
2012-10-11
[ [ "Areias", "Miguel", "" ], [ "Rocha", "Ricardo", "" ] ]
Multi-threading is currently supported by several well-known Prolog systems, providing a highly portable solution for applications that can benefit from concurrency. When multi-threading is combined with tabling, we can exploit the power of higher procedural control and declarative semantics. However, despite the availability of both threads and tabling in some Prolog systems, the implementation of these two features implies complex ties to each other and to the underlying engine. Until now, XSB was the only Prolog system combining multi-threading with tabling. In XSB, tables may be either private or shared between threads. While thread-private tables are easier to implement, shared tables have all the associated issues of locking, synchronization and potential deadlocks. In this paper, we propose an alternative view to XSB's approach. In our proposal, each thread views its tables as private but, at the engine level, we use a common table space where tables are shared among all threads. We present three designs for our common table space approach: No-Sharing (NS) (similar to XSB's private tables), Subgoal-Sharing (SS) and Full-Sharing (FS). The primary goal of this work was to reduce the memory usage of the table space, but our experimental results, using the YapTab tabling system with a local evaluation strategy, show that we can also achieve significant reductions in running time.
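The common-table-space idea can be illustrated with a minimal Python analogue: a global answer table guarded by a lock, with a fixpoint loop so that a cyclic reachability query terminates, which is what tabling buys over naive resolution. This is a sketch of the concept only, not of YapTab's engine-level designs; the graph and names are made up for illustration:

```python
import threading

# A common table space shared by all threads: subgoal -> answer set.
# Loosely mirrors the Subgoal-Sharing idea: the table itself is global,
# guarded by a lock, and answers are reused across callers.
table = {}
table_lock = threading.Lock()

edges = {1: [2], 2: [3], 3: [1]}  # a cyclic graph: naive recursion would loop forever

def reachable(node):
    with table_lock:
        if node in table:            # answer reuse: the whole point of tabling
            return table[node]
        table[node] = set()          # seed the subgoal's answer set for the fixpoint
    changed = True
    while changed:                   # iterate to a fixpoint, so cycles terminate
        changed = False
        frontier = {node} | table[node]
        for n in frontier:
            for m in edges.get(n, []):
                if m not in table[node]:
                    table[node].add(m)
                    changed = True
    return table[node]
```

A second call to `reachable(1)`, from any thread, is answered directly from the shared table.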
1602.01103
Chenhao Tan
Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, Lillian Lee
Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions
12 pages, 10 figures, to appear in Proceedings of WWW 2016, data and more at https://chenhaot.com/pages/changemyview.html (v2 made a minor correction on submission rules in ChangeMyView.)
null
10.1145/2872427.2883081
null
cs.SI cs.CL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Changing someone's opinion is arguably one of the most important challenges of social interaction. The underlying process proves difficult to study: it is hard to know how someone's opinions are formed and whether and how someone's views shift. Fortunately, ChangeMyView, an active community on Reddit, provides a platform where users present their own opinions and reasoning, invite others to contest them, and acknowledge when the ensuing discussions change their original views. In this work, we study these interactions to understand the mechanisms behind persuasion. We find that persuasive arguments are characterized by interesting patterns of interaction dynamics, such as participant entry-order and degree of back-and-forth exchange. Furthermore, by comparing similar counterarguments to the same opinion, we show that language factors play an essential role. In particular, the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. Finally, since even in this favorable setting people may not be persuaded, we investigate the problem of determining whether someone's opinion is susceptible to being changed at all. For this more difficult task, we show that stylistic choices in how the opinion is expressed carry predictive power.
[ { "created": "Tue, 2 Feb 2016 21:00:11 GMT", "version": "v1" }, { "created": "Sat, 6 Feb 2016 20:13:55 GMT", "version": "v2" } ]
2016-02-09
[ [ "Tan", "Chenhao", "" ], [ "Niculae", "Vlad", "" ], [ "Danescu-Niculescu-Mizil", "Cristian", "" ], [ "Lee", "Lillian", "" ] ]
Changing someone's opinion is arguably one of the most important challenges of social interaction. The underlying process proves difficult to study: it is hard to know how someone's opinions are formed and whether and how someone's views shift. Fortunately, ChangeMyView, an active community on Reddit, provides a platform where users present their own opinions and reasoning, invite others to contest them, and acknowledge when the ensuing discussions change their original views. In this work, we study these interactions to understand the mechanisms behind persuasion. We find that persuasive arguments are characterized by interesting patterns of interaction dynamics, such as participant entry-order and degree of back-and-forth exchange. Furthermore, by comparing similar counterarguments to the same opinion, we show that language factors play an essential role. In particular, the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. Finally, since even in this favorable setting people may not be persuaded, we investigate the problem of determining whether someone's opinion is susceptible to being changed at all. For this more difficult task, we show that stylistic choices in how the opinion is expressed carry predictive power.
2210.14226
Jaehee Jang
Jaehee Jang, Heonseok Ha, Dahuin Jung, Sungroh Yoon
FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks
Accepted to ICPP 2022. Code: https://github.com/hukla/fedclassavg
null
null
null
cs.LG cs.AI cs.DC
http://creativecommons.org/licenses/by/4.0/
Personalized federated learning aims to allow numerous clients to train personalized models while participating in collaborative training in a communication-efficient manner, without exchanging private data. However, many personalized federated learning algorithms assume that clients have the same neural network architecture, and those for heterogeneous models remain understudied. In this study, we propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg). Deep neural networks for supervised learning tasks consist of feature extractor and classifier layers. FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature space, so that clients whose data are not independently and identically distributed (non-iid) can learn about scarce labels. In addition, local feature representation learning is applied to stabilize the decision boundaries and improve the local feature extraction capabilities of clients. While existing methods require the collection of auxiliary data or model weights to generate a counterpart, FedClassAvg only requires clients to communicate a couple of fully connected layers, which is highly communication-efficient. Moreover, FedClassAvg does not require solving extra optimization problems such as knowledge transfer, which incurs intensive computation overhead. We evaluated FedClassAvg through extensive experiments and demonstrated that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
[ { "created": "Tue, 25 Oct 2022 08:32:08 GMT", "version": "v1" }, { "created": "Thu, 27 Oct 2022 03:19:30 GMT", "version": "v2" } ]
2022-10-28
[ [ "Jang", "Jaehee", "" ], [ "Ha", "Heonseok", "" ], [ "Jung", "Dahuin", "" ], [ "Yoon", "Sungroh", "" ] ]
Personalized federated learning aims to allow numerous clients to train personalized models while participating in collaborative training in a communication-efficient manner, without exchanging private data. However, many personalized federated learning algorithms assume that clients have the same neural network architecture, and those for heterogeneous models remain understudied. In this study, we propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg). Deep neural networks for supervised learning tasks consist of feature extractor and classifier layers. FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature space, so that clients whose data are not independently and identically distributed (non-iid) can learn about scarce labels. In addition, local feature representation learning is applied to stabilize the decision boundaries and improve the local feature extraction capabilities of clients. While existing methods require the collection of auxiliary data or model weights to generate a counterpart, FedClassAvg only requires clients to communicate a couple of fully connected layers, which is highly communication-efficient. Moreover, FedClassAvg does not require solving extra optimization problems such as knowledge transfer, which incurs intensive computation overhead. We evaluated FedClassAvg through extensive experiments and demonstrated that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
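The aggregation step, averaging only the classifier layer across clients while each keeps its own heterogeneous feature extractor, can be sketched as follows. This is a minimal illustration under the assumption of a single fully connected classifier (W, b) per client, not the paper's full training loop:

```python
import numpy as np

def federated_classifier_average(client_classifiers):
    """One FedClassAvg-style aggregation round (simplified sketch).

    client_classifiers: list of (W, b) pairs, one per client; all classifiers
    share the same shape even though the feature extractors behind them differ.
    Returns the element-wise average, to be broadcast back to every client.
    """
    Ws, bs = zip(*client_classifiers)
    W_avg = np.mean(Ws, axis=0)   # agreement on decision boundaries
    b_avg = np.mean(bs, axis=0)
    return W_avg, b_avg

# toy round with three clients whose classifier weights are 0, 1 and 2
clients = [(np.full((4, 2), i, dtype=float), np.full(2, i, dtype=float))
           for i in range(3)]
W, b = federated_classifier_average(clients)
```

Only these two small arrays cross the network, which is where the communication savings over full-model aggregation come from.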
2310.04641
Ali Nikkhah
Ali Nikkhah and Scott Jordan
Towards Equitable Peering: A Proposal for a Fair Peering Fee Between ISPs and Content Providers
Accepted for Publication in IEEE Transactions on Network and Service Management
null
10.1109/TNSM.2023.3338974
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Disagreements over peering fees have risen to the level of potential government regulation. ISPs assert that content providers should pay them based on the volume of downstream traffic. Transit providers and content providers assert that consumers have already paid ISPs to transmit the content they request and that peering agreements should be settlement-free. Our goal is to determine the fair payment between an ISP and an interconnecting network. We consider fair cost sharing between two Tier-1 ISPs, and derive the peering fee that equalizes their net backbone transportation costs. We then consider fair cost sharing between an ISP and a transit provider. We derive the peering fee that equalizes their net backbone transportation costs, and illustrate how it depends on the traffic ratio and the amount of localization of that content. Finally, we consider the fair peering fee between an ISP and a content provider. We derive the peering fee that results in the same net cost to the ISP, and illustrate how the peering fee depends on the number of interconnection points and the amount of localization of that content. We dispense with the ISP argument that it should be paid regardless of the amount of localization of content.
[ { "created": "Sat, 7 Oct 2023 01:25:46 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2023 22:33:31 GMT", "version": "v2" } ]
2023-12-13
[ [ "Nikkhah", "Ali", "" ], [ "Jordan", "Scott", "" ] ]
Disagreements over peering fees have risen to the level of potential government regulation. ISPs assert that content providers should pay them based on the volume of downstream traffic. Transit providers and content providers assert that consumers have already paid ISPs to transmit the content they request and that peering agreements should be settlement-free. Our goal is to determine the fair payment between an ISP and an interconnecting network. We consider fair cost sharing between two Tier-1 ISPs, and derive the peering fee that equalizes their net backbone transportation costs. We then consider fair cost sharing between an ISP and a transit provider. We derive the peering fee that equalizes their net backbone transportation costs, and illustrate how it depends on the traffic ratio and the amount of localization of that content. Finally, we consider the fair peering fee between an ISP and a content provider. We derive the peering fee that results in the same net cost to the ISP, and illustrate how the peering fee depends on the number of interconnection points and the amount of localization of that content. We dispense with the ISP argument that it should be paid regardless of the amount of localization of content.
2305.10987
Henrique Branquinho
Henrique Branquinho, Nuno Louren\c{c}o, Ernesto Costa
SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks
null
null
10.1145/3583133.3596399
null
cs.NE cs.LG
http://creativecommons.org/licenses/by/4.0/
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind that of traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. The best-performing SNNs are based on ANN-to-SNN conversion or on learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE) has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is an NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, an NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates networks with competitive performance, reaching test accuracies of 99.42% and 91.65%, respectively.
[ { "created": "Thu, 18 May 2023 14:06:37 GMT", "version": "v1" } ]
2023-05-19
[ [ "Branquinho", "Henrique", "" ], [ "Lourenço", "Nuno", "" ], [ "Costa", "Ernesto", "" ] ]
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind that of traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. The best-performing SNNs are based on ANN-to-SNN conversion or on learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE) has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is an NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, an NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates networks with competitive performance, reaching test accuracies of 99.42% and 91.65%, respectively.
2111.00262
Phil\'emon Brakel
Philemon Brakel, Steven Bohez, Leonard Hasenclever, Nicolas Heess, Konstantinos Bousmalis
Learning Coordinated Terrain-Adaptive Locomotion by Imitating a Centroidal Dynamics Planner
A shorter version without appendix was submitted to ICRA 2022
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
Dynamic quadruped locomotion over challenging terrains with precise foot placements is a hard problem for both optimal control methods and Reinforcement Learning (RL). Non-linear solvers can produce coordinated constraint satisfying motions, but often take too long to converge for online application. RL methods can learn dynamic reactive controllers but require carefully tuned shaping rewards to produce good gaits and can have trouble discovering precise coordinated movements. Imitation learning circumvents this problem and has been used with motion capture data to extract quadruped gaits for flat terrains. However, it would be costly to acquire motion capture data for a very large variety of terrains with height differences. In this work, we combine the advantages of trajectory optimization and learning methods and show that terrain adaptive controllers can be obtained by training policies to imitate trajectories that have been planned over procedural terrains by a non-linear solver. We show that the learned policies transfer to unseen terrains and can be fine-tuned to dynamically traverse challenging terrains that require precise foot placements and are very hard to solve with standard RL.
[ { "created": "Sat, 30 Oct 2021 14:24:39 GMT", "version": "v1" } ]
2021-11-02
[ [ "Brakel", "Philemon", "" ], [ "Bohez", "Steven", "" ], [ "Hasenclever", "Leonard", "" ], [ "Heess", "Nicolas", "" ], [ "Bousmalis", "Konstantinos", "" ] ]
Dynamic quadruped locomotion over challenging terrains with precise foot placements is a hard problem for both optimal control methods and Reinforcement Learning (RL). Non-linear solvers can produce coordinated constraint satisfying motions, but often take too long to converge for online application. RL methods can learn dynamic reactive controllers but require carefully tuned shaping rewards to produce good gaits and can have trouble discovering precise coordinated movements. Imitation learning circumvents this problem and has been used with motion capture data to extract quadruped gaits for flat terrains. However, it would be costly to acquire motion capture data for a very large variety of terrains with height differences. In this work, we combine the advantages of trajectory optimization and learning methods and show that terrain adaptive controllers can be obtained by training policies to imitate trajectories that have been planned over procedural terrains by a non-linear solver. We show that the learned policies transfer to unseen terrains and can be fine-tuned to dynamically traverse challenging terrains that require precise foot placements and are very hard to solve with standard RL.
2303.15407
Helmuth Naumer
Helmuth Naumer and Farzad Kamalabadi
Dimensionality Collapse: Optimal Measurement Selection for Low-Error Infinite-Horizon Forecasting
33 Pages, 9 Figures, To appear in Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6166-6198, 2023
null
null
cs.LG cs.SY eess.SP eess.SY math.ST stat.TH
http://creativecommons.org/licenses/by/4.0/
This work introduces a method to select linear functional measurements of a vector-valued time series optimized for forecasting distant time-horizons. By formulating and solving the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cram\'{e}r-Rao lower bound (CRLB) for forecasting as the cost, the most informative data can be collected irrespective of the eventual forecasting algorithm. By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived. This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting. Implementations of both an approximate dynamic programming formulation and the proposed alternative are illustrated using an extended Kalman filter for state estimation, with results on simulated systems with limit cycles and chaotic behavior demonstrating a linear improvement in the CRLB as a function of the number of collapsing dimensions of the system.
[ { "created": "Mon, 27 Mar 2023 17:25:04 GMT", "version": "v1" } ]
2023-04-18
[ [ "Naumer", "Helmuth", "" ], [ "Kamalabadi", "Farzad", "" ] ]
This work introduces a method to select linear functional measurements of a vector-valued time series optimized for forecasting distant time-horizons. By formulating and solving the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cram\'{e}r-Rao lower bound (CRLB) for forecasting as the cost, the most informative data can be collected irrespective of the eventual forecasting algorithm. By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived. This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting. Implementations of both an approximate dynamic programming formulation and the proposed alternative are illustrated using an extended Kalman filter for state estimation, with results on simulated systems with limit cycles and chaotic behavior demonstrating a linear improvement in the CRLB as a function of the number of collapsing dimensions of the system.
1503.06465
Joao Carreira
Joao Carreira, Sara Vicente, Lourdes Agapito and Jorge Batista
Lifting Object Detection Datasets into 3D
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground-truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground-truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image.
[ { "created": "Sun, 22 Mar 2015 19:26:57 GMT", "version": "v1" }, { "created": "Sun, 31 Jul 2016 09:49:19 GMT", "version": "v2" } ]
2016-08-02
[ [ "Carreira", "Joao", "" ], [ "Vicente", "Sara", "" ], [ "Agapito", "Lourdes", "" ], [ "Batista", "Jorge", "" ] ]
While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground-truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground-truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image.
1801.03065
Mehmet Deveci
Mehmet Deveci, Christian Trott, Sivasankaran Rajamanickam
Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures
null
null
null
SAND2018-0186 R
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
[ { "created": "Tue, 9 Jan 2018 18:07:40 GMT", "version": "v1" } ]
2018-01-10
[ [ "Deveci", "Mehmet", "" ], [ "Trott", "Christian", "" ], [ "Rajamanickam", "Sivasankaran", "" ] ]
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
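A row-wise (Gustavson-style) SpGEMM with a hash-map accumulator, one of the accumulator types such comparisons cover, can be sketched in a few lines. The dict-of-rows layout below is an illustrative simplification, not the paper's data structure:

```python
def spgemm(A, B):
    """Row-wise sparse matrix-matrix product using a hash-map accumulator.

    A and B are given as lists of sparse rows, each row a dict {column: value}.
    For every row of A, partial products are accumulated into a per-row hash
    map keyed by output column, which is the accumulator choice being sketched.
    """
    C = []
    for a_row in A:
        acc = {}                                  # hash-map accumulator for one output row
        for k, a_val in a_row.items():            # nonzeros of the A row
            for j, b_val in B[k].items():         # matching row of B
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C.append(acc)
    return C
```

Dense-array and sorted-list accumulators slot into the same loop structure; only the `acc` operations change, which is why accumulator choice dominates performance.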
2003.00851
Seungjun Lee
Seungjun Lee
Deep Learning on Radar Centric 3D Object Detection
4 pages
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Even though many existing 3D object detection algorithms rely mostly on camera and LiDAR, camera and LiDAR are prone to be affected by harsh weather and lighting conditions. On the other hand, radar is resistant to such conditions. However, researchers have only recently begun to apply deep neural networks to radar data. In this paper, we introduce a deep learning approach to 3D object detection with radar only. To the best of our knowledge, we are the first to demonstrate a deep learning-based 3D object detection model with radar only that was trained on a public radar dataset. To overcome the lack of labeled radar data, we propose a novel way of making use of abundant LiDAR data by transforming it into radar-like point cloud data, together with aggressive radar augmentation techniques.
[ { "created": "Thu, 27 Feb 2020 10:16:46 GMT", "version": "v1" } ]
2020-03-03
[ [ "Lee", "Seungjun", "" ] ]
Even though many existing 3D object detection algorithms rely mostly on camera and LiDAR, camera and LiDAR are prone to be affected by harsh weather and lighting conditions. On the other hand, radar is resistant to such conditions. However, researchers have only recently begun to apply deep neural networks to radar data. In this paper, we introduce a deep learning approach to 3D object detection with radar only. To the best of our knowledge, we are the first to demonstrate a deep learning-based 3D object detection model with radar only that was trained on a public radar dataset. To overcome the lack of labeled radar data, we propose a novel way of making use of abundant LiDAR data by transforming it into radar-like point cloud data, together with aggressive radar augmentation techniques.
2204.12785
Kyungjae Lee
Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
Plug-and-Play Adaptation for Continuously-updated QA
Accepted at Findings of ACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks to assess LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task--Continuously-updated QA (CuQA)--in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective in terms of the updates/forgets ratio than a fine-tuning baseline.
[ { "created": "Wed, 27 Apr 2022 09:11:16 GMT", "version": "v1" } ]
2022-04-28
[ [ "Lee", "Kyungjae", "" ], [ "Han", "Wookje", "" ], [ "Hwang", "Seung-won", "" ], [ "Lee", "Hwaran", "" ], [ "Park", "Joonsuk", "" ], [ "Lee", "Sang-Woo", "" ] ]
Language models (LMs) have shown great potential as implicit knowledge bases (KBs). For their practical use, the knowledge in LMs needs to be updated periodically. However, existing tasks to assess LMs' efficacy as KBs do not adequately consider multiple large-scale updates. To this end, we first propose a novel task--Continuously-updated QA (CuQA)--in which multiple large-scale updates are made to LMs, and performance is measured with respect to success in adding and updating knowledge while retaining existing knowledge. We then present LMs with plug-in modules that effectively handle the updates. Experiments conducted on the zsRE QA and NQ datasets show that our method outperforms existing approaches. We find that our method is 4x more effective in terms of the updates/forgets ratio than a fine-tuning baseline.
1401.2686
Kathryn Heal
J\'er\^ome Gilles and Kathryn Heal
A parameterless scale-space approach to find meaningful modes in histograms - Application to image and spectrum segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an algorithm to automatically detect meaningful modes in a histogram. The proposed method is based on the behavior of local minima in a scale-space representation. We show that the detection of such meaningful modes is equivalent to a two-class clustering problem on the lengths of the minima scale-space curves. The algorithm is easy to implement, fast, and does not require any parameters. We present several results on histogram and spectrum segmentation, grayscale image segmentation and color image reduction.
[ { "created": "Mon, 13 Jan 2014 00:19:34 GMT", "version": "v1" } ]
2014-01-14
[ [ "Gilles", "Jérôme", "" ], [ "Heal", "Kathryn", "" ] ]
In this paper, we present an algorithm to automatically detect meaningful modes in a histogram. The proposed method is based on the behavior of local minima in a scale-space representation. We show that the detection of such meaningful modes is equivalent to a two-class clustering problem on the lengths of the minima scale-space curves. The algorithm is easy to implement, fast, and does not require any parameters. We present several results on histogram and spectrum segmentation, grayscale image segmentation and color image reduction.
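The core quantity, how long each local minimum of the histogram survives across smoothing scales, can be sketched as follows. This is a simplification that tracks minima by bin position rather than following their curves through scale-space, and the Gaussian smoothing helper is an assumption of this sketch:

```python
import numpy as np

def gaussian_smooth(h, sigma):
    # Normalized Gaussian kernel, applied with edge padding so output keeps size.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.convolve(np.pad(h, radius, mode="edge"), k, mode="valid")

def minima_lifetimes(hist, scales):
    """For each bin position, count over how many scales it remains a strict
    local minimum of the smoothed histogram. These 'curve lengths' are what
    a two-class clustering step would then split into meaningful vs spurious
    mode separators."""
    lifetimes = {}
    for sigma in scales:
        s = gaussian_smooth(np.asarray(hist, dtype=float), sigma)
        interior = np.flatnonzero((s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])) + 1
        for i in interior:
            lifetimes[int(i)] = lifetimes.get(int(i), 0) + 1
    return lifetimes
```

On a bimodal histogram, the valley between the two modes persists at every scale, while noise-induced minima die out quickly, so their lifetimes separate cleanly into two classes.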
0811.0063
Andrej Dujella
Andrej Dujella
A variant of Wiener's attack on RSA
9 pages
Computing 85 (2009), 77-83
10.1007/s00607-009-0037-8
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wiener's attack is a well-known polynomial-time attack on a RSA cryptosystem with small secret decryption exponent d, which works if d<n^{0.25}, where n=pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n,e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. They all have the run-time complexity (at least) O(D^2), where d=Dn^{0.25}. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |\alpha - p/q| < c/q^2, and "meet-in-the-middle" variant for testing the candidates (of the form rq_{m+1} + sq_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log(D)) (with the space complexity O(D)).
[ { "created": "Sat, 1 Nov 2008 07:08:23 GMT", "version": "v1" } ]
2021-08-30
[ [ "Dujella", "Andrej", "" ] ]
Wiener's attack is a well-known polynomial-time attack on a RSA cryptosystem with small secret decryption exponent d, which works if d<n^{0.25}, where n=pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n,e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. They all have the run-time complexity (at least) O(D^2), where d=Dn^{0.25}. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |\alpha - p/q| < c/q^2, and "meet-in-the-middle" variant for testing the candidates (of the form rq_{m+1} + sq_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log(D)) (with the space complexity O(D)).
1911.06009
Hideaki Hayashi D.Eng.
Hideaki Hayashi, Taro Shibanoki, Keisuke Shima, Yuichi Kurita and Toshio Tsuji
A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis
Published in IEEE Transactions on Neural Networks and Learning Systems
IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No.12, pp. 3021-3033, 2015
10.1109/TNNLS.2015.2400448
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and EEG signals in the experiments conducted during the study.
[ { "created": "Thu, 14 Nov 2019 09:48:41 GMT", "version": "v1" } ]
2019-11-15
[ [ "Hayashi", "Hideaki", "" ], [ "Shibanoki", "Taro", "" ], [ "Shima", "Keisuke", "" ], [ "Kurita", "Yuichi", "" ], [ "Tsuji", "Toshio", "" ] ]
This paper proposes a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and EEG signals in the experiments conducted during the study.
2012.09916
Jordan Samhi
Jordan Samhi, Alexandre Bartel, Tegawend\'e F. Bissyand\'e, Jacques Klein
RAICC: Revealing Atypical Inter-Component Communication in Android Apps
In the proceedings of the 43rd International Conference on Software Engineering 2021 (ICSE 2021)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inter-Component Communication (ICC) is a key mechanism in Android. It enables developers to compose rich functionalities and explore reuse within and across apps. Unfortunately, as reported by a large body of literature, ICC is rather "complex and largely unconstrained", leaving room for a lack of precision in app modeling. To address the challenge of tracking ICCs within apps, state-of-the-art static approaches such as Epicc, IccTA and Amandroid have focused on the documented framework ICC methods (e.g., startActivity) to build their approaches. In this work we show that the ICC models inferred by these state-of-the-art tools may actually be incomplete: the framework provides other, atypical ways of performing ICCs. To address this limitation, we propose RAICC, a static approach for modeling new ICC links and thus boosting previous analysis tasks such as ICC vulnerability detection, privacy leak detection, and malware detection. We have evaluated RAICC on 20 benchmark apps, demonstrating that it improves the precision and recall of leaks uncovered by state-of-the-art tools. We have also performed a large empirical investigation showing that atypical ICC methods are widely used in Android apps, although not necessarily for data transfer. We also show that RAICC increases the number of ICC links found by 61.6% on a dataset of real-world malicious apps, and that RAICC enables the detection of new ICC vulnerabilities.
[ { "created": "Thu, 17 Dec 2020 20:20:20 GMT", "version": "v1" }, { "created": "Fri, 15 Jan 2021 09:41:05 GMT", "version": "v2" } ]
2021-01-18
[ [ "Samhi", "Jordan", "" ], [ "Bartel", "Alexandre", "" ], [ "Bissyandé", "Tegawendé F.", "" ], [ "Klein", "Jacques", "" ] ]
Inter-Component Communication (ICC) is a key mechanism in Android. It enables developers to compose rich functionalities and explore reuse within and across apps. Unfortunately, as reported by a large body of literature, ICC is rather "complex and largely unconstrained", leaving room for a lack of precision in app modeling. To address the challenge of tracking ICCs within apps, state-of-the-art static approaches such as Epicc, IccTA and Amandroid have focused on the documented framework ICC methods (e.g., startActivity) to build their approaches. In this work we show that the ICC models inferred by these state-of-the-art tools may actually be incomplete: the framework provides other, atypical ways of performing ICCs. To address this limitation, we propose RAICC, a static approach for modeling new ICC links and thus boosting previous analysis tasks such as ICC vulnerability detection, privacy leak detection, and malware detection. We have evaluated RAICC on 20 benchmark apps, demonstrating that it improves the precision and recall of leaks uncovered by state-of-the-art tools. We have also performed a large empirical investigation showing that atypical ICC methods are widely used in Android apps, although not necessarily for data transfer. We also show that RAICC increases the number of ICC links found by 61.6% on a dataset of real-world malicious apps, and that RAICC enables the detection of new ICC vulnerabilities.
1609.09743
Antoine Deleforge
Antoine Deleforge (PANAMA), Florence Forbes (MISTIS)
Rectified binaural ratio: A complex T-distributed feature for robust sound localization
European Signal Processing Conference, Aug 2016, Budapest, Hungary. Proceedings of the 24th European Signal Processing Conference (EUSIPCO), 2016, 2016
null
null
null
cs.SD stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing methods in binaural sound source localization rely on some kind of aggregation of phase- and level-difference cues in the time-frequency plane. While different aggregation schemes exist, they are often heuristic and suffer in adverse noise conditions. In this paper, we introduce the rectified binaural ratio as a new feature for sound source localization. We show that for Gaussian-process point source signals corrupted by stationary Gaussian noise, this ratio follows a complex t-distribution with explicit parameters. This new formulation provides a principled and statistically sound way to aggregate binaural features in the presence of noise. We subsequently derive two simple and efficient methods for robust relative transfer function and time-delay estimation. Experiments on heavily corrupted simulated and speech signals demonstrate the robustness of the proposed scheme.
[ { "created": "Fri, 30 Sep 2016 14:15:46 GMT", "version": "v1" } ]
2016-10-03
[ [ "Deleforge", "Antoine", "", "PANAMA" ], [ "Forbes", "Florence", "", "MISTIS" ] ]
Most existing methods in binaural sound source localization rely on some kind of aggregation of phase- and level-difference cues in the time-frequency plane. While different aggregation schemes exist, they are often heuristic and suffer in adverse noise conditions. In this paper, we introduce the rectified binaural ratio as a new feature for sound source localization. We show that for Gaussian-process point source signals corrupted by stationary Gaussian noise, this ratio follows a complex t-distribution with explicit parameters. This new formulation provides a principled and statistically sound way to aggregate binaural features in the presence of noise. We subsequently derive two simple and efficient methods for robust relative transfer function and time-delay estimation. Experiments on heavily corrupted simulated and speech signals demonstrate the robustness of the proposed scheme.
2303.06532
Qi Zhao
Qi Zhao, Qiqi Duan, Bai Yan, Shi Cheng, Yuhui Shi
Automated Design of Metaheuristic Algorithms: A Survey
null
Transactions on Machine Learning Research, 2024, https://openreview.net/forum?id=qhtHsvF5zj
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metaheuristics have gained great success in academia and practice because their search logic can be applied to any problem with available solution representation, solution quality evaluation, and certain notions of locality. Manually designing metaheuristic algorithms for solving a target problem is criticized for being laborious, error-prone, and requiring intensive specialized knowledge. This gives rise to increasing interest in automated design of metaheuristic algorithms. With computing power to fully explore potential design choices, the automated design could reach and even surpass human-level design and could make high-performance algorithms accessible to a much wider range of researchers and practitioners. This paper presents a broad picture of automated design of metaheuristic algorithms, by conducting a survey on the common grounds and representative techniques in terms of design space, design strategies, performance evaluation strategies, and target problems in this field.
[ { "created": "Sun, 12 Mar 2023 01:20:49 GMT", "version": "v1" }, { "created": "Mon, 13 Nov 2023 09:39:30 GMT", "version": "v2" }, { "created": "Wed, 21 Feb 2024 08:15:58 GMT", "version": "v3" } ]
2024-02-22
[ [ "Zhao", "Qi", "" ], [ "Duan", "Qiqi", "" ], [ "Yan", "Bai", "" ], [ "Cheng", "Shi", "" ], [ "Shi", "Yuhui", "" ] ]
Metaheuristics have gained great success in academia and practice because their search logic can be applied to any problem with available solution representation, solution quality evaluation, and certain notions of locality. Manually designing metaheuristic algorithms for solving a target problem is criticized for being laborious, error-prone, and requiring intensive specialized knowledge. This gives rise to increasing interest in automated design of metaheuristic algorithms. With computing power to fully explore potential design choices, the automated design could reach and even surpass human-level design and could make high-performance algorithms accessible to a much wider range of researchers and practitioners. This paper presents a broad picture of automated design of metaheuristic algorithms, by conducting a survey on the common grounds and representative techniques in terms of design space, design strategies, performance evaluation strategies, and target problems in this field.
2408.02520
Pieter Delobelle
Christoph Rauchegger, Sonja Mei Wang, Pieter Delobelle
OneLove beyond the field -- A few-shot pipeline for topic and sentiment analysis during the FIFA World Cup in Qatar
Accepted at KONVENS 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The FIFA World Cup in Qatar was discussed extensively in the news and on social media. Due to news reports with allegations of human rights violations, there were calls to boycott it. Wearing a OneLove armband was part of a planned protest activity. Controversy around the armband arose when FIFA threatened to sanction captains who wear it. To understand what topics Twitter users tweeted about and what the opinion of German Twitter users was towards the OneLove armband, we performed an analysis of German tweets published during the World Cup using in-context learning with LLMs. We validated the labels against human annotations. We found that Twitter users initially discussed the armband's impact, LGBT rights, and politics; after the ban, the conversation shifted towards politics in sports in general, accompanied by a subtle shift in sentiment towards neutrality. Our evaluation serves as a framework for future research to explore the impact of sports activism and evolving public sentiment. This is especially useful in settings where labeling datasets for specific opinions is unfeasible, such as when events are unfolding.
[ { "created": "Mon, 5 Aug 2024 14:40:40 GMT", "version": "v1" } ]
2024-08-06
[ [ "Rauchegger", "Christoph", "" ], [ "Wang", "Sonja Mei", "" ], [ "Delobelle", "Pieter", "" ] ]
The FIFA World Cup in Qatar was discussed extensively in the news and on social media. Due to news reports with allegations of human rights violations, there were calls to boycott it. Wearing a OneLove armband was part of a planned protest activity. Controversy around the armband arose when FIFA threatened to sanction captains who wear it. To understand what topics Twitter users tweeted about and what the opinion of German Twitter users was towards the OneLove armband, we performed an analysis of German tweets published during the World Cup using in-context learning with LLMs. We validated the labels against human annotations. We found that Twitter users initially discussed the armband's impact, LGBT rights, and politics; after the ban, the conversation shifted towards politics in sports in general, accompanied by a subtle shift in sentiment towards neutrality. Our evaluation serves as a framework for future research to explore the impact of sports activism and evolving public sentiment. This is especially useful in settings where labeling datasets for specific opinions is unfeasible, such as when events are unfolding.
1401.7702
Benjamin Miller
Benjamin A. Miller, Michelle S. Beard, Patrick J. Wolfe, and Nadya T. Bliss
A Spectral Framework for Anomalous Subgraph Detection
In submission to the IEEE, 16 pages, 8 figures
IEEE Trans. Signal Process. 63 (2015) 4191-4206
10.1109/TSP.2015.2437841
null
cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A wide variety of application domains are concerned with data consisting of entities and their relationships or connections, formally represented as graphs. Within these diverse application areas, a common problem of interest is the detection of a subset of entities whose connectivity is anomalous with respect to the rest of the data. While the detection of such anomalous subgraphs has received a substantial amount of attention, no application-agnostic framework exists for analysis of signal detectability in graph-based data. In this paper, we describe a framework that enables such analysis using the principal eigenspace of a graph's residuals matrix, commonly called the modularity matrix in community detection. Leveraging this analytical tool, we show that the framework has a natural power metric in the spectral norm of the anomalous subgraph's adjacency matrix (signal power) and of the background graph's residuals matrix (noise power). We propose several algorithms based on spectral properties of the residuals matrix, with more computationally expensive techniques providing greater detection power. Detection and identification performance are presented for a number of signal and noise models, including clusters and bipartite foregrounds embedded into simple random backgrounds as well as graphs with community structure and realistic degree distributions. The trends observed verify intuition gleaned from other signal processing areas, such as greater detection power when the signal is embedded within a less active portion of the background. We demonstrate the utility of the proposed techniques in detecting small, highly anomalous subgraphs in real graphs derived from Internet traffic and product co-purchases.
[ { "created": "Wed, 29 Jan 2014 23:39:39 GMT", "version": "v1" }, { "created": "Wed, 22 Oct 2014 04:01:00 GMT", "version": "v2" } ]
2016-09-06
[ [ "Miller", "Benjamin A.", "" ], [ "Beard", "Michelle S.", "" ], [ "Wolfe", "Patrick J.", "" ], [ "Bliss", "Nadya T.", "" ] ]
A wide variety of application domains are concerned with data consisting of entities and their relationships or connections, formally represented as graphs. Within these diverse application areas, a common problem of interest is the detection of a subset of entities whose connectivity is anomalous with respect to the rest of the data. While the detection of such anomalous subgraphs has received a substantial amount of attention, no application-agnostic framework exists for analysis of signal detectability in graph-based data. In this paper, we describe a framework that enables such analysis using the principal eigenspace of a graph's residuals matrix, commonly called the modularity matrix in community detection. Leveraging this analytical tool, we show that the framework has a natural power metric in the spectral norm of the anomalous subgraph's adjacency matrix (signal power) and of the background graph's residuals matrix (noise power). We propose several algorithms based on spectral properties of the residuals matrix, with more computationally expensive techniques providing greater detection power. Detection and identification performance are presented for a number of signal and noise models, including clusters and bipartite foregrounds embedded into simple random backgrounds as well as graphs with community structure and realistic degree distributions. The trends observed verify intuition gleaned from other signal processing areas, such as greater detection power when the signal is embedded within a less active portion of the background. We demonstrate the utility of the proposed techniques in detecting small, highly anomalous subgraphs in real graphs derived from Internet traffic and product co-purchases.
1810.11404
Barbara K\"onig
Paolo Baldan, Barbara K\"onig, Tommaso Padoan, Christina Mika-Michalski
Fixpoint Games on Continuous Lattices
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many analysis and verification tasks, such as static program analysis and model-checking for temporal logics, reduce to the solution of systems of equations over suitable lattices. Inspired by recent work on lattice-theoretic progress measures, we develop a game-theoretical approach to the solution of systems of monotone equations over lattices, where for each single equation either the least or greatest solution is taken. A simple parity game, referred to as fixpoint game, is defined that provides a correct and complete characterisation of the solution of equation systems over continuous lattices, a quite general class of lattices widely used in semantics. For powerset lattices the fixpoint game is intimately connected with classical parity games for $\mu$-calculus model-checking, whose solution can exploit as a key tool Jurdzi\'nski's small progress measures. We show how the notion of progress measure can be naturally generalised to fixpoint games over continuous lattices and we prove the existence of small progress measures. Our results lead to a constructive formulation of progress measures as (least) fixpoints. We refine this characterisation by introducing the notion of selection that allows one to constrain the plays in the parity game, enabling an effective (and possibly efficient) solution of the game, and thus of the associated verification problem. We also propose a logic for specifying the moves of the existential player that can be used to systematically derive simplified equations for efficiently computing progress measures. We discuss potential applications to the model-checking of latticed $\mu$-calculi and to the solution of fixpoint equation systems over the reals.
[ { "created": "Fri, 26 Oct 2018 16:04:06 GMT", "version": "v1" }, { "created": "Mon, 19 Apr 2021 11:20:25 GMT", "version": "v2" } ]
2021-04-20
[ [ "Baldan", "Paolo", "" ], [ "König", "Barbara", "" ], [ "Padoan", "Tommaso", "" ], [ "Mika-Michalski", "Christina", "" ] ]
Many analysis and verification tasks, such as static program analysis and model-checking for temporal logics, reduce to the solution of systems of equations over suitable lattices. Inspired by recent work on lattice-theoretic progress measures, we develop a game-theoretical approach to the solution of systems of monotone equations over lattices, where for each single equation either the least or greatest solution is taken. A simple parity game, referred to as fixpoint game, is defined that provides a correct and complete characterisation of the solution of equation systems over continuous lattices, a quite general class of lattices widely used in semantics. For powerset lattices the fixpoint game is intimately connected with classical parity games for $\mu$-calculus model-checking, whose solution can exploit as a key tool Jurdzi\'nski's small progress measures. We show how the notion of progress measure can be naturally generalised to fixpoint games over continuous lattices and we prove the existence of small progress measures. Our results lead to a constructive formulation of progress measures as (least) fixpoints. We refine this characterisation by introducing the notion of selection that allows one to constrain the plays in the parity game, enabling an effective (and possibly efficient) solution of the game, and thus of the associated verification problem. We also propose a logic for specifying the moves of the existential player that can be used to systematically derive simplified equations for efficiently computing progress measures. We discuss potential applications to the model-checking of latticed $\mu$-calculi and to the solution of fixpoint equation systems over the reals.