Dataset schema (column name, value type, observed range across records):

  id              string   9–10 chars
  submitter       string   1–64 chars
  authors         string   4–20.7k chars
  title           string   4–246 chars
  comments        string   1–523 chars
  journal-ref     string   4–404 chars
  doi             string   11–153 chars
  report-no       string   2–254 chars
  categories      string   5–98 chars
  license         string   9 distinct values
  orig_abstract   string   14–3.35k chars
  versions        list     1–60 items
  update_date     string   10 chars
  authors_parsed  list     1–1.35k items
  abstract        string   11–3.34k chars
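The listing above is the column schema of the metadata dump. A record with these fields could be loaded, for instance, with the Hugging Face `datasets` library; the dataset path below is a placeholder, since the dump's actual name is not given here.

```python
# Minimal sketch; "user/arxiv-cs-metadata" is a hypothetical placeholder path.
# Requires: pip install datasets
from datasets import load_dataset

ds = load_dataset("user/arxiv-cs-metadata", split="train")
print(ds.column_names)                      # id, submitter, authors, title, ...
rec = ds[0]
print(rec["id"], "-", rec["title"])
print(len(rec["versions"]), "version(s); last update:", rec["update_date"])
```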
1510.00651
Mark Scanlon
Jason Farina, M-Tahar Kechadi and Mark Scanlon
Project Maelstrom: Forensic Analysis of the BitTorrent-Powered Browser
null
Journal of Digital Forensics, Security and Law (Proc. of 10th International Conference on Systematic Approaches to Digital Forensic Engineering, SADFE 2015)
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
In April 2015, BitTorrent Inc. released their distributed peer-to-peer powered browser, Project Maelstrom, into public beta. The browser facilitates a new alternative website distribution paradigm to the traditional HTTP-based, client-server model. This decentralised web is powered by each of the visitors accessing each Maelstrom hosted website. Each user shares their copy of the website's source code and multimedia content with new visitors. As a result, a Maelstrom hosted website cannot be taken offline by law enforcement or any other parties. Due to this open distribution model, a number of interesting censorship, security and privacy considerations are raised. This paper explores the application, its protocol, sharing Maelstrom content and its new visitor powered "web-hosting" paradigm.
[ { "created": "Fri, 2 Oct 2015 17:25:27 GMT", "version": "v1" } ]
2015-10-05
[ [ "Farina", "Jason", "" ], [ "Kechadi", "M-Tahar", "" ], [ "Scanlon", "Mark", "" ] ]
In April 2015, BitTorrent Inc. released their distributed peer-to-peer powered browser, Project Maelstrom, into public beta. The browser facilitates a new alternative website distribution paradigm to the traditional HTTP-based, client-server model. This decentralised web is powered by each of the visitors accessing each Maelstrom hosted website. Each user shares their copy of the website's source code and multimedia content with new visitors. As a result, a Maelstrom hosted website cannot be taken offline by law enforcement or any other parties. Due to this open distribution model, a number of interesting censorship, security and privacy considerations are raised. This paper explores the application, its protocol, sharing Maelstrom content and its new visitor powered "web-hosting" paradigm.
2406.06236
Tahira Shehzadi
Talha Uddin Sheikh, Tahira Shehzadi, Khurram Azeem Hashmi, Didier Stricker, Muhammad Zeshan Afzal
UnSupDLA: Towards Unsupervised Document Layout Analysis
ICDAR 2024 - Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Document layout analysis is a key area in document research, involving techniques like text mining and visual analysis. Despite various methods developed to tackle layout analysis, a critical but frequently overlooked problem is the scarcity of labeled data needed for analyses. With the rise of internet use, an overwhelming number of documents are now available online, making the process of accurately labeling them for research purposes increasingly challenging and labor-intensive. Moreover, the diversity of documents online presents a unique set of challenges in maintaining the quality and consistency of these labels, further complicating document layout analysis in the digital era. To address this, we employ a vision-based approach for analyzing document layouts designed to train a network without labels. Instead, we focus on pre-training, initially generating simple object masks from the unlabeled document images. These masks are then used to train a detector, enhancing object detection and segmentation performance. The model's effectiveness is further amplified through several unsupervised training iterations, continuously refining its performance. This approach significantly advances document layout analysis, particularly precision and efficiency, without labels.
[ { "created": "Mon, 10 Jun 2024 13:06:28 GMT", "version": "v1" } ]
2024-06-11
[ [ "Sheikh", "Talha Uddin", "" ], [ "Shehzadi", "Tahira", "" ], [ "Hashmi", "Khurram Azeem", "" ], [ "Stricker", "Didier", "" ], [ "Afzal", "Muhammad Zeshan", "" ] ]
Document layout analysis is a key area in document research, involving techniques like text mining and visual analysis. Despite various methods developed to tackle layout analysis, a critical but frequently overlooked problem is the scarcity of labeled data needed for analyses. With the rise of internet use, an overwhelming number of documents are now available online, making the process of accurately labeling them for research purposes increasingly challenging and labor-intensive. Moreover, the diversity of documents online presents a unique set of challenges in maintaining the quality and consistency of these labels, further complicating document layout analysis in the digital era. To address this, we employ a vision-based approach for analyzing document layouts designed to train a network without labels. Instead, we focus on pre-training, initially generating simple object masks from the unlabeled document images. These masks are then used to train a detector, enhancing object detection and segmentation performance. The model's effectiveness is further amplified through several unsupervised training iterations, continuously refining its performance. This approach significantly advances document layout analysis, particularly precision and efficiency, without labels.
2312.16697
He Zhang
He Zhang, Robin Ananda, Xinyi Fu, Zhe Sun, Xiaoyu Wang, Keqi Chen, John M. Carroll
Multi-channel Sensor Network Construction, Data Fusion and Challenges for Smart Home
8 pages, accepted by CHCHI2023
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Both sensor networks and data fusion are essential foundations for developing the smart home Internet of Things (IoT) and related fields. We propose a multi-channel sensor network construction method involving hardware, acquisition, and synchronization in the smart home environment and a smart home data fusion method (SHDFM) for multi-modal data (position, gait, voice, pose, facial expression, temperature, and humidity) generated in the smart home environment to address the configuration of a multi-channel sensor network, improve the quality and efficiency of collecting data on various human activities and the environment, and reduce the difficulty of multi-modal data fusion in the smart home. SHDFM contains 5 levels, with inputs and outputs as criteria to provide recommendations for multi-modal data fusion strategies in the smart home. To validate our method, we built a real experimental environment - a physical setup in a home-like scenario where the multi-channel sensor network and data fusion techniques were deployed and evaluated. The acceptance and testing results show that the proposed construction and data fusion methods can be applied to the examples with high robustness, replicability, and scalability. In addition, we discuss how smart homes with multi-channel sensor networks can support digital twins.
[ { "created": "Wed, 27 Dec 2023 19:30:43 GMT", "version": "v1" } ]
2023-12-29
[ [ "Zhang", "He", "" ], [ "Ananda", "Robin", "" ], [ "Fu", "Xinyi", "" ], [ "Sun", "Zhe", "" ], [ "Wang", "Xiaoyu", "" ], [ "Chen", "Keqi", "" ], [ "Carroll", "John M.", "" ] ]
Both sensor networks and data fusion are essential foundations for developing the smart home Internet of Things (IoT) and related fields. We propose a multi-channel sensor network construction method involving hardware, acquisition, and synchronization in the smart home environment and a smart home data fusion method (SHDFM) for multi-modal data (position, gait, voice, pose, facial expression, temperature, and humidity) generated in the smart home environment to address the configuration of a multi-channel sensor network, improve the quality and efficiency of collecting data on various human activities and the environment, and reduce the difficulty of multi-modal data fusion in the smart home. SHDFM contains 5 levels, with inputs and outputs as criteria to provide recommendations for multi-modal data fusion strategies in the smart home. To validate our method, we built a real experimental environment - a physical setup in a home-like scenario where the multi-channel sensor network and data fusion techniques were deployed and evaluated. The acceptance and testing results show that the proposed construction and data fusion methods can be applied to the examples with high robustness, replicability, and scalability. In addition, we discuss how smart homes with multi-channel sensor networks can support digital twins.
1312.1961
Christian Lavault
Marc Bui (CHART), Franck Butelle (LIPN), Christian Lavault (LIPN)
A Distributed Algorithm for Constructing a Minimum Diameter Spanning Tree
Comments: 11 pages LaTeX, 2 figures; International Journal with referees article; New version (full paper design): results added in Section 2.2 and 2.2; typos removed
Journal of Parallel and Distributed Computing 64, 5 (2004) 571-577
null
null
cs.DC cs.DS cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new algorithm that solves the problem of distributively finding a minimum diameter spanning tree of any (non-negatively) real-weighted graph $G = (V,E,\omega)$. As an intermediate step, we use a new, fast, linear-time all-pairs shortest paths distributed algorithm to find an absolute center of $G$. The resulting distributed algorithm is asynchronous; it works for named asynchronous arbitrary networks and achieves $\mathcal{O}(|V|)$ time complexity and $\mathcal{O}\left(|V|\,|E|\right)$ message complexity.
[ { "created": "Fri, 6 Dec 2013 19:04:02 GMT", "version": "v1" }, { "created": "Wed, 11 Dec 2013 08:09:33 GMT", "version": "v2" } ]
2013-12-12
[ [ "Bui", "Marc", "", "CHART" ], [ "Butelle", "Franck", "", "LIPN" ], [ "Lavault", "Christian", "", "LIPN" ] ]
We present a new algorithm that solves the problem of distributively finding a minimum diameter spanning tree of any (non-negatively) real-weighted graph $G = (V,E,\omega)$. As an intermediate step, we use a new, fast, linear-time all-pairs shortest paths distributed algorithm to find an absolute center of $G$. The resulting distributed algorithm is asynchronous; it works for named asynchronous arbitrary networks and achieves $\mathcal{O}(|V|)$ time complexity and $\mathcal{O}\left(|V|\,|E|\right)$ message complexity.
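As a rough illustration of the idea in this abstract (find a center of the graph via all-pairs shortest paths, then take a shortest-path tree rooted at it), here is a small centralized Python sketch. It restricts the center to a vertex, whereas the paper uses the absolute center (possibly interior to an edge) and a distributed algorithm, so this is an approximation for intuition only; all names are illustrative.

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest paths; returns (dist, parent) dicts."""
    dist = {v: float("inf") for v in graph}
    parent = {src: None}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def vertex_center_spanning_tree(graph):
    """Pick the vertex of minimum eccentricity and return the shortest-path tree
    rooted there (a vertex-restricted stand-in for a minimum diameter spanning tree)."""
    all_dist = {v: dijkstra(graph, v)[0] for v in graph}           # all-pairs shortest paths
    center = min(graph, key=lambda v: max(all_dist[v].values()))   # minimum eccentricity
    _, parent = dijkstra(graph, center)
    edges = [(p, v, graph[p][v]) for v, p in parent.items() if p is not None]
    return center, edges

if __name__ == "__main__":
    g = {
        "a": {"b": 2.0, "c": 5.0},
        "b": {"a": 2.0, "c": 1.0, "d": 4.0},
        "c": {"a": 5.0, "b": 1.0, "d": 2.0},
        "d": {"b": 4.0, "c": 2.0},
    }
    center, tree = vertex_center_spanning_tree(g)
    print("center:", center, "tree edges:", tree)
```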
1906.12087
Zhangheng Li
Zhangheng Li, Jia-Xing Zhong, Jingjia Huang, Tao Zhang, Thomas Li and Ge Li
ARMIN: Towards a More Efficient and Light-weight Recurrent Memory Network
Published in IJCAI 2019
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, memory-augmented neural networks (MANNs) have shown promising power to enhance the memory ability of neural networks for sequential processing tasks. However, previous MANNs suffer from complex memory addressing mechanisms, making them relatively hard to train and causing computational overhead. Moreover, many of them reuse classical RNN structures such as the LSTM for memory processing, causing inefficient exploitation of memory information. In this paper, we introduce a novel MANN, the Auto-addressing and Recurrent Memory Integrating Network (ARMIN), to address these issues. The ARMIN utilizes only the hidden state $h_t$ for automatic memory addressing, and uses a novel RNN cell for refined integration of memory information. Empirical results on a variety of experiments demonstrate that the ARMIN is more light-weight and efficient compared to existing memory networks. Moreover, we demonstrate that the ARMIN can achieve much lower computational overhead than a vanilla LSTM while maintaining similar performance. Code is available at github.com/zoharli/armin.
[ { "created": "Fri, 28 Jun 2019 08:21:49 GMT", "version": "v1" } ]
2019-07-01
[ [ "Li", "Zhangheng", "" ], [ "Zhong", "Jia-Xing", "" ], [ "Huang", "Jingjia", "" ], [ "Zhang", "Tao", "" ], [ "Li", "Thomas", "" ], [ "Li", "Ge", "" ] ]
In recent years, memory-augmented neural networks (MANNs) have shown promising power to enhance the memory ability of neural networks for sequential processing tasks. However, previous MANNs suffer from complex memory addressing mechanisms, making them relatively hard to train and causing computational overhead. Moreover, many of them reuse classical RNN structures such as the LSTM for memory processing, causing inefficient exploitation of memory information. In this paper, we introduce a novel MANN, the Auto-addressing and Recurrent Memory Integrating Network (ARMIN), to address these issues. The ARMIN utilizes only the hidden state $h_t$ for automatic memory addressing, and uses a novel RNN cell for refined integration of memory information. Empirical results on a variety of experiments demonstrate that the ARMIN is more light-weight and efficient compared to existing memory networks. Moreover, we demonstrate that the ARMIN can achieve much lower computational overhead than a vanilla LSTM while maintaining similar performance. Code is available at github.com/zoharli/armin.
2004.02335
David Arnas
David Arnas and Carl Leake and Daniele Mortari
The n-dimensional k-vector and its application to orthogonal range searching
31 pages, 10 figures
Applied Mathematics and Computation, Vol. 372, 2020
10.1016/j.amc.2019.125010
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work focuses on the definition and study of the n-dimensional k-vector, an algorithm devised to perform orthogonal range searching in static databases with multiple dimensions. The methodology first finds the order in which to search the dimensions, and then, performs the search using a modified projection method. In order to determine the dimension order, the algorithm uses the k-vector, a range searching technique for one dimension that identifies the number of elements contained in the searching range. Then, using this information, the algorithm predicts and selects the best approach to deal with each dimension. The algorithm has a worst case complexity of $\mathcal{O}(nd(k/n)^{2/d})$, where $k$ is the number of elements retrieved, $n$ is the number of elements in the database, and $d$ is the number of dimensions of the database. This work includes a detailed description of the methodology as well as a study of the algorithm performance.
[ { "created": "Sun, 5 Apr 2020 22:26:05 GMT", "version": "v1" } ]
2020-04-07
[ [ "Arnas", "David", "" ], [ "Leake", "Carl", "" ], [ "Mortari", "Daniele", "" ] ]
This work focuses on the definition and study of the n-dimensional k-vector, an algorithm devised to perform orthogonal range searching in static databases with multiple dimensions. The methodology first finds the order in which to search the dimensions, and then, performs the search using a modified projection method. In order to determine the dimension order, the algorithm uses the k-vector, a range searching technique for one dimension that identifies the number of elements contained in the searching range. Then, using this information, the algorithm predicts and selects the best approach to deal with each dimension. The algorithm has a worst case complexity of $\mathcal{O}(nd(k/n)^{2/d})$, where $k$ is the number of elements retrieved, $n$ is the number of elements in the database, and $d$ is the number of dimensions of the database. This work includes a detailed description of the methodology as well as a study of the algorithm performance.
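The abstract builds on the classical projection method for orthogonal range searching; the sketch below shows only that baseline (sort one dimension, binary-search the query interval, filter candidates on the remaining dimensions), not the k-vector itself or the paper's dimension-ordering step. Names are illustrative.

```python
import bisect

def build_index(points, dim=0):
    """Sort points by one coordinate so that dimension can be range-searched
    with binary search (the classical projection method)."""
    order = sorted(range(len(points)), key=lambda i: points[i][dim])
    keys = [points[i][dim] for i in order]
    return order, keys

def range_search(points, index, lo, hi, dim=0):
    """Return points whose coordinates lie in [lo[k], hi[k]] for every k."""
    order, keys = index
    start = bisect.bisect_left(keys, lo[dim])
    end = bisect.bisect_right(keys, hi[dim])
    out = []
    for idx in order[start:end]:                 # candidates from the sorted dimension
        p = points[idx]
        if all(lo[k] <= p[k] <= hi[k] for k in range(len(p))):
            out.append(p)                        # filter on the remaining dimensions
    return out

if __name__ == "__main__":
    pts = [(0.1, 0.9), (0.4, 0.2), (0.5, 0.6), (0.8, 0.3)]
    idx = build_index(pts)
    print(range_search(pts, idx, lo=(0.3, 0.1), hi=(0.9, 0.5)))
```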
2402.00357
Yichen Zhu
Xin Liu, Yichen Zhu, Yunshi Lan, Chao Yang, Yu Qiao
Safety of Multimodal Large Language Models on Images and Texts
Accepted at IJCAI2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Attracted by the impressive power of Multimodal Large Language Models (MLLMs), the public is increasingly utilizing them to improve the efficiency of daily work. Nonetheless, the vulnerabilities of MLLMs to unsafe instructions bring huge safety risks when these models are deployed in real-world scenarios. In this paper, we systematically survey current efforts on the evaluation, attack, and defense of MLLMs' safety on images and text. We begin with an overview of MLLMs on images and text and our understanding of safety, which helps readers know the detailed scope of our survey. Then, we review the evaluation datasets and metrics for measuring the safety of MLLMs. Next, we comprehensively present attack and defense techniques related to MLLMs' safety. Finally, we analyze several unsolved issues and discuss promising research directions. The latest papers are continually collected at https://github.com/isXinLiu/MLLM-Safety-Collection.
[ { "created": "Thu, 1 Feb 2024 05:57:10 GMT", "version": "v1" }, { "created": "Sun, 25 Feb 2024 03:20:54 GMT", "version": "v2" }, { "created": "Thu, 20 Jun 2024 15:06:10 GMT", "version": "v3" } ]
2024-06-21
[ [ "Liu", "Xin", "" ], [ "Zhu", "Yichen", "" ], [ "Lan", "Yunshi", "" ], [ "Yang", "Chao", "" ], [ "Qiao", "Yu", "" ] ]
Attracted by the impressive power of Multimodal Large Language Models (MLLMs), the public is increasingly utilizing them to improve the efficiency of daily work. Nonetheless, the vulnerabilities of MLLMs to unsafe instructions bring huge safety risks when these models are deployed in real-world scenarios. In this paper, we systematically survey current efforts on the evaluation, attack, and defense of MLLMs' safety on images and text. We begin with an overview of MLLMs on images and text and our understanding of safety, which helps readers know the detailed scope of our survey. Then, we review the evaluation datasets and metrics for measuring the safety of MLLMs. Next, we comprehensively present attack and defense techniques related to MLLMs' safety. Finally, we analyze several unsolved issues and discuss promising research directions. The latest papers are continually collected at https://github.com/isXinLiu/MLLM-Safety-Collection.
1809.04898
Michele Colledanchise
Michele Colledanchise and Lorenzo Natale
Improving the Parallel Execution of Behavior Trees
null
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
10.1109/IROS.2018.8593504
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behavior Trees (BTs) have become a popular framework for designing controllers of autonomous agents in the computer game and robotics industries. One of the key advantages of BTs lies in their modularity, where independent modules can be composed to create more complex ones. In the classical formulation of BTs, modules can be composed using one of three operators: Sequence, Fallback, and Parallel. The Parallel operator is rarely used despite its strong potential compared to other control architectures such as Finite State Machines. This is due to the fact that concurrent actions may lead to unexpected problems similar to the ones experienced in concurrent programming. In this paper, we introduce Concurrent BTs (CBTs) as a generalization of BTs in which we introduce the notions of progress and resource usage. We show how CBTs allow safe concurrent execution of actions and we analyze the approach from a mathematical standpoint. To illustrate the use of CBTs, we provide a set of use cases in robotics scenarios.
[ { "created": "Thu, 13 Sep 2018 11:58:31 GMT", "version": "v1" } ]
2021-08-25
[ [ "Colledanchise", "Michele", "" ], [ "Natale", "Lorenzo", "" ] ]
Behavior Trees (BTs) have become a popular framework for designing controllers of autonomous agents in the computer game and robotics industries. One of the key advantages of BTs lies in their modularity, where independent modules can be composed to create more complex ones. In the classical formulation of BTs, modules can be composed using one of three operators: Sequence, Fallback, and Parallel. The Parallel operator is rarely used despite its strong potential compared to other control architectures such as Finite State Machines. This is due to the fact that concurrent actions may lead to unexpected problems similar to the ones experienced in concurrent programming. In this paper, we introduce Concurrent BTs (CBTs) as a generalization of BTs in which we introduce the notions of progress and resource usage. We show how CBTs allow safe concurrent execution of actions and we analyze the approach from a mathematical standpoint. To illustrate the use of CBTs, we provide a set of use cases in robotics scenarios.
2107.02308
Joseph Ortiz
Joseph Ortiz, Talfan Evans, Andrew J. Davison
A visual introduction to Gaussian Belief Propagation
See online version of this article: https://gaussianbp.github.io/
null
null
null
cs.AI cs.CV cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
In this article, we present a visual introduction to Gaussian Belief Propagation (GBP), an approximate probabilistic inference algorithm that operates by passing messages between the nodes of arbitrarily structured factor graphs. A special case of loopy belief propagation, GBP updates rely only on local information and will converge independently of the message schedule. Our key argument is that, given recent trends in computing hardware, GBP has the right computational properties to act as a scalable distributed probabilistic inference framework for future machine learning systems.
[ { "created": "Mon, 5 Jul 2021 22:43:27 GMT", "version": "v1" } ]
2021-07-07
[ [ "Ortiz", "Joseph", "" ], [ "Evans", "Talfan", "" ], [ "Davison", "Andrew J.", "" ] ]
In this article, we present a visual introduction to Gaussian Belief Propagation (GBP), an approximate probabilistic inference algorithm that operates by passing messages between the nodes of arbitrarily structured factor graphs. A special case of loopy belief propagation, GBP updates rely only on local information and will converge independently of the message schedule. Our key argument is that, given recent trends in computing hardware, GBP has the right computational properties to act as a scalable distributed probabilistic inference framework for future machine learning systems.
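The abstract describes GBP only at a high level; the following is a minimal sketch (not taken from the article) of Gaussian belief propagation in information form on a three-variable scalar chain, where loopy BP is exact. The factor parameterization and the uniform pairwise precision are illustrative assumptions; the message update is standard Gaussian marginalization via a scalar Schur complement.

```python
import numpy as np  # kept for consistency with typical GBP code; plain floats suffice here

# Chain x0 - x1 - x2 with unary priors and pairwise "smoothness" factors,
# all in information form: factor ~ exp(-0.5 * x' L x + e' x).
priors = [(1.0, 0.0), (0.5, 1.0), (1.0, 4.0)]      # (precision, information) per variable
pair_prec = 2.0                                     # pairwise precision block [[p, -p], [-p, p]]
edges = [(0, 1), (1, 2)]

msgs = {(i, j): (0.0, 0.0) for i, j in edges}       # directed messages, both directions
msgs.update({(j, i): (0.0, 0.0) for i, j in edges})

def neighbors(v):
    return [j for i, j in msgs if i == v]

for _ in range(10):                                 # a few synchronous sweeps (exact on a chain)
    new = {}
    for (i, j) in msgs:
        # Accumulate variable i's prior plus messages from all neighbors except j.
        L, e = priors[i]
        for k in neighbors(i):
            if k != j:
                Lk, ek = msgs[(k, i)]
                L, e = L + Lk, e + ek
        # Combine with the pairwise factor and marginalize x_i (scalar Schur complement).
        A, B, C = pair_prec + L, -pair_prec, pair_prec
        new[(i, j)] = (C - B * B / A, 0.0 - B * e / A)
    msgs = new

for v in range(3):                                  # node beliefs = prior + incoming messages
    L, e = priors[v]
    for k in neighbors(v):
        Lk, ek = msgs[(k, v)]
        L, e = L + Lk, e + ek
    print(f"x{v}: mean={e / L:.3f}, var={1 / L:.3f}")
```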
1012.5929
Joel Goossens
Jo\"el Goossens (1), Patrick Meumeu Yomsi (2) ((1) Brussels University, U.L.B., Brussels, Belgium., (2) F.N.R.S, Belgium.)
Exact Schedulability Test for global-EDF Scheduling of Periodic Hard Real-Time Tasks on Identical Multiprocessors
null
null
null
null
cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider the scheduling problem of hard real-time systems composed of periodic constrained-deadline tasks upon identical multiprocessor platforms. We assume that tasks are scheduled by using the global-EDF scheduler. We establish an exact schedulability test for this scheduler by exploiting on the one hand its predictability property and by providing on the other hand a feasibility interval so that if it is possible to find a valid schedule for all the jobs contained in this interval, then the whole system will be stamped feasible. In addition, we show by means of a counterexample that the feasibility interval, and thus the schedulability test, proposed by Leung [Leung 1989] is incorrect and we show which arguments are actually incorrect.
[ { "created": "Wed, 29 Dec 2010 12:41:13 GMT", "version": "v1" } ]
2010-12-30
[ [ "Goossens", "Joël", "" ], [ "Yomsi", "Patrick Meumeu", "" ] ]
In this paper we consider the scheduling problem of hard real-time systems composed of periodic constrained-deadline tasks upon identical multiprocessor platforms. We assume that tasks are scheduled by using the global-EDF scheduler. We establish an exact schedulability test for this scheduler by exploiting on the one hand its predictability property and by providing on the other hand a feasibility interval so that if it is possible to find a valid schedule for all the jobs contained in this interval, then the whole system will be stamped feasible. In addition, we show by means of a counterexample that the feasibility interval, and thus the schedulability test, proposed by Leung [Leung 1989] is incorrect and we show which arguments are actually incorrect.
2310.06125
William Ravenscroft
William Ravenscroft and Stefan Goetze and Thomas Hain
On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments
Accepted at ASRU Workshop 2023
null
null
null
cs.SD cs.AI cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Speech separation remains an important topic for multi-speaker technology researchers. Convolution augmented transformers (conformers) have performed well for many speech processing tasks but have been under-researched for speech separation. Most recent state-of-the-art (SOTA) separation models have been time-domain audio separation networks (TasNets). A number of successful models have made use of dual-path (DP) networks which sequentially process local and global information. Time domain conformers (TD-Conformers) are an analogue of the DP approach in that they also process local and global context sequentially but have a different time complexity function. It is shown that for realistic shorter signal lengths, conformers are more efficient when controlling for feature dimension. Subsampling layers are proposed to further improve computational efficiency. The best TD-Conformer achieves 14.6 dB and 21.2 dB SISDR improvement on the WHAMR and WSJ0-2Mix benchmarks, respectively.
[ { "created": "Mon, 9 Oct 2023 20:02:11 GMT", "version": "v1" } ]
2023-10-11
[ [ "Ravenscroft", "William", "" ], [ "Goetze", "Stefan", "" ], [ "Hain", "Thomas", "" ] ]
Speech separation remains an important topic for multi-speaker technology researchers. Convolution augmented transformers (conformers) have performed well for many speech processing tasks but have been under-researched for speech separation. Most recent state-of-the-art (SOTA) separation models have been time-domain audio separation networks (TasNets). A number of successful models have made use of dual-path (DP) networks which sequentially process local and global information. Time domain conformers (TD-Conformers) are an analogue of the DP approach in that they also process local and global context sequentially but have a different time complexity function. It is shown that for realistic shorter signal lengths, conformers are more efficient when controlling for feature dimension. Subsampling layers are proposed to further improve computational efficiency. The best TD-Conformer achieves 14.6 dB and 21.2 dB SISDR improvement on the WHAMR and WSJ0-2Mix benchmarks, respectively.
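The abstract reports results as SISDR improvement; as a point of reference, the snippet below is a minimal NumPy sketch of the standard scale-invariant SDR metric (not code from the paper), with the usual mean removal and projection of the estimate onto the reference.

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to obtain the scaled target.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target**2) + eps) / (np.sum(noise**2) + eps))

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 8000)
    clean = np.sin(2 * np.pi * 440 * t)
    noisy = clean + 0.1 * np.random.randn(t.size)
    print(f"SI-SDR of the noisy input: {si_sdr(noisy, clean):.1f} dB")
```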
2401.01752
Zheng Yuan
Zheng Yuan, Jie Zhang, Shiguang Shan
FullLoRA-AT: Efficiently Boosting the Robustness of Pretrained Vision Transformers
10 pages, 2 figures, 6 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the Vision Transformer (ViT) model has gradually become mainstream in various computer vision tasks, and the robustness of the model has received increasing attention. However, existing large models tend to prioritize performance during training, potentially neglecting the robustness, which may lead to serious security concerns. In this paper, we establish a new challenge: exploring how to use a small number of additional parameters for adversarial finetuning to quickly and effectively enhance the adversarial robustness of a standardly trained model. To address this challenge, we develop the novel LNLoRA module, incorporating a learnable layer normalization before the conventional LoRA module, which helps mitigate magnitude differences in parameters between the adversarial and standard training paradigms. Furthermore, we propose the FullLoRA-AT framework by integrating the learnable LNLoRA modules into all key components of ViT-based models while keeping the pretrained model frozen, which can significantly improve the model robustness via adversarial finetuning in a parameter-efficient manner. Extensive experiments on CIFAR-10, CIFAR-100, and Imagenette demonstrate the superiority of our proposed FullLoRA-AT framework. It achieves comparable robustness with full finetuning while only requiring about 5% of the learnable parameters. This also effectively addresses concerns regarding extra model storage space and enormous training time caused by adversarial finetuning.
[ { "created": "Wed, 3 Jan 2024 14:08:39 GMT", "version": "v1" } ]
2024-01-04
[ [ "Yuan", "Zheng", "" ], [ "Zhang", "Jie", "" ], [ "Shan", "Shiguang", "" ] ]
In recent years, the Vision Transformer (ViT) model has gradually become mainstream in various computer vision tasks, and the robustness of the model has received increasing attention. However, existing large models tend to prioritize performance during training, potentially neglecting the robustness, which may lead to serious security concerns. In this paper, we establish a new challenge: exploring how to use a small number of additional parameters for adversarial finetuning to quickly and effectively enhance the adversarial robustness of a standardly trained model. To address this challenge, we develop the novel LNLoRA module, incorporating a learnable layer normalization before the conventional LoRA module, which helps mitigate magnitude differences in parameters between the adversarial and standard training paradigms. Furthermore, we propose the FullLoRA-AT framework by integrating the learnable LNLoRA modules into all key components of ViT-based models while keeping the pretrained model frozen, which can significantly improve the model robustness via adversarial finetuning in a parameter-efficient manner. Extensive experiments on CIFAR-10, CIFAR-100, and Imagenette demonstrate the superiority of our proposed FullLoRA-AT framework. It achieves comparable robustness with full finetuning while only requiring about 5% of the learnable parameters. This also effectively addresses concerns regarding extra model storage space and enormous training time caused by adversarial finetuning.
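One way to read the LNLoRA description ("a learnable layer normalization before the conventional LoRA module") is a frozen linear layer whose low-rank update is computed from a LayerNorm-ed input. The PyTorch sketch below is that interpretation only, not the authors' implementation; the module name, rank, and scaling are illustrative.

```python
import torch
import torch.nn as nn

class LNLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank (LoRA) update whose input first passes
    through a learnable LayerNorm -- one plausible reading of the LNLoRA idea."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # pretrained weights stay frozen
        self.norm = nn.LayerNorm(base.in_features)    # learnable LN before the LoRA branch
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)            # the update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(self.norm(x)))

layer = LNLoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)
```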
2402.18927
Jiayuan Chen
Xiang Chen, Wenjie Zhu, Jiayuan Chen, Tong Zhang, Changyan Yi, Jun Cai
Edge Computing Enabled Real-Time Video Analysis via Adaptive Spatial-Temporal Semantic Filtering
null
null
null
null
cs.CV cs.MM cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel edge computing enabled real-time video analysis system for intelligent visual devices. The proposed system consists of a tracking-assisted object detection module (TAODM) and a region of interest module (ROIM). TAODM adaptively decides whether to process each video frame locally with a tracking algorithm or to offload it to the edge server for inference by an object detection model. ROIM determines each offloaded frame's resolution and detection model configuration to ensure that the analysis results can return in time. TAODM and ROIM interact jointly to filter out repetitive spatial-temporal semantic information to maximize the processing rate while ensuring high video analysis accuracy. Unlike most existing works, this paper investigates real-time video analysis systems where the intelligent visual device connects to the edge server through a wireless network with fluctuating network conditions. We decompose the real-time video analysis problem into offloading decision and configuration selection sub-problems. To solve these two sub-problems, we introduce a double deep Q network (DDQN) based offloading approach and a contextual multi-armed bandit (CMAB) based adaptive configuration selection approach, respectively. A DDQN-CMAB reinforcement learning (DCRL) training framework is further developed to integrate these two approaches to improve the overall video analysis performance. Extensive simulations are conducted to evaluate the performance of the proposed solution and demonstrate its superiority over counterparts.
[ { "created": "Thu, 29 Feb 2024 07:42:03 GMT", "version": "v1" } ]
2024-03-01
[ [ "Chen", "Xiang", "" ], [ "Zhu", "Wenjie", "" ], [ "Chen", "Jiayuan", "" ], [ "Zhang", "Tong", "" ], [ "Yi", "Changyan", "" ], [ "Cai", "Jun", "" ] ]
This paper proposes a novel edge computing enabled real-time video analysis system for intelligent visual devices. The proposed system consists of a tracking-assisted object detection module (TAODM) and a region of interest module (ROIM). TAODM adaptively decides whether to process each video frame locally with a tracking algorithm or to offload it to the edge server for inference by an object detection model. ROIM determines each offloaded frame's resolution and detection model configuration to ensure that the analysis results can return in time. TAODM and ROIM interact jointly to filter out repetitive spatial-temporal semantic information to maximize the processing rate while ensuring high video analysis accuracy. Unlike most existing works, this paper investigates real-time video analysis systems where the intelligent visual device connects to the edge server through a wireless network with fluctuating network conditions. We decompose the real-time video analysis problem into offloading decision and configuration selection sub-problems. To solve these two sub-problems, we introduce a double deep Q network (DDQN) based offloading approach and a contextual multi-armed bandit (CMAB) based adaptive configuration selection approach, respectively. A DDQN-CMAB reinforcement learning (DCRL) training framework is further developed to integrate these two approaches to improve the overall video analysis performance. Extensive simulations are conducted to evaluate the performance of the proposed solution and demonstrate its superiority over counterparts.
2201.13348
Stefan H\"oppner
Stefan H\"oppner, Yves Haas, Matthias Tichy, Katharina Juhnke
Advantages and Disadvantages of (Dedicated) Model Transformation Languages: A Qualitative Interview Study
null
null
10.1007/s10664-022-10194-7
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Model driven development envisages the use of model transformations to evolve models. Model transformation languages, developed for this task, are touted with many benefits over general purpose programming languages. However, a large number of these claims have not yet been substantiated. They are also made without the context necessary to be able to critically assess their merit or build meaningful empirical studies around them. The objective of our work is to elicit the reasoning, influences and background knowledge that lead people to assume benefits or drawbacks of model transformation languages. We conducted a large-scale interview study involving 56 participants from research and industry. Interviewees were presented with claims about model transformation languages and were asked to provide reasons for their assessment thereof. We qualitatively analysed the responses to find factors that influence the properties of model transformation languages as well as explanations as to how exactly they do so. Our interviews show that the general purpose expressiveness of GPLs, the domain specific capabilities of MTLs, as well as tooling all have strong influences on how people view properties of model transformation languages. Moreover, the Choice of MTL, the Use Case for which a transformation should be developed, as well as the Skills of involved stakeholders have a moderating effect on the influences by changing the context to consider. There is a broad body of experience that suggests positive and negative influences for properties of MTLs. Our data suggests that much needs to be done in order to convey the viability of model transformation languages. Efforts to provide more empirical substance need to be undertaken, and lackluster language capabilities and tooling need to be improved upon. We suggest several approaches for this that can be based on the results of the presented study.
[ { "created": "Mon, 31 Jan 2022 16:52:59 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 16:51:40 GMT", "version": "v2" }, { "created": "Mon, 4 Jul 2022 10:42:13 GMT", "version": "v3" } ]
2022-08-19
[ [ "Höppner", "Stefan", "" ], [ "Haas", "Yves", "" ], [ "Tichy", "Matthias", "" ], [ "Juhnke", "Katharina", "" ] ]
Model driven development envisages the use of model transformations to evolve models. Model transformation languages, developed for this task, are touted with many benefits over general purpose programming languages. However, a large number of these claims have not yet been substantiated. They are also made without the context necessary to be able to critically assess their merit or build meaningful empirical studies around them. The objective of our work is to elicit the reasoning, influences and background knowledge that lead people to assume benefits or drawbacks of model transformation languages. We conducted a large-scale interview study involving 56 participants from research and industry. Interviewees were presented with claims about model transformation languages and were asked to provide reasons for their assessment thereof. We qualitatively analysed the responses to find factors that influence the properties of model transformation languages as well as explanations as to how exactly they do so. Our interviews show that the general purpose expressiveness of GPLs, the domain specific capabilities of MTLs, as well as tooling all have strong influences on how people view properties of model transformation languages. Moreover, the Choice of MTL, the Use Case for which a transformation should be developed, as well as the Skills of involved stakeholders have a moderating effect on the influences by changing the context to consider. There is a broad body of experience that suggests positive and negative influences for properties of MTLs. Our data suggests that much needs to be done in order to convey the viability of model transformation languages. Efforts to provide more empirical substance need to be undertaken, and lackluster language capabilities and tooling need to be improved upon. We suggest several approaches for this that can be based on the results of the presented study.
1806.07815
Nicolas Robinson-Garcia
Nicolas Robinson-Garcia, Cassidy R. Sugimoto, Dakota Murray, Alfredo Yegros-Yegros, Vincent Larivière and Rodrigo Costas
Scientific mobility indicators in practice: International mobility profiles at the country level
null
Robinson-Garcia, N. et al. Scientific mobility indicators in practice: International mobility profiles at the country level. El profesional de la información, 27(3), 511-520. doi:10.3145/epi.2018.may.05
10.3145/epi.2018.may.05
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents and describes the methodological opportunities offered by bibliometric data to produce indicators of scientific mobility. Large bibliographic datasets of disambiguated authors and their affiliations allow for the possibility of tracking the affiliation changes of scientists. Using the Web of Science as data source, we analyze the distribution of types of mobile scientists for a selection of countries. We explore the possibility of creating profiles of international mobility at the country level, and discuss potential interpretations and caveats. Five countries (Canada, The Netherlands, South Africa, Spain, and the United States) are used as examples. These profiles enable us to characterize these countries in terms of their strongest links with other countries. This type of analysis reveals circulation among and between countries with strong policy implications.
[ { "created": "Wed, 20 Jun 2018 16:13:37 GMT", "version": "v1" } ]
2018-06-21
[ [ "Robinson-Garcia", "Nicolas", "" ], [ "Sugimoto", "Cassidy R.", "" ], [ "Murray", "Dakota", "" ], [ "Yegros-Yegros", "Alfredo", "" ], [ "Larivière", "Vincent", "" ], [ "Costas", "Rodrigo", "" ] ]
This paper presents and describes the methodological opportunities offered by bibliometric data to produce indicators of scientific mobility. Large bibliographic datasets of disambiguated authors and their affiliations allow for the possibility of tracking the affiliation changes of scientists. Using the Web of Science as data source, we analyze the distribution of types of mobile scientists for a selection of countries. We explore the possibility of creating profiles of international mobility at the country level, and discuss potential interpretations and caveats. Five countries (Canada, The Netherlands, South Africa, Spain, and the United States) are used as examples. These profiles enable us to characterize these countries in terms of their strongest links with other countries. This type of analysis reveals circulation among and between countries with strong policy implications.
2010.01160
Aditi Chaudhary
Aditi Chaudhary, Antonios Anastasopoulos, Adithya Pratapa, David R. Mortensen, Zaid Sheikh, Yulia Tsvetkov, Graham Neubig
Automatic Extraction of Rules Governing Morphological Agreement
Accepted at EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creating a descriptive grammar of a language is an indispensable step for language documentation and preservation. However, at the same time it is a tedious, time-consuming task. In this paper, we take steps towards automating this process by devising an automated framework for extracting a first-pass grammatical specification from raw text in a concise, human- and machine-readable format. We focus on extracting rules describing agreement, a morphosyntactic phenomenon at the core of the grammars of many of the world's languages. We apply our framework to all languages included in the Universal Dependencies project, with promising results. Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data. We confirm this finding with human expert evaluations of the rules that our framework produces, which have an average accuracy of 78%. We release an interface demonstrating the extracted rules at https://neulab.github.io/lase/.
[ { "created": "Fri, 2 Oct 2020 18:31:45 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 03:30:27 GMT", "version": "v2" } ]
2020-10-07
[ [ "Chaudhary", "Aditi", "" ], [ "Anastasopoulos", "Antonios", "" ], [ "Pratapa", "Adithya", "" ], [ "Mortensen", "David R.", "" ], [ "Sheikh", "Zaid", "" ], [ "Tsvetkov", "Yulia", "" ], [ "Neubig", "Graham", "" ] ]
Creating a descriptive grammar of a language is an indispensable step for language documentation and preservation. However, at the same time it is a tedious, time-consuming task. In this paper, we take steps towards automating this process by devising an automated framework for extracting a first-pass grammatical specification from raw text in a concise, human- and machine-readable format. We focus on extracting rules describing agreement, a morphosyntactic phenomenon at the core of the grammars of many of the world's languages. We apply our framework to all languages included in the Universal Dependencies project, with promising results. Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data. We confirm this finding with human expert evaluations of the rules that our framework produces, which have an average accuracy of 78%. We release an interface demonstrating the extracted rules at https://neulab.github.io/lase/.
2305.15725
Fangwei Zhu
Fangwei Zhu, Jifan Yu, Hailong Jin, Juanzi Li, Lei Hou, Zhifang Sui
Learn to Not Link: Exploring NIL Prediction in Entity Linking
ACL Findings 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Entity linking models have achieved significant success via utilizing pretrained language models to capture semantic features. However, the NIL prediction problem, which aims to identify mentions without a corresponding entity in the knowledge base, has received insufficient attention. We categorize mentions linking to NIL into Missing Entity and Non-Entity Phrase, and propose an entity linking dataset NEL that focuses on the NIL prediction problem. NEL takes ambiguous entities as seeds, collects relevant mention context in the Wikipedia corpus, and ensures the presence of mentions linking to NIL by human annotation and entity masking. We conduct a series of experiments with the widely used bi-encoder and cross-encoder entity linking models; the results show that both types of NIL mentions in training data have a significant influence on the accuracy of NIL prediction. Our code and dataset can be accessed at https://github.com/solitaryzero/NIL_EL
[ { "created": "Thu, 25 May 2023 05:12:33 GMT", "version": "v1" } ]
2023-05-26
[ [ "Zhu", "Fangwei", "" ], [ "Yu", "Jifan", "" ], [ "Jin", "Hailong", "" ], [ "Li", "Juanzi", "" ], [ "Hou", "Lei", "" ], [ "Sui", "Zhifang", "" ] ]
Entity linking models have achieved significant success via utilizing pretrained language models to capture semantic features. However, the NIL prediction problem, which aims to identify mentions without a corresponding entity in the knowledge base, has received insufficient attention. We categorize mentions linking to NIL into Missing Entity and Non-Entity Phrase, and propose an entity linking dataset NEL that focuses on the NIL prediction problem. NEL takes ambiguous entities as seeds, collects relevant mention context in the Wikipedia corpus, and ensures the presence of mentions linking to NIL by human annotation and entity masking. We conduct a series of experiments with the widely used bi-encoder and cross-encoder entity linking models; the results show that both types of NIL mentions in training data have a significant influence on the accuracy of NIL prediction. Our code and dataset can be accessed at https://github.com/solitaryzero/NIL_EL
1506.06006
Srinivas S S Kruthiventi
Srinivas S. S. Kruthiventi and R. Venkatesh Babu
Crowd Flow Segmentation in Compressed Domain using CRF
In IEEE International Conference on Image Processing (ICIP), 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of CRF. These oriented motion segments are recursively merged based on gradient across their boundaries to obtain the final flow segments. This work in compressed domain can be easily extended to pixel domain by substituting motion vectors with motion based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset and its superior performance in both accuracy and computational time are demonstrated through quantitative results.
[ { "created": "Fri, 19 Jun 2015 14:01:24 GMT", "version": "v1" } ]
2015-06-22
[ [ "Kruthiventi", "Srinivas S. S.", "" ], [ "Babu", "R. Venkatesh", "" ] ]
Crowd flow segmentation is an important step in many video surveillance tasks. In this work, we propose an algorithm for segmenting flows in H.264 compressed videos in a completely unsupervised manner. Our algorithm works on motion vectors which can be obtained by partially decoding the compressed video without extracting any additional features. Our approach is based on modelling the motion vector field as a Conditional Random Field (CRF) and obtaining oriented motion segments by finding the optimal labelling which minimises the global energy of CRF. These oriented motion segments are recursively merged based on gradient across their boundaries to obtain the final flow segments. This work in compressed domain can be easily extended to pixel domain by substituting motion vectors with motion based features like optical flow. The proposed algorithm is experimentally evaluated on a standard crowd flow dataset and its superior performance in both accuracy and computational time are demonstrated through quantitative results.
1305.3671
Marcus Hutter
Marcus Hutter
Sparse Adaptive Dirichlet-Multinomial-like Processes
32 LaTeX pages, 5 figures
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online estimation and modelling of i.i.d. data for short sequences over large or complex "alphabets" is a ubiquitous (sub)problem in machine learning, information theory, data compression, statistical language processing, and document analysis. The Dirichlet-Multinomial distribution (also called Polya urn scheme) and extensions thereof are widely applied for online i.i.d. estimation. Good a-priori choices for the parameters in this regime are difficult to obtain though. I derive an optimal adaptive choice for the main parameter via tight, data-dependent redundancy bounds for a related model. The 1-line recommendation is to set the 'total mass' = 'precision' = 'concentration' parameter to m/2ln[(n+1)/m], where n is the (past) sample size and m the number of different symbols observed (so far). The resulting estimator (i) is simple, (ii) online, (iii) fast, (iv) performs well for all m, small, middle and large, (v) is independent of the base alphabet size, (vi) non-occurring symbols induce no redundancy, (vii) the constant sequence has constant redundancy, (viii) symbols that appear only finitely often have bounded/constant contribution to the redundancy, (ix) is competitive with (slow) Bayesian mixing over all sub-alphabets.
[ { "created": "Thu, 16 May 2013 02:35:42 GMT", "version": "v1" } ]
2013-05-17
[ [ "Hutter", "Marcus", "" ] ]
Online estimation and modelling of i.i.d. data for short sequences over large or complex "alphabets" is a ubiquitous (sub)problem in machine learning, information theory, data compression, statistical language processing, and document analysis. The Dirichlet-Multinomial distribution (also called Polya urn scheme) and extensions thereof are widely applied for online i.i.d. estimation. Good a-priori choices for the parameters in this regime are difficult to obtain though. I derive an optimal adaptive choice for the main parameter via tight, data-dependent redundancy bounds for a related model. The 1-line recommendation is to set the 'total mass' = 'precision' = 'concentration' parameter to m/2ln[(n+1)/m], where n is the (past) sample size and m the number of different symbols observed (so far). The resulting estimator (i) is simple, (ii) online, (iii) fast, (iv) performs well for all m, small, middle and large, (v) is independent of the base alphabet size, (vi) non-occurring symbols induce no redundancy, (vii) the constant sequence has constant redundancy, (viii) symbols that appear only finitely often have bounded/constant contribution to the redundancy, (ix) is competitive with (slow) Bayesian mixing over all sub-alphabets.
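The abstract's recommendation can be turned into a tiny sequential estimator. The sketch below reads the quoted "m/2ln[(n+1)/m]" as m/(2 ln((n+1)/m)) and spreads the escape mass M/(n+M) uniformly over the unseen symbols of a finite base alphabet; both the grouping of the formula and the escape rule are assumptions made for illustration, not details taken from the paper.

```python
import math
from collections import Counter

def adaptive_total_mass(n, m):
    """Recommended 'total mass' parameter, read as m / (2 ln((n+1)/m))."""
    if m == 0:
        return 1.0                       # ad-hoc value before anything is observed
    return m / (2.0 * math.log((n + 1) / m))

def predictive_probs(counts, n, alphabet):
    """Dirichlet-Multinomial-like predictive distribution with the adaptive total mass:
    seen symbols get n_x/(n+M); the remaining mass M/(n+M) is spread uniformly over
    unseen symbols (an illustrative choice)."""
    m = len(counts)
    M = adaptive_total_mass(n, m)
    unseen = [a for a in alphabet if a not in counts]
    probs = {a: counts[a] / (n + M) for a in counts}
    for a in unseen:
        probs[a] = M / (n + M) / max(len(unseen), 1)
    return probs

if __name__ == "__main__":
    data = "abracadabra"
    counts, n, logloss = Counter(), 0, 0.0
    for ch in data:
        p = predictive_probs(counts, n, alphabet="abcdefghijklmnopqrstuvwxyz")
        logloss -= math.log2(p[ch])      # sequential code length in bits
        counts[ch] += 1
        n += 1
    print(f"code length: {logloss:.2f} bits for {n} symbols")
```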
2407.06823
Luca Lanzendörfer
Giulia Argüello, Luca A. Lanzendörfer, Roger Wattenhofer
Cue Point Estimation using Object Detection
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Cue points indicate possible temporal boundaries in a transition between two pieces of music in DJ mixing and constitute a crucial element in autonomous DJ systems as well as for live mixing. In this work, we present a novel method for automatic cue point estimation, interpreted as a computer vision object detection task. Our proposed system is based on a pre-trained object detection transformer which we fine-tune on our novel cue point dataset. Our provided dataset contains 21k manually annotated cue points from human experts as well as metronome information for nearly 5k individual tracks, making this dataset 35x larger than the previously available cue point dataset. Unlike previous methods, our approach does not require low-level musical information analysis, while demonstrating increased precision in retrieving cue point positions. Moreover, our proposed method demonstrates high adherence to phrasing, a type of high-level music structure commonly emphasized in electronic dance music. The code, model checkpoints, and dataset are made publicly available.
[ { "created": "Tue, 9 Jul 2024 12:56:30 GMT", "version": "v1" } ]
2024-07-10
[ [ "Argüello", "Giulia", "" ], [ "Lanzendörfer", "Luca A.", "" ], [ "Wattenhofer", "Roger", "" ] ]
Cue points indicate possible temporal boundaries in a transition between two pieces of music in DJ mixing and constitute a crucial element in autonomous DJ systems as well as for live mixing. In this work, we present a novel method for automatic cue point estimation, interpreted as a computer vision object detection task. Our proposed system is based on a pre-trained object detection transformer which we fine-tune on our novel cue point dataset. Our provided dataset contains 21k manually annotated cue points from human experts as well as metronome information for nearly 5k individual tracks, making this dataset 35x larger than the previously available cue point dataset. Unlike previous methods, our approach does not require low-level musical information analysis, while demonstrating increased precision in retrieving cue point positions. Moreover, our proposed method demonstrates high adherence to phrasing, a type of high-level music structure commonly emphasized in electronic dance music. The code, model checkpoints, and dataset are made publicly available.
1302.4970
Paul J. Krause
Paul J. Krause, John Fox, Philip Judson
Is There a Role for Qualitative Risk Assessment?
Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI1995)
null
null
UAI-P-1995-PG-386-393
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classically, risk is characterized by a point value probability indicating the likelihood of occurrence of an adverse effect. However, there are domains where the attainability of objective numerical risk characterizations is increasingly being questioned. This paper reviews the arguments in favour of extending classical techniques of risk assessment to incorporate meaningful qualitative and weak quantitative risk characterizations. A technique in which linguistic uncertainty terms are defined in terms of patterns of argument is then proposed. The technique is demonstrated using a prototype computer-based system for predicting the carcinogenic risk due to novel chemical compounds.
[ { "created": "Wed, 20 Feb 2013 15:22:31 GMT", "version": "v1" } ]
2013-02-21
[ [ "Krause", "Paul J.", "" ], [ "Fox", "John", "" ], [ "Judson", "Philip", "" ] ]
Classically, risk is characterized by a point value probability indicating the likelihood of occurrence of an adverse effect. However, there are domains where the attainability of objective numerical risk characterizations is increasingly being questioned. This paper reviews the arguments in favour of extending classical techniques of risk assessment to incorporate meaningful qualitative and weak quantitative risk characterizations. A technique in which linguistic uncertainty terms are defined in terms of patterns of argument is then proposed. The technique is demonstrated using a prototype computer-based system for predicting the carcinogenic risk due to novel chemical compounds.
2304.05512
Taner Arsan
Taner Arsan, Sehnaz Sismanoglu Simsek, Onder Pekcan
Mathematical and Linguistic Characterization of Orhan Pamuk's Nobel Works
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this study, Nobel Laureate Orhan Pamuk's works are chosen as examples of Turkish literature. By counting the number of letters and words in his texts, we find it possible to study his works statistically. It has been known that there is a geometrical order in text structures. Here a method based on the basic assumption of fractal geometry is introduced for calculating the fractal dimensions of Pamuk's texts. The results are compared with the applications of Zipf's law, which is successfully applied to letters and words, and two concepts, namely Zipf's dimension and Zipf's order, are introduced. The Zipf dimension of the novel My Name is Red is found to differ markedly from that of his other novels. However, it is linguistically observed that there is no fundamental difference between his corpora. The results are interpreted in terms of fractal dimensions and the Turkish language.
[ { "created": "Tue, 11 Apr 2023 21:37:50 GMT", "version": "v1" } ]
2023-04-13
[ [ "Arsan", "Taner", "" ], [ "Simsek", "Sehnaz Sismanoglu", "" ], [ "Pekcan", "Onder", "" ] ]
In this study, Nobel Laureate Orhan Pamuk's works are chosen as examples of Turkish literature. By counting the number of letters and words in his texts, we find it possible to study his works statistically. It has been known that there is a geometrical order in text structures. Here a method based on the basic assumption of fractal geometry is introduced for calculating the fractal dimensions of Pamuk's texts. The results are compared with applications of Zipf's law, which is successfully applied to letters and words, and two concepts, namely Zipf's dimension and Zipf's order, are introduced. The Zipf dimension of the novel My Name is Red is found to differ markedly from that of his other novels. However, it is linguistically observed that there is no fundamental difference between his corpora. The results are interpreted in terms of fractal dimensions and the Turkish language.
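The rank-frequency analysis alluded to above can be illustrated with a short sketch (illustrative only, not the authors' code; the tokenization and the least-squares fit of the log-log rank-frequency curve are assumptions about a typical Zipf analysis):

```python
# Illustrative sketch: estimate a Zipf exponent for a text by fitting the slope
# of the log-log rank-frequency curve of its words.
from collections import Counter
import math

def zipf_exponent(text: str) -> float:
    words = text.lower().split()
    freqs = sorted(Counter(words).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    # Ordinary least-squares slope of log(frequency) vs. log(rank);
    # under Zipf's law f(r) ~ r^(-s), the slope approximates -s.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return -cov / var

print(zipf_exponent("the cat sat on the mat the cat slept"))
```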
2102.08098
Chen Zhu
Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein
GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training
NeurIPS 2021, fixing typos
null
null
null
cs.LG cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often result in challenging hyper-parameter choices and training instability if the network parameters are not properly initialized. A number of architecture-specific initialization schemes have been proposed, but these schemes are not always portable to new architectures. This paper presents GradInit, an automated and architecture-agnostic method for initializing neural networks. GradInit is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam with prescribed hyperparameters results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each parameter block, and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and improves the test performance of many convolutional architectures, both with and without skip connections, and even without normalization layers. It also improves the stability of the original Transformer architecture for machine translation, enabling it to be trained without learning rate warmup using either Adam or SGD under a wide range of learning rates and momentum coefficients. Code is available at https://github.com/zhuchen03/gradinit.
[ { "created": "Tue, 16 Feb 2021 11:45:35 GMT", "version": "v1" }, { "created": "Wed, 27 Oct 2021 05:52:41 GMT", "version": "v2" }, { "created": "Wed, 24 Nov 2021 09:13:08 GMT", "version": "v3" } ]
2021-11-25
[ [ "Zhu", "Chen", "" ], [ "Ni", "Renkun", "" ], [ "Xu", "Zheng", "" ], [ "Kong", "Kezhi", "" ], [ "Huang", "W. Ronny", "" ], [ "Goldstein", "Tom", "" ] ]
Innovations in neural architectures have fostered significant breakthroughs in language modeling and computer vision. Unfortunately, novel architectures often result in challenging hyper-parameter choices and training instability if the network parameters are not properly initialized. A number of architecture-specific initialization schemes have been proposed, but these schemes are not always portable to new architectures. This paper presents GradInit, an automated and architecture-agnostic method for initializing neural networks. GradInit is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam with prescribed hyperparameters results in the smallest possible loss value. This adjustment is done by introducing a scalar multiplier variable in front of each parameter block, and then optimizing these variables using a simple numerical scheme. GradInit accelerates the convergence and improves the test performance of many convolutional architectures, both with and without skip connections, and even without normalization layers. It also improves the stability of the original Transformer architecture for machine translation, enabling it to be trained without learning rate warmup using either Adam or SGD under a wide range of learning rates and momentum coefficients. Code is available at https://github.com/zhuchen03/gradinit.
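A heavily simplified sketch of the heuristic described above (hedged: GradInit optimizes one scalar per parameter block with a gradient-based scheme; the snippet below uses a single global scale and a coarse grid search purely to illustrate the "smallest loss after one step" criterion):

```python
# Simplified illustration of the GradInit idea, not the paper's algorithm.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

def loss_after_one_sgd_step(model, lr=0.1):
    trial = copy.deepcopy(model)
    opt = torch.optim.SGD(trial.parameters(), lr=lr)
    loss = loss_fn(trial(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_fn(trial(x), y).item()

best_scale, best_loss = None, float("inf")
for scale in [0.25, 0.5, 1.0, 2.0, 4.0]:
    candidate = copy.deepcopy(net)
    with torch.no_grad():
        for p in candidate.parameters():
            p.mul_(scale)  # scalar multiplier in front of each parameter block
    post_step_loss = loss_after_one_sgd_step(candidate)
    if post_step_loss < best_loss:
        best_scale, best_loss = scale, post_step_loss

print(f"chosen scale {best_scale}, loss after one SGD step {best_loss:.3f}")
```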
1708.02096
Raghavendra Selvan
Raghavendra Selvan, Jens Petersen, Jesper H. Pedersen, Marleen de Bruijne
Extraction of Airways with Probabilistic State-space Models and Bayesian Smoothing
10 pages. Pre-print of the paper accepted at Workshop on Graphs in Biomedical Image Analysis. MICCAI 2017. Quebec City
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Segmenting tree structures is common in several image processing applications. In medical image analysis, reliable segmentations of airways, vessels, neurons and other tree structures can enable important clinical applications. We present a framework for tracking tree structures comprising elongated branches using probabilistic state-space models and Bayesian smoothing. Unlike most existing methods that proceed with sequential tracking of branches, we present an exploratory method that is less sensitive to local anomalies in the data due to acquisition noise and/or interfering structures. The evolution of individual branches is modelled using a process model and the observed data is incorporated into the update step of the Bayesian smoother using a measurement model that is based on a multi-scale blob detector. Bayesian smoothing is performed using the RTS (Rauch-Tung-Striebel) smoother, which provides Gaussian density estimates of branch states at each tracking step. We select likely branch seed points automatically based on the response of the blob detection and track from all such seed points using the RTS smoother. We use the covariance of the marginal posterior density estimated for each branch to discriminate between false positive and true positive branches. The method is evaluated on 3D chest CT scans to track airways. We show that the presented method results in additional branches compared to a baseline method based on region growing on probability images.
[ { "created": "Mon, 7 Aug 2017 12:43:26 GMT", "version": "v1" } ]
2017-08-08
[ [ "Selvan", "Raghavendra", "" ], [ "Petersen", "Jens", "" ], [ "Pedersen", "Jesper H.", "" ], [ "de Bruijne", "Marleen", "" ] ]
Segmenting tree structures is common in several image processing applications. In medical image analysis, reliable segmentations of airways, vessels, neurons and other tree structures can enable important clinical applications. We present a framework for tracking tree structures comprising elongated branches using probabilistic state-space models and Bayesian smoothing. Unlike most existing methods that proceed with sequential tracking of branches, we present an exploratory method that is less sensitive to local anomalies in the data due to acquisition noise and/or interfering structures. The evolution of individual branches is modelled using a process model and the observed data is incorporated into the update step of the Bayesian smoother using a measurement model that is based on a multi-scale blob detector. Bayesian smoothing is performed using the RTS (Rauch-Tung-Striebel) smoother, which provides Gaussian density estimates of branch states at each tracking step. We select likely branch seed points automatically based on the response of the blob detection and track from all such seed points using the RTS smoother. We use the covariance of the marginal posterior density estimated for each branch to discriminate between false positive and true positive branches. The method is evaluated on 3D chest CT scans to track airways. We show that the presented method results in additional branches compared to a baseline method based on region growing on probability images.
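For reference, the backward (smoothing) recursion of the RTS smoother for a linear-Gaussian process model $x_{k+1} = A_k x_k + q_k$, $q_k \sim \mathcal{N}(0, Q_k)$, takes the standard form below (generic notation; the paper's process and measurement models are more specific):

$$
\begin{aligned}
m_{k+1|k} &= A_k m_{k|k}, \qquad P_{k+1|k} = A_k P_{k|k} A_k^{\top} + Q_k,\\
G_k &= P_{k|k} A_k^{\top} P_{k+1|k}^{-1},\\
m_{k|N} &= m_{k|k} + G_k\bigl(m_{k+1|N} - m_{k+1|k}\bigr), \qquad
P_{k|N} = P_{k|k} + G_k\bigl(P_{k+1|N} - P_{k+1|k}\bigr)G_k^{\top},
\end{aligned}
$$

where $m_{k|k}$, $P_{k|k}$ are the forward Kalman filter estimates and $m_{k|N}$, $P_{k|N}$ are the smoothed estimates whose covariances can then be used to discriminate branches.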
1410.5105
Chaitanya Swamy
Guru Guruganesh, Laura Sanita, and Chaitanya Swamy
Improved Region-Growing and Combinatorial Algorithms for $k$-Route Cut Problems
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the {\em $k$-route} generalizations of various cut problems, the most general of which is \emph{$k$-route multicut} ($k$-MC) problem, wherein we have $r$ source-sink pairs and the goal is to delete a minimum-cost set of edges to reduce the edge-connectivity of every source-sink pair to below $k$. The $k$-route extensions of multiway cut ($k$-MWC), and the minimum $s$-$t$ cut problem ($k$-$(s,t)$-cut), are similarly defined. We present various approximation and hardness results for these $k$-route cut problems that improve the state-of-the-art for these problems in several cases. (i) For {\em $k$-route multiway cut}, we devise simple, but surprisingly effective, combinatorial algorithms that yield bicriteria approximation guarantees that markedly improve upon the previous-best guarantees. (ii) For {\em $k$-route multicut}, we design algorithms that improve upon the previous-best approximation factors by roughly an $O(\sqrt{\log r})$-factor, when $k=2$, and for general $k$ and unit costs and any fixed violation of the connectivity threshold $k$. The main technical innovation is the definition of a new, powerful \emph{region growing} lemma that allows us to perform region-growing in a recursive fashion even though the LP solution yields a {\em different metric} for each source-sink pair. (iii) We complement these results by showing that the {\em $k$-route $s$-$t$ cut} problem is at least as hard to approximate as the {\em densest-$k$-subgraph} (DkS) problem on uniform hypergraphs.
[ { "created": "Sun, 19 Oct 2014 19:23:24 GMT", "version": "v1" } ]
2014-10-21
[ [ "Guruganesh", "Guru", "" ], [ "Sanita", "Laura", "" ], [ "Swamy", "Chaitanya", "" ] ]
We study the {\em $k$-route} generalizations of various cut problems, the most general of which is \emph{$k$-route multicut} ($k$-MC) problem, wherein we have $r$ source-sink pairs and the goal is to delete a minimum-cost set of edges to reduce the edge-connectivity of every source-sink pair to below $k$. The $k$-route extensions of multiway cut ($k$-MWC), and the minimum $s$-$t$ cut problem ($k$-$(s,t)$-cut), are similarly defined. We present various approximation and hardness results for these $k$-route cut problems that improve the state-of-the-art for these problems in several cases. (i) For {\em $k$-route multiway cut}, we devise simple, but surprisingly effective, combinatorial algorithms that yield bicriteria approximation guarantees that markedly improve upon the previous-best guarantees. (ii) For {\em $k$-route multicut}, we design algorithms that improve upon the previous-best approximation factors by roughly an $O(\sqrt{\log r})$-factor, when $k=2$, and for general $k$ and unit costs and any fixed violation of the connectivity threshold $k$. The main technical innovation is the definition of a new, powerful \emph{region growing} lemma that allows us to perform region-growing in a recursive fashion even though the LP solution yields a {\em different metric} for each source-sink pair. (iii) We complement these results by showing that the {\em $k$-route $s$-$t$ cut} problem is at least as hard to approximate as the {\em densest-$k$-subgraph} (DkS) problem on uniform hypergraphs.
2306.08620
John Thickstun
John Thickstun, David Hall, Chris Donahue, Percy Liang
Anticipatory Music Transformer
TMLR accepted version
null
null
null
cs.SD cs.LG eess.AS stat.ML
http://creativecommons.org/licenses/by/4.0/
We introduce anticipation: a method for constructing a controllable generative model of a temporal point process (the event process) conditioned asynchronously on realizations of a second, correlated process (the control process). We achieve this by interleaving sequences of events and controls, such that controls appear following stopping times in the event sequence. This work is motivated by problems arising in the control of symbolic music generation. We focus on infilling control tasks, whereby the controls are a subset of the events themselves, and conditional generation completes a sequence of events given the fixed control events. We train anticipatory infilling models using the large and diverse Lakh MIDI music dataset. These models match the performance of autoregressive models for prompted music generation, with the additional capability to perform infilling control tasks, including accompaniment. Human evaluators report that, over a 20-second clip, an anticipatory model produces accompaniments with musicality similar even to that of music composed by humans.
[ { "created": "Wed, 14 Jun 2023 16:27:53 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2024 18:35:33 GMT", "version": "v2" } ]
2024-07-29
[ [ "Thickstun", "John", "" ], [ "Hall", "David", "" ], [ "Donahue", "Chris", "" ], [ "Liang", "Percy", "" ] ]
We introduce anticipation: a method for constructing a controllable generative model of a temporal point process (the event process) conditioned asynchronously on realizations of a second, correlated process (the control process). We achieve this by interleaving sequences of events and controls, such that controls appear following stopping times in the event sequence. This work is motivated by problems arising in the control of symbolic music generation. We focus on infilling control tasks, whereby the controls are a subset of the events themselves, and conditional generation completes a sequence of events given the fixed control events. We train anticipatory infilling models using the large and diverse Lakh MIDI music dataset. These models match the performance of autoregressive models for prompted music generation, with the additional capability to perform infilling control tasks, including accompaniment. Human evaluators report that, over a 20-second clip, an anticipatory model produces accompaniments with musicality similar even to that of music composed by humans.
2010.16073
Zeeshan Ahmad
Zeeshan Ahmad and Naimul khan
CNN based Multistage Gated Average Fusion (MGAF) for Human Action Recognition Using Depth and Inertial Sensors
arXiv admin note: text overlap with arXiv:1910.11482
null
null
null
cs.CV cs.LG cs.MM eess.IV
http://creativecommons.org/licenses/by/4.0/
A Convolutional Neural Network (CNN) provides the leverage to extract and fuse features from all layers of its architecture. However, extracting and fusing intermediate features from different layers of a CNN is still uninvestigated for Human Action Recognition (HAR) using depth and inertial sensors. To get the maximum benefit of accessing all the CNN's layers, in this paper we propose a novel Multistage Gated Average Fusion (MGAF) network which extracts and fuses features from all layers of the CNN using our novel and computationally efficient Gated Average Fusion (GAF) network, a decisive integral element of MGAF. At the input of the proposed MGAF, we transform the depth and inertial sensor data into sequential front view images (SFI) and signal images (SI), respectively. The SFI are formed from the front view information generated by the depth data. A CNN is employed to extract feature maps from both input modalities. The GAF network fuses the extracted features effectively while preserving the dimensionality of the fused features. The proposed MGAF network has structural extensibility and can be unfolded to more than two modalities. Experiments on three publicly available multimodal HAR datasets demonstrate that the proposed MGAF outperforms the previous state-of-the-art fusion methods for depth-inertial HAR in terms of recognition accuracy while being computationally much more efficient. We increase the accuracy by an average of 1.5 percent while reducing the computational cost by approximately 50 percent over the previous state of the art.
[ { "created": "Thu, 29 Oct 2020 11:49:13 GMT", "version": "v1" } ]
2020-11-02
[ [ "Ahmad", "Zeeshan", "" ], [ "khan", "Naimul", "" ] ]
A Convolutional Neural Network (CNN) provides the leverage to extract and fuse features from all layers of its architecture. However, extracting and fusing intermediate features from different layers of a CNN is still uninvestigated for Human Action Recognition (HAR) using depth and inertial sensors. To get the maximum benefit of accessing all the CNN's layers, in this paper we propose a novel Multistage Gated Average Fusion (MGAF) network which extracts and fuses features from all layers of the CNN using our novel and computationally efficient Gated Average Fusion (GAF) network, a decisive integral element of MGAF. At the input of the proposed MGAF, we transform the depth and inertial sensor data into sequential front view images (SFI) and signal images (SI), respectively. The SFI are formed from the front view information generated by the depth data. A CNN is employed to extract feature maps from both input modalities. The GAF network fuses the extracted features effectively while preserving the dimensionality of the fused features. The proposed MGAF network has structural extensibility and can be unfolded to more than two modalities. Experiments on three publicly available multimodal HAR datasets demonstrate that the proposed MGAF outperforms the previous state-of-the-art fusion methods for depth-inertial HAR in terms of recognition accuracy while being computationally much more efficient. We increase the accuracy by an average of 1.5 percent while reducing the computational cost by approximately 50 percent over the previous state of the art.
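The abstract does not give the exact GAF layout, so the following is only a generic gated-fusion sketch under assumed design choices (a per-channel sigmoid gate computed from the concatenated inputs), illustrating how two equally sized feature maps can be fused without changing their dimensionality:

```python
# Generic gated fusion sketch; the gating layout is an assumption, not the paper's GAF.
import torch
import torch.nn as nn

class GatedAverageFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Gate computed from the concatenated features, one value per channel and location.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))
        return g * feat_a + (1 - g) * feat_b  # gated (convex) average, same shape as the inputs

fused = GatedAverageFusion(64)(torch.randn(2, 64, 28, 28), torch.randn(2, 64, 28, 28))
print(fused.shape)  # torch.Size([2, 64, 28, 28])
```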
2311.03774
Cheng Cheng
Cheng Cheng, Lin Song, Ruoyi Xue, Hang Wang, Hongbin Sun, Yixiao Ge, Ying Shan
Meta-Adapter: An Online Few-shot Learner for Vision-Language Model
Accepted by NeurIPS 2023
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
The contrastive vision-language pre-training, known as CLIP, demonstrates remarkable potential in perceiving open-world visual concepts, enabling effective zero-shot image recognition. Nevertheless, few-shot learning methods based on CLIP typically require offline fine-tuning of the parameters on few-shot samples, resulting in longer inference time and the risk of over-fitting in certain domains. To tackle these challenges, we propose the Meta-Adapter, a lightweight residual-style adapter, to refine the CLIP features guided by the few-shot samples in an online manner. With a few training samples, our method can enable effective few-shot learning capabilities and generalize to unseen data or tasks without additional fine-tuning, achieving competitive performance and high efficiency. Without bells and whistles, our approach outperforms the state-of-the-art online few-shot learning method by an average of 3.6\% on eight image classification datasets with higher inference speed. Furthermore, our model is simple and flexible, serving as a plug-and-play module directly applicable to downstream tasks. Without further fine-tuning, Meta-Adapter obtains notable performance improvements in open-vocabulary object detection and segmentation tasks.
[ { "created": "Tue, 7 Nov 2023 07:27:16 GMT", "version": "v1" }, { "created": "Thu, 11 Jan 2024 06:03:56 GMT", "version": "v2" } ]
2024-01-12
[ [ "Cheng", "Cheng", "" ], [ "Song", "Lin", "" ], [ "Xue", "Ruoyi", "" ], [ "Wang", "Hang", "" ], [ "Sun", "Hongbin", "" ], [ "Ge", "Yixiao", "" ], [ "Shan", "Ying", "" ] ]
The contrastive vision-language pre-training, known as CLIP, demonstrates remarkable potential in perceiving open-world visual concepts, enabling effective zero-shot image recognition. Nevertheless, few-shot learning methods based on CLIP typically require offline fine-tuning of the parameters on few-shot samples, resulting in longer inference time and the risk of over-fitting in certain domains. To tackle these challenges, we propose the Meta-Adapter, a lightweight residual-style adapter, to refine the CLIP features guided by the few-shot samples in an online manner. With a few training samples, our method can enable effective few-shot learning capabilities and generalize to unseen data or tasks without additional fine-tuning, achieving competitive performance and high efficiency. Without bells and whistles, our approach outperforms the state-of-the-art online few-shot learning method by an average of 3.6\% on eight image classification datasets with higher inference speed. Furthermore, our model is simple and flexible, serving as a plug-and-play module directly applicable to downstream tasks. Without further fine-tuning, Meta-Adapter obtains notable performance improvements in open-vocabulary object detection and segmentation tasks.
1912.05362
Sylvain Cherrier
Hantanirina Felixie, Jean Razafindramintsa, Sylvain Cherrier (LIGM), Thomas Mahatody, Laurent George (LIGM), Victor Manantsoa
Jason-RS, a Collaboration between Agents and an IoT Platform
null
International Workshop on Networking for Smart Living, Dec 2019, Paris, France
null
null
cs.MA cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we start from the observation that REST services are the most widely used tools for interoperability and orchestration in the Internet of Things (IoT). However, REST does not make it possible to inject artificial intelligence into connected objects, i.e. it cannot provide autonomy and decision-making to the objects themselves. To give a connected object intelligence, one can use a Belief-Desire-Intention (BDI) agent, an intelligent agent that adopts human-like reasoning, such as Jason AgentSpeak. But Jason AgentSpeak does not guarantee orchestration or choreography between connected objects. There are platforms for service orchestration and choreography in the IoT, but their interconnection with artificial intelligence still needs to be built. In this article, we propose a new approach called Jason-RS. It is the result of pairing a Jason BDI agent with web service technologies to expose the agent's capabilities as a service; Jason-RS runs on Java SE and does not need any middleware. The architecture that we propose creates the link between artificial intelligence and service choreography to reduce human intervention in the choreography. In order to validate the proposed approach, we have interconnected the IoT BeC3 platform and the REST agent (Jason-RS). The decision-making faculty offered by Jason-RS is driven by the information sent by the objects through the different REST methods (GET, POST, PUT, and DELETE) that Jason-RS exposes. As a result, the objects feed the inter-agent collaborations and the decision-making inside the agent. Finally, we show that Jason-RS allows the Web of Objects to power complex systems, such as an artificial intelligence responsible for processing data. These results are promising.
[ { "created": "Wed, 11 Dec 2019 14:43:22 GMT", "version": "v1" } ]
2019-12-12
[ [ "Felixie", "Hantanirina", "", "LIGM" ], [ "Razafindramintsa", "Jean", "", "LIGM" ], [ "Cherrier", "Sylvain", "", "LIGM" ], [ "Mahatody", "Thomas", "", "LIGM" ], [ "George", "Laurent", "", "LIGM" ], [ "Manantsoa", "Victor", "" ] ]
In this article we start from the observation that REST services are the most widely used tools for interoperability and orchestration in the Internet of Things (IoT). However, REST does not make it possible to inject artificial intelligence into connected objects, i.e. it cannot provide autonomy and decision-making to the objects themselves. To give a connected object intelligence, one can use a Belief-Desire-Intention (BDI) agent, an intelligent agent that adopts human-like reasoning, such as Jason AgentSpeak. But Jason AgentSpeak does not guarantee orchestration or choreography between connected objects. There are platforms for service orchestration and choreography in the IoT, but their interconnection with artificial intelligence still needs to be built. In this article, we propose a new approach called Jason-RS. It is the result of pairing a Jason BDI agent with web service technologies to expose the agent's capabilities as a service; Jason-RS runs on Java SE and does not need any middleware. The architecture that we propose creates the link between artificial intelligence and service choreography to reduce human intervention in the choreography. In order to validate the proposed approach, we have interconnected the IoT BeC3 platform and the REST agent (Jason-RS). The decision-making faculty offered by Jason-RS is driven by the information sent by the objects through the different REST methods (GET, POST, PUT, and DELETE) that Jason-RS exposes. As a result, the objects feed the inter-agent collaborations and the decision-making inside the agent. Finally, we show that Jason-RS allows the Web of Objects to power complex systems, such as an artificial intelligence responsible for processing data. These results are promising.
2209.03910
Prajwal Chidananda
Prajwal Chidananda, Saurabh Nair, Douglas Lee, Adrian Kaehler
PixTrack: Precise 6DoF Object Pose Tracking using NeRF Templates and Feature-metric Alignment
null
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present PixTrack, a vision based object pose tracking framework using novel view synthesis and deep feature-metric alignment. We follow an SfM-based relocalization paradigm where we use a Neural Radiance Field to canonically represent the tracked object. Our evaluations demonstrate that our method produces highly accurate, robust, and jitter-free 6DoF pose estimates of objects in both monocular RGB images and RGB-D images without the need of any data annotation or trajectory smoothing. Our method is also computationally efficient making it easy to have multi-object tracking with no alteration to our algorithm through simple CPU multiprocessing. Our code is available at: https://github.com/GiantAI/pixtrack
[ { "created": "Thu, 8 Sep 2022 16:36:24 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2024 09:43:01 GMT", "version": "v2" } ]
2024-02-16
[ [ "Chidananda", "Prajwal", "" ], [ "Nair", "Saurabh", "" ], [ "Lee", "Douglas", "" ], [ "Kaehler", "Adrian", "" ] ]
We present PixTrack, a vision based object pose tracking framework using novel view synthesis and deep feature-metric alignment. We follow an SfM-based relocalization paradigm where we use a Neural Radiance Field to canonically represent the tracked object. Our evaluations demonstrate that our method produces highly accurate, robust, and jitter-free 6DoF pose estimates of objects in both monocular RGB images and RGB-D images without the need of any data annotation or trajectory smoothing. Our method is also computationally efficient making it easy to have multi-object tracking with no alteration to our algorithm through simple CPU multiprocessing. Our code is available at: https://github.com/GiantAI/pixtrack
2212.03795
Idit Diamant
Idit Diamant, Roy H. Jennings, Oranit Dror, Hai Victor Habi, Arnon Netzer
Reconciling a Centroid-Hypothesis Conflict in Source-Free Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Source-free domain adaptation (SFDA) aims to transfer knowledge learned from a source domain to an unlabeled target domain, where the source data is unavailable during adaptation. Existing approaches for SFDA focus on self-training usually including well-established entropy minimization techniques. One of the main challenges in SFDA is to reduce accumulation of errors caused by domain misalignment. A recent strategy successfully managed to reduce error accumulation by pseudo-labeling the target samples based on class-wise prototypes (centroids) generated by their clustering in the representation space. However, this strategy also creates cases for which the cross-entropy of a pseudo-label and the minimum entropy have a conflict in their objectives. We call this conflict the centroid-hypothesis conflict. We propose to reconcile this conflict by aligning the entropy minimization objective with that of the pseudo labels' cross entropy. We demonstrate the effectiveness of aligning the two loss objectives on three domain adaptation datasets. In addition, we provide state-of-the-art results using up-to-date architectures also showing the consistency of our method across these architectures.
[ { "created": "Wed, 7 Dec 2022 17:23:49 GMT", "version": "v1" } ]
2022-12-08
[ [ "Diamant", "Idit", "" ], [ "Jennings", "Roy H.", "" ], [ "Dror", "Oranit", "" ], [ "Habi", "Hai Victor", "" ], [ "Netzer", "Arnon", "" ] ]
Source-free domain adaptation (SFDA) aims to transfer knowledge learned from a source domain to an unlabeled target domain, where the source data is unavailable during adaptation. Existing approaches for SFDA focus on self-training usually including well-established entropy minimization techniques. One of the main challenges in SFDA is to reduce accumulation of errors caused by domain misalignment. A recent strategy successfully managed to reduce error accumulation by pseudo-labeling the target samples based on class-wise prototypes (centroids) generated by their clustering in the representation space. However, this strategy also creates cases for which the cross-entropy of a pseudo-label and the minimum entropy have a conflict in their objectives. We call this conflict the centroid-hypothesis conflict. We propose to reconcile this conflict by aligning the entropy minimization objective with that of the pseudo labels' cross entropy. We demonstrate the effectiveness of aligning the two loss objectives on three domain adaptation datasets. In addition, we provide state-of-the-art results using up-to-date architectures also showing the consistency of our method across these architectures.
1303.3636
Rodrigo de Lamare
Lei Wang and Rodrigo C. de Lamare
Low-Complexity Adaptive Set-Membership Reduced-rank LCMV Beamforming
2 figures, 5 pages
ISWCS 2010
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a new adaptive algorithm for the implementation of the linearly constrained minimum variance (LCMV) beamformer. The proposed algorithm utilizes the set-membership filtering (SMF) framework and the reduced-rank joint iterative optimization (JIO) scheme. We develop a stochastic gradient (SG) based algorithm for the beamformer design. An effective time-varying bound is employed in the proposed method to adjust the step sizes, avoid the misadjustment and the risk of overbounding or underbounding. Simulations are performed to show the improved performance of the proposed algorithm in comparison with existing full-rank and reduced-rank methods.
[ { "created": "Thu, 14 Mar 2013 22:56:15 GMT", "version": "v1" } ]
2013-03-18
[ [ "Wang", "Lei", "" ], [ "de Lamare", "Rodrigo C.", "" ] ]
This paper proposes a new adaptive algorithm for the implementation of the linearly constrained minimum variance (LCMV) beamformer. The proposed algorithm utilizes the set-membership filtering (SMF) framework and the reduced-rank joint iterative optimization (JIO) scheme. We develop a stochastic gradient (SG) based algorithm for the beamformer design. An effective time-varying bound is employed in the proposed method to adjust the step sizes, avoid the misadjustment and the risk of overbounding or underbounding. Simulations are performed to show the improved performance of the proposed algorithm in comparison with existing full-rank and reduced-rank methods.
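For context, the classical full-rank LCMV design that such adaptive algorithms approximate can be written as follows (generic notation, not necessarily that of the paper):

$$
\mathbf{w}_{\mathrm{LCMV}} \;=\; \arg\min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
\quad \text{subject to} \quad \mathbf{C}^{H}\mathbf{w} = \mathbf{g},
\qquad
\mathbf{w}_{\mathrm{LCMV}} \;=\; \mathbf{R}^{-1}\mathbf{C}\,\bigl(\mathbf{C}^{H}\mathbf{R}^{-1}\mathbf{C}\bigr)^{-1}\mathbf{g},
$$

where $\mathbf{R}$ is the covariance matrix of the received data, $\mathbf{C}$ the constraint matrix and $\mathbf{g}$ the gain vector; stochastic-gradient, set-membership and reduced-rank schemes such as the one proposed avoid the explicit inversion of $\mathbf{R}$.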
0901.2804
Li Chia Choo
Li-Chia Choo and Kai-Kit Wong
The Secrecy Capacity for a 3-Receiver Broadcast Channel with Degraded Message Sets
This paper has been withdrawn by the author
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper has been withdrawn by the author due to some errors.
[ { "created": "Mon, 19 Jan 2009 10:36:33 GMT", "version": "v1" }, { "created": "Thu, 5 Feb 2009 16:04:22 GMT", "version": "v2" }, { "created": "Tue, 3 Mar 2009 15:08:26 GMT", "version": "v3" }, { "created": "Thu, 11 Jun 2009 09:53:16 GMT", "version": "v4" } ]
2009-06-11
[ [ "Choo", "Li-Chia", "" ], [ "Wong", "Kai-Kit", "" ] ]
This paper has been withdrawn by the author due to some errors.
2408.00882
Emily Wenger
Emily Wenger, Eshika Saxena, Mohamed Malhou, Ellie Thieu, Kristin Lauter
Benchmarking Attacks on Learning with Errors
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for encrypted compute on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which is important but may overlook attack nuances arising in real-world implementations. The sole existing concrete benchmarking effort, the Darmstadt Lattice Challenge, does not include benchmarks relevant to the standardized LWE parameter choices - such as small secret and small error distributions, and Ring-LWE (RLWE) and Module-LWE (MLWE) variants. To improve our understanding of concrete LWE security, we provide the first benchmarks for LWE secret recovery on standardized parameters, for small and low-weight (sparse) secrets. We evaluate four LWE attacks in these settings to serve as a baseline: the Search-LWE attacks uSVP, SALSA, and Cool & Cruel, and the Decision-LWE attack Dual Hybrid Meet-in-the-Middle (MitM). We extend the SALSA and Cool & Cruel attacks in significant ways, and implement and scale up MitM attacks for the first time. For example, we recover Hamming weight $9-11$ binomial secrets for KYBER ($\kappa=2$) parameters in $28-36$ hours with SALSA and Cool & Cruel, while we find that MitM can solve Decision-LWE instances for Hamming weights up to $4$ in under an hour for Kyber parameters; uSVP attacks do not recover any secrets after running for more than $1100$ hours. We also compare concrete performance against theoretical estimates. Finally, we open-source the code to enable future research.
[ { "created": "Thu, 1 Aug 2024 19:21:20 GMT", "version": "v1" } ]
2024-08-05
[ [ "Wenger", "Emily", "" ], [ "Saxena", "Eshika", "" ], [ "Malhou", "Mohamed", "" ], [ "Thieu", "Ellie", "" ], [ "Lauter", "Kristin", "" ] ]
Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for encrypted compute on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which is important but may overlook attack nuances arising in real-world implementations. The sole existing concrete benchmarking effort, the Darmstadt Lattice Challenge, does not include benchmarks relevant to the standardized LWE parameter choices - such as small secret and small error distributions, and Ring-LWE (RLWE) and Module-LWE (MLWE) variants. To improve our understanding of concrete LWE security, we provide the first benchmarks for LWE secret recovery on standardized parameters, for small and low-weight (sparse) secrets. We evaluate four LWE attacks in these settings to serve as a baseline: the Search-LWE attacks uSVP, SALSA, and Cool & Cruel, and the Decision-LWE attack Dual Hybrid Meet-in-the-Middle (MitM). We extend the SALSA and Cool & Cruel attacks in significant ways, and implement and scale up MitM attacks for the first time. For example, we recover Hamming weight $9-11$ binomial secrets for KYBER ($\kappa=2$) parameters in $28-36$ hours with SALSA and Cool & Cruel, while we find that MitM can solve Decision-LWE instances for Hamming weights up to $4$ in under an hour for Kyber parameters; uSVP attacks do not recover any secrets after running for more than $1100$ hours. We also compare concrete performance against theoretical estimates. Finally, we open-source the code to enable future research.
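For readers unfamiliar with the setting, a toy illustration of the LWE instances such benchmarks target, with a sparse binary secret of fixed Hamming weight (parameter values are arbitrary toy choices, not the standardized KYBER or homomorphic-encryption parameters):

```python
# Minimal LWE sample generation: b = A s + e (mod q) with a sparse binary secret.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, hamming_weight = 32, 64, 3329, 4

# Sparse binary secret of fixed Hamming weight.
secret = np.zeros(n, dtype=np.int64)
secret[rng.choice(n, size=hamming_weight, replace=False)] = 1

A = rng.integers(0, q, size=(m, n))   # uniform public matrix
e = rng.integers(-2, 3, size=m)       # small (toy) error terms
b = (A @ secret + e) % q              # LWE samples

print(A.shape, b.shape, int(secret.sum()))
```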
2101.06232
Yuzhou Lin
Yuzhou Lin, Xiaolin Chang
Towards interpreting ML-based automated malware detection models: a survey
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malware is increasingly threatening, and malware detectors based on traditional signature-based analysis are no longer suitable for current malware detection. Recently, models based on machine learning (ML) have been developed for predicting unknown malware variants and saving human effort. However, most of the existing ML models are black-box, which makes their prediction results undependable; they therefore need further interpretation in order to be effectively deployed in the wild. This paper aims to examine and categorize the existing research on the interpretability of ML-based malware detectors. We first give a detailed comparison of previous work on common ML model interpretability, grouped after introducing the principles, attributes, evaluation indicators and taxonomy of common ML interpretability. Then we investigate interpretation methods for malware detection, by addressing the importance of interpreting malware detectors, the challenges faced by this field, solutions for mitigating these challenges, and a new taxonomy for classifying all the state-of-the-art malware detection interpretability work of recent years. The highlight of our survey is providing a new taxonomy of malware detection interpretation methods based on the common taxonomy summarized by previous research in the field. In addition, we are the first to evaluate the state-of-the-art approaches by interpretation method attributes to generate a final score, so as to give insight into quantifying interpretability. By summarizing the results of recent research, we hope our work can provide suggestions for researchers who are interested in the interpretability of ML-based malware detection models.
[ { "created": "Fri, 15 Jan 2021 17:34:40 GMT", "version": "v1" } ]
2021-01-18
[ [ "Lin", "Yuzhou", "" ], [ "Chang", "Xiaolin", "" ] ]
Malware is increasingly threatening, and malware detectors based on traditional signature-based analysis are no longer suitable for current malware detection. Recently, models based on machine learning (ML) have been developed for predicting unknown malware variants and saving human effort. However, most of the existing ML models are black-box, which makes their prediction results undependable; they therefore need further interpretation in order to be effectively deployed in the wild. This paper aims to examine and categorize the existing research on the interpretability of ML-based malware detectors. We first give a detailed comparison of previous work on common ML model interpretability, grouped after introducing the principles, attributes, evaluation indicators and taxonomy of common ML interpretability. Then we investigate interpretation methods for malware detection, by addressing the importance of interpreting malware detectors, the challenges faced by this field, solutions for mitigating these challenges, and a new taxonomy for classifying all the state-of-the-art malware detection interpretability work of recent years. The highlight of our survey is providing a new taxonomy of malware detection interpretation methods based on the common taxonomy summarized by previous research in the field. In addition, we are the first to evaluate the state-of-the-art approaches by interpretation method attributes to generate a final score, so as to give insight into quantifying interpretability. By summarizing the results of recent research, we hope our work can provide suggestions for researchers who are interested in the interpretability of ML-based malware detection models.
2312.09963
Matteo Cardellini
Matteo Cardellini, Enrico Giunchiglia, and Marco Maratea
Symbolic Numeric Planning with Patterns
Accepted at AAAI24
null
10.1609/aaai.v38i18.29985
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel approach for solving linear numeric planning problems, called Symbolic Pattern Planning. Given a planning problem $\Pi$, a bound $n$ and a pattern -- defined as an arbitrary sequence of actions -- we encode the problem of finding a plan for $\Pi$ with bound $n$ as a formula with fewer variables and/or clauses than the state-of-the-art rolled-up and relaxed-relaxed-$\exists$ encodings. More importantly, we prove that for any given bound, it is never the case that the latter two encodings allow finding a valid plan while ours does not. On the experimental side, we consider 6 other planning systems -- including the ones which participated in this year's International Planning Competition (IPC) -- and we show that our planner Patty has remarkably good comparative performances on this year's IPC problems.
[ { "created": "Fri, 15 Dec 2023 17:20:25 GMT", "version": "v1" }, { "created": "Sun, 7 Jan 2024 14:44:18 GMT", "version": "v2" }, { "created": "Mon, 12 Feb 2024 09:52:37 GMT", "version": "v3" } ]
2024-03-28
[ [ "Cardellini", "Matteo", "" ], [ "Giunchiglia", "Enrico", "" ], [ "Maratea", "Marco", "" ] ]
In this paper, we propose a novel approach for solving linear numeric planning problems, called Symbolic Pattern Planning. Given a planning problem $\Pi$, a bound $n$ and a pattern -- defined as an arbitrary sequence of actions -- we encode the problem of finding a plan for $\Pi$ with bound $n$ as a formula with fewer variables and/or clauses than the state-of-the-art rolled-up and relaxed-relaxed-$\exists$ encodings. More importantly, we prove that for any given bound, it is never the case that the latter two encodings allow finding a valid plan while ours does not. On the experimental side, we consider 6 other planning systems -- including the ones which participated in this year's International Planning Competition (IPC) -- and we show that our planner Patty has remarkably good comparative performances on this year's IPC problems.
1901.08728
Rishabh Agarwal
Rishabh Agarwal
Evaluation Function Approximation for Scrabble
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The current state-of-the-art Scrabble agents are not learning-based but depend on truncated Monte Carlo simulations, and the quality of such agents is contingent upon the time available for running the simulations. This thesis takes steps towards building a learning-based Scrabble agent using self-play. Specifically, we try to find a better function approximation for the static evaluation function used in Scrabble, which determines the move goodness at a given board configuration. In this work, we experimented with evolutionary algorithms and Bayesian Optimization to learn the weights for an approximate feature-based evaluation function. However, these optimization methods were not quite effective, which led us to explore the given problem from an Imitation Learning point of view. We also tried to imitate the ranking of moves produced by the Quackle simulation agent using supervised learning with a neural network function approximator which takes the raw representation of the Scrabble board as the input instead of using only a fixed number of handcrafted features.
[ { "created": "Fri, 25 Jan 2019 04:05:52 GMT", "version": "v1" } ]
2019-01-28
[ [ "Agarwal", "Rishabh", "" ] ]
The current state-of-the-art Scrabble agents are not learning-based but depend on truncated Monte Carlo simulations, and the quality of such agents is contingent upon the time available for running the simulations. This thesis takes steps towards building a learning-based Scrabble agent using self-play. Specifically, we try to find a better function approximation for the static evaluation function used in Scrabble, which determines the move goodness at a given board configuration. In this work, we experimented with evolutionary algorithms and Bayesian Optimization to learn the weights for an approximate feature-based evaluation function. However, these optimization methods were not quite effective, which led us to explore the given problem from an Imitation Learning point of view. We also tried to imitate the ranking of moves produced by the Quackle simulation agent using supervised learning with a neural network function approximator which takes the raw representation of the Scrabble board as the input instead of using only a fixed number of handcrafted features.
1911.10150
Alex Lang
Sourabh Vora, Alex H. Lang, Bassam Helou, and Oscar Beijbom
PointPainting: Sequential Fusion for 3D Object Detection
11 pages, 6 figures, 8 tables. v1 is initial submission to CVPR 2020. v2 is final version accepted for publication at CVPR 2020
null
null
null
cs.CV cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effects of Painting depend on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.
[ { "created": "Fri, 22 Nov 2019 17:19:50 GMT", "version": "v1" }, { "created": "Wed, 6 May 2020 17:17:18 GMT", "version": "v2" } ]
2020-05-07
[ [ "Vora", "Sourabh", "" ], [ "Lang", "Alex H.", "" ], [ "Helou", "Bassam", "" ], [ "Beijbom", "Oscar", "" ] ]
Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effects of Painting depend on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.
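A minimal sketch of the painting step described above (simplified: real pipelines handle calibration, coordinate-frame transforms and multiple cameras; the array shapes and the pinhole projection below are assumptions):

```python
# Project lidar points into a segmentation score map and append per-point class scores.
import numpy as np

def paint_points(points_xyz, seg_scores, P):
    """points_xyz: (N, 3) lidar points; seg_scores: (H, W, C) per-pixel class scores;
    P: (3, 4) camera projection matrix. Returns (M, 3 + C) painted points."""
    N = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((N, 1))])   # (N, 4)
    cam = homog @ P.T                                   # (N, 3)
    in_front = cam[:, 2] > 1e-6
    uv = cam[in_front, :2] / cam[in_front, 2:3]         # pixel coordinates
    H, W, _ = seg_scores.shape
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    kept = points_xyz[in_front][valid]
    scores = seg_scores[v[valid], u[valid]]             # class scores appended to each point
    return np.hstack([kept, scores])

painted = paint_points(np.random.rand(100, 3) * 10,
                       np.random.rand(240, 320, 4),
                       np.array([[200., 0., 160., 0.],
                                 [0., 200., 120., 0.],
                                 [0., 0., 1., 0.]]))
print(painted.shape)  # (M, 7): xyz plus 4 class scores
```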
2402.00591
Nicolas Lazzari
Nicolas Lazzari, Stefano De Giorgis, Aldo Gangemi, Valentina Presutti
Sandra -- A Neuro-Symbolic Reasoner Based On Descriptions And Situations
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents sandra, a neuro-symbolic reasoner combining vectorial representations with deductive reasoning. Sandra builds a vector space constrained by an ontology and performs reasoning over it. The geometric nature of the reasoner allows its combination with neural networks, bridging the gap with symbolic knowledge representations. Sandra is based on the Description and Situation (DnS) ontology design pattern, a formalization of frame semantics. Given a set of facts (a situation), it allows one to infer all possible perspectives (descriptions) that can provide a plausible interpretation for it, even in the presence of incomplete information. We prove that our method is correct with respect to the DnS model. We experiment with two different tasks and their standard benchmarks, demonstrating that, without increasing complexity, sandra (i) outperforms all the baselines, (ii) provides interpretability in the classification process, and (iii) allows control over the vector space, which is designed a priori.
[ { "created": "Thu, 1 Feb 2024 13:37:53 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 08:58:41 GMT", "version": "v2" }, { "created": "Mon, 25 Mar 2024 10:52:20 GMT", "version": "v3" } ]
2024-03-26
[ [ "Lazzari", "Nicolas", "" ], [ "De Giorgis", "Stefano", "" ], [ "Gangemi", "Aldo", "" ], [ "Presutti", "Valentina", "" ] ]
This paper presents sandra, a neuro-symbolic reasoner combining vectorial representations with deductive reasoning. Sandra builds a vector space constrained by an ontology and performs reasoning over it. The geometric nature of the reasoner allows its combination with neural networks, bridging the gap with symbolic knowledge representations. Sandra is based on the Description and Situation (DnS) ontology design pattern, a formalization of frame semantics. Given a set of facts (a situation), it allows one to infer all possible perspectives (descriptions) that can provide a plausible interpretation for it, even in the presence of incomplete information. We prove that our method is correct with respect to the DnS model. We experiment with two different tasks and their standard benchmarks, demonstrating that, without increasing complexity, sandra (i) outperforms all the baselines, (ii) provides interpretability in the classification process, and (iii) allows control over the vector space, which is designed a priori.
2209.09635
Yuxuan Du
Ruohua Zhou, Yuxuan Du, Chenlei Hu
The BUCEA Speaker Diarization System for the VoxCeleb Speaker Recognition Challenge 2022
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the BUCEA speaker diarization system for the 2022 VoxCeleb Speaker Recognition Challenge. Voxsrc-22 provides the development set and test set of VoxConverse, and we mainly use the test set of VoxConverse for parameter adjustment. Our system consists of several modules, including speech activity detection (VAD), speaker embedding extractor, clustering methods, overlapping speech detection (OSD), and result fusion. Without considering overlap, the Dover-LAP (short for Diarization Output Voting Error Reduction) method was applied to system fusion, and overlapping speech detection and processing were finally carried out. Our best system achieves a diarization error rate (DER) of 5.48% and a Jaccard error rate (JER) of 32.1% on the VoxSRC 2022 evaluation set respectively.
[ { "created": "Tue, 20 Sep 2022 11:33:58 GMT", "version": "v1" } ]
2022-09-21
[ [ "Zhou", "Ruohua", "" ], [ "Du", "Yuxuan", "" ], [ "Hu", "Chenlei", "" ] ]
This paper describes the BUCEA speaker diarization system for the 2022 VoxCeleb Speaker Recognition Challenge. Voxsrc-22 provides the development set and test set of VoxConverse, and we mainly use the test set of VoxConverse for parameter adjustment. Our system consists of several modules, including speech activity detection (VAD), speaker embedding extractor, clustering methods, overlapping speech detection (OSD), and result fusion. Without considering overlap, the Dover-LAP (short for Diarization Output Voting Error Reduction) method was applied to system fusion, and overlapping speech detection and processing were finally carried out. Our best system achieves a diarization error rate (DER) of 5.48% and a Jaccard error rate (JER) of 32.1% on the VoxSRC 2022 evaluation set respectively.
1210.6636
Jan Bergstra
Jan A. Bergstra
Informaticology: combining Computer Science, Data Science, and Fiction Science
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by an intention to remedy current complications with Dutch terminology concerning informatics, the term informaticology is positioned to denote an academic counterpart of informatics where informatics is conceived of as a container for a coherent family of practical disciplines ranging from computer engineering and software engineering to network technology, data center management, information technology, and information management in a broad sense. Informaticology escapes from the limitations of instrumental objectives and the perspective of usage that both restrict the scope of informatics. That is achieved by including fiction science in informaticology and by ranking fiction science on equal terms with computer science and data science, and framing (the study of) game design, development, assessment and distribution, ranging from serious gaming to entertainment gaming, as a chapter of fiction science. A suggestion for the scope of fiction science is specified in some detail. In order to illustrate the coherence of informaticology thus conceived, a potential application of fiction to the ontology of instruction sequences and to software quality assessment is sketched, thereby highlighting a possible role of fiction (science) within informaticology but outside gaming.
[ { "created": "Wed, 24 Oct 2012 19:24:59 GMT", "version": "v1" } ]
2012-10-25
[ [ "Bergstra", "Jan A.", "" ] ]
Motivated by an intention to remedy current complications with Dutch terminology concerning informatics, the term informaticology is positioned to denote an academic counterpart of informatics where informatics is conceived of as a container for a coherent family of practical disciplines ranging from computer engineering and software engineering to network technology, data center management, information technology, and information management in a broad sense. Informaticology escapes from the limitations of instrumental objectives and the perspective of usage that both restrict the scope of informatics. That is achieved by including fiction science in informaticology and by ranking fiction science on equal terms with computer science and data science, and framing (the study of) game design, development, assessment and distribution, ranging from serious gaming to entertainment gaming, as a chapter of fiction science. A suggestion for the scope of fiction science is specified in some detail. In order to illustrate the coherence of informaticology thus conceived, a potential application of fiction to the ontology of instruction sequences and to software quality assessment is sketched, thereby highlighting a possible role of fiction (science) within informaticology but outside gaming.
2305.11487
Guangyan Chen
Guangyan Chen, Meiling Wang, Yi Yang, Kai Yu, Li Yuan, Yufeng Yue
PointGPT: Auto-regressively Generative Pre-training from Point Clouds
9 pages, 2 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) based on the generative pre-training transformer (GPT) have demonstrated remarkable effectiveness across a diverse range of downstream tasks. Inspired by the advancements of the GPT, we present PointGPT, a novel approach that extends the concept of GPT to point clouds, addressing the challenges associated with disorder properties, low information density, and task gaps. Specifically, a point cloud auto-regressive generation task is proposed to pre-train transformer models. Our method partitions the input point cloud into multiple point patches and arranges them in an ordered sequence based on their spatial proximity. Then, an extractor-generator based transformer decoder, with a dual masking strategy, learns latent representations conditioned on the preceding point patches, aiming to predict the next one in an auto-regressive manner. Our scalable approach allows for learning high-capacity models that generalize well, achieving state-of-the-art performance on various downstream tasks. In particular, our approach achieves classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the ScanObjectNN dataset, outperforming all other transformer models. Furthermore, our method also attains new state-of-the-art accuracies on all four few-shot learning benchmarks.
[ { "created": "Fri, 19 May 2023 07:39:04 GMT", "version": "v1" }, { "created": "Tue, 23 May 2023 02:38:26 GMT", "version": "v2" } ]
2023-05-24
[ [ "Chen", "Guangyan", "" ], [ "Wang", "Meiling", "" ], [ "Yang", "Yi", "" ], [ "Yu", "Kai", "" ], [ "Yuan", "Li", "" ], [ "Yue", "Yufeng", "" ] ]
Large language models (LLMs) based on the generative pre-training transformer (GPT) have demonstrated remarkable effectiveness across a diverse range of downstream tasks. Inspired by the advancements of the GPT, we present PointGPT, a novel approach that extends the concept of GPT to point clouds, addressing the challenges associated with disorder properties, low information density, and task gaps. Specifically, a point cloud auto-regressive generation task is proposed to pre-train transformer models. Our method partitions the input point cloud into multiple point patches and arranges them in an ordered sequence based on their spatial proximity. Then, an extractor-generator based transformer decoder, with a dual masking strategy, learns latent representations conditioned on the preceding point patches, aiming to predict the next one in an auto-regressive manner. Our scalable approach allows for learning high-capacity models that generalize well, achieving state-of-the-art performance on various downstream tasks. In particular, our approach achieves classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the ScanObjectNN dataset, outperforming all other transformer models. Furthermore, our method also attains new state-of-the-art accuracies on all four few-shot learning benchmarks.
2109.09105
Ayush Kumar
Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa
What BERT Based Language Models Learn in Spoken Transcripts: An Empirical Study
BlackboxNLP @ EMNLP 2021 (15 pages, includes Appendix)
null
null
null
cs.CL cs.AI cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Language Models (LMs) have been ubiquitously leveraged in various tasks including spoken language understanding (SLU). Spoken language requires careful understanding of speaker interactions, dialog states and speech induced multimodal behaviors to generate a meaningful representation of the conversation. In this work, we propose to dissect SLU into three representative properties: conversational (disfluency, pause, overtalk), channel (speaker-type, turn-tasks) and ASR (insertion, deletion, substitution). We probe BERT based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand multifarious properties in the absence of any speech cues. Empirical results indicate that the LM is surprisingly good at capturing conversational properties such as pause prediction and overtalk detection from lexical tokens. On the downside, the LM scores low on turn-tasks and ASR error prediction. Additionally, pre-training the LM on spoken transcripts restrains its linguistic understanding. Finally, we establish the efficacy and transferability of the mentioned properties on two benchmark datasets: the Switchboard Dialog Act and Disfluency datasets.
[ { "created": "Sun, 19 Sep 2021 11:23:50 GMT", "version": "v1" }, { "created": "Tue, 21 Sep 2021 05:24:51 GMT", "version": "v2" } ]
2021-09-22
[ [ "Kumar", "Ayush", "" ], [ "Sundararaman", "Mukuntha Narayanan", "" ], [ "Vepa", "Jithendra", "" ] ]
Language Models (LMs) have been ubiquitously leveraged in various tasks including spoken language understanding (SLU). Spoken language requires careful understanding of speaker interactions, dialog states and speech induced multimodal behaviors to generate a meaningful representation of the conversation. In this work, we propose to dissect SLU into three representative properties: conversational (disfluency, pause, overtalk), channel (speaker-type, turn-tasks) and ASR (insertion, deletion, substitution). We probe BERT based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand multifarious properties in the absence of any speech cues. Empirical results indicate that the LM is surprisingly good at capturing conversational properties such as pause prediction and overtalk detection from lexical tokens. On the downside, the LM scores low on turn-tasks and ASR error prediction. Additionally, pre-training the LM on spoken transcripts restrains its linguistic understanding. Finally, we establish the efficacy and transferability of the mentioned properties on two benchmark datasets: the Switchboard Dialog Act and Disfluency datasets.
1603.06665
Richard Kiehl
Richard A. Kiehl
Information Processing by Nonlinear Phase Dynamics in Locally Connected Arrays
null
null
null
null
cs.NE cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research toward powerful information processing systems that circumvent the interconnect bottleneck by exploiting the nonlinear evolution of multiple phase dynamics in locally connected arrays is discussed. We focus on a scheme in which logic states are defined by the electrical phase of a dynamic process and information processing is realized through interactions between the elements in the array. Simulation results are given for networks comprised of neuron-like integrate-and-fire elements, which could potentially be implemented by ultra-small tunnel junctions, molecules and other types of nanoscale elements. This approach could lead to powerful information processing systems due to massive parallelism in simple, highly scalable nano-architectures. The rationale for this approach, its advantages, simulation results, critical issues, and future research directions are discussed.
[ { "created": "Tue, 22 Mar 2016 03:14:00 GMT", "version": "v1" } ]
2016-03-23
[ [ "Kiehl", "Richard A.", "" ] ]
Research toward powerful information processing systems that circumvent the interconnect bottleneck by exploiting the nonlinear evolution of multiple phase dynamics in locally connected arrays is discussed. We focus on a scheme in which logic states are defined by the electrical phase of a dynamic process and information processing is realized through interactions between the elements in the array. Simulation results are given for networks comprised of neuron-like integrate-and-fire elements, which could potentially be implemented by ultra-small tunnel junctions, molecules and other types of nanoscale elements. This approach could lead to powerful information processing systems due to massive parallelism in simple, highly scalable nano-architectures. The rationale for this approach, its advantages, simulation results, critical issues, and future research directions are discussed.
2401.02152
Yo Kobayashi Dr.
Yo Kobayashi, Yoshihiro Katagi
Estimating continuous data of wrist joint angles using ultrasound images
null
null
null
null
cs.HC cs.RO eess.SP
http://creativecommons.org/licenses/by/4.0/
Ultrasound imaging has recently been introduced as a sensing interface for joint motion estimation. The use of ultrasound images as an estimation method is expected to improve the control performance of assistive devices and human--machine interfaces. This study aimed to estimate continuous wrist joint angles using ultrasound images. Specifically, in an experiment, joint angle information was obtained during extension--flexion movements, and ultrasound images of the associated muscles were acquired. Using the features obtained from ultrasound images, a multivariate linear regression model was used to estimate the joint angles. The coordinates of the feature points obtained using optical flow from the ultrasound images were used as explanatory variables of the multivariate linear regression model. The model was trained and tested for each trial by each participant to verify the estimation accuracy. The results show that the mean and standard deviation of the estimation accuracy for all trials were root mean square error (RMSE)=1.82 $\pm$ 0.54 deg and coefficient of determination (R2)=0.985 $\pm$ 0.009. Our method achieves a highly accurate estimation of joint angles compared with previous studies using other signals, such as surface electromyography, while the multivariate linear regression model is simple and both computational and model training costs are low.
[ { "created": "Thu, 4 Jan 2024 09:04:16 GMT", "version": "v1" } ]
2024-01-05
[ [ "Kobayashi", "Yo", "" ], [ "Katagi", "Yoshihiro", "" ] ]
Ultrasound imaging has recently been introduced as a sensing interface for joint motion estimation. The use of ultrasound images as an estimation method is expected to improve the control performance of assistive devices and human--machine interfaces. This study aimed to estimate continuous wrist joint angles using ultrasound images. Specifically, in an experiment, joint angle information was obtained during extension--flexion movements, and ultrasound images of the associated muscles were acquired. Using the features obtained from ultrasound images, a multivariate linear regression model was used to estimate the joint angles. The coordinates of the feature points obtained using optical flow from the ultrasound images were used as explanatory variables of the multivariate linear regression model. The model was trained and tested for each trial by each participant to verify the estimation accuracy. The results show that the mean and standard deviation of the estimation accuracy for all trials were root mean square error (RMSE)=1.82 $\pm$ 0.54 deg and coefficient of determination (R2)=0.985 $\pm$ 0.009. Our method achieves a highly accurate estimation of joint angles compared with previous studies using other signals, such as surface electromyography, while the multivariate linear regression model is simple and both computational and model training costs are low.
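As a rough illustration of the estimation pipeline described above — optical-flow feature coordinates as explanatory variables of a multivariate linear regression on the joint angle — the following sketch fits and evaluates such a model with ordinary least squares. The array shapes, the added intercept term, and the helper names (`fit_angle_model`, `rmse_r2`) are assumptions for illustration; the authors' exact feature extraction and evaluation protocol are described in the paper.

```python
import numpy as np

def fit_angle_model(flow_coords, angles):
    """Least-squares fit mapping optical-flow feature coordinates to wrist angle.

    flow_coords: (n_frames, n_features) tracked x/y coordinates of feature points.
    angles:      (n_frames,) measured wrist joint angles in degrees.
    """
    X = np.hstack([flow_coords, np.ones((len(flow_coords), 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, angles, rcond=None)
    return coef

def predict_angle(coef, flow_coords):
    X = np.hstack([flow_coords, np.ones((len(flow_coords), 1))])
    return X @ coef

def rmse_r2(y_true, y_pred):
    """Report the two accuracy measures quoted in the abstract (RMSE and R2)."""
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, float(r2)
```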
2306.10042
Fan Yang
Fan Yang, Mian Zhang, Gongzhen Hu and Xiabing Zhou
A Pairing Enhancement Approach for Aspect Sentiment Triplet Extraction
12 pages, 4 figures
null
null
null
cs.IR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the triplet of an aspect term, an opinion term, and their corresponding sentiment polarity from the review texts. Due to the complexity of language and the existence of multiple aspect terms and opinion terms in a single sentence, current models often confuse the connections between an aspect term and the opinion term describing it. To address this issue, we propose a pairing enhancement approach for ASTE, which incorporates contrastive learning during the training stage to inject aspect-opinion pairing knowledge into the triplet extraction model. Experimental results demonstrate that our approach performs well on four ASTE datasets (i.e., 14lap, 14res, 15res and 16res) compared to several related classical and state-of-the-art triplet extraction methods. Moreover, ablation studies conduct an analysis and verify the advantage of contrastive learning over other pairing enhancement approaches.
[ { "created": "Sun, 11 Jun 2023 07:32:10 GMT", "version": "v1" } ]
2023-06-21
[ [ "Yang", "Fan", "" ], [ "Zhang", "Mian", "" ], [ "Hu", "Gongzhen", "" ], [ "Zhou", "Xiabing", "" ] ]
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the triplet of an aspect term, an opinion term, and their corresponding sentiment polarity from the review texts. Due to the complexity of language and the existence of multiple aspect terms and opinion terms in a single sentence, current models often confuse the connections between an aspect term and the opinion term describing it. To address this issue, we propose a pairing enhancement approach for ASTE, which incorporates contrastive learning during the training stage to inject aspect-opinion pairing knowledge into the triplet extraction model. Experimental results demonstrate that our approach performs well on four ASTE datasets (i.e., 14lap, 14res, 15res and 16res) compared to several related classical and state-of-the-art triplet extraction methods. Moreover, ablation studies conduct an analysis and verify the advantage of contrastive learning over other pairing enhancement approaches.
2106.00906
Daniel McKenzie
Daniel McKenzie, Howard Heaton, Qiuwei Li, Samy Wu Fung, Stanley Osher, Wotao Yin
Operator Splitting for Learning to Predict Equilibria in Convex Games
To appear in SIMODS
null
null
null
cs.LG cs.GT math.OC
http://creativecommons.org/licenses/by/4.0/
Systems of competing agents can often be modeled as games. Assuming rationality, the most likely outcomes are given by an equilibrium (e.g. a Nash equilibrium). In many practical settings, games are influenced by context, i.e. additional data beyond the control of any agent (e.g. weather for traffic and fiscal policy for market economies). Often the exact game mechanics are unknown, yet vast amounts of historical data consisting of (context, equilibrium) pairs are available, raising the possibility of learning a solver which predicts the equilibria given only the context. We introduce Nash Fixed Point Networks (N-FPNs), a class of neural networks that naturally output equilibria. Crucially, N-FPNs employ a constraint decoupling scheme to handle complicated agent action sets while avoiding expensive projections. Empirically, we find N-FPNs are compatible with the recently developed Jacobian-Free Backpropagation technique for training implicit networks, making them significantly faster and easier to train than prior models. Our experiments show N-FPNs are capable of scaling to problems orders of magnitude larger than existing learned game solvers.
[ { "created": "Wed, 2 Jun 2021 02:55:46 GMT", "version": "v1" }, { "created": "Thu, 3 Feb 2022 20:40:23 GMT", "version": "v2" }, { "created": "Wed, 8 Nov 2023 22:00:48 GMT", "version": "v3" }, { "created": "Tue, 11 Jun 2024 23:32:53 GMT", "version": "v4" } ]
2024-06-13
[ [ "McKenzie", "Daniel", "" ], [ "Heaton", "Howard", "" ], [ "Li", "Qiuwei", "" ], [ "Fung", "Samy Wu", "" ], [ "Osher", "Stanley", "" ], [ "Yin", "Wotao", "" ] ]
Systems of competing agents can often be modeled as games. Assuming rationality, the most likely outcomes are given by an equilibrium (e.g. a Nash equilibrium). In many practical settings, games are influenced by context, i.e. additional data beyond the control of any agent (e.g. weather for traffic and fiscal policy for market economies). Often the exact game mechanics are unknown, yet vast amounts of historical data consisting of (context, equilibrium) pairs are available, raising the possibility of learning a solver which predicts the equilibria given only the context. We introduce Nash Fixed Point Networks (N-FPNs), a class of neural networks that naturally output equilibria. Crucially, N-FPNs employ a constraint decoupling scheme to handle complicated agent action sets while avoiding expensive projections. Empirically, we find N-FPNs are compatible with the recently developed Jacobian-Free Backpropagation technique for training implicit networks, making them significantly faster and easier to train than prior models. Our experiments show N-FPNs are capable of scaling to problems orders of magnitude larger than existing learned game solvers.
2001.02981
Laura Titolo
Laura Titolo, Mariano Moscato, Cesar A. Mu\~noz
Automatic generation and verification of test-stable floating-point code
32 pages. arXiv admin note: text overlap with arXiv:1808.04289
null
null
null
cs.PL cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Test instability in a floating-point program occurs when the control flow of the program diverges from its ideal execution assuming real arithmetic. This phenomenon is caused by the presence of round-off errors that affect the evaluation of arithmetic expressions occurring in conditional statements. Unstable tests may lead to significant errors in safety-critical applications that depend on numerical computations. Writing programs that take into consideration test instability is a difficult task that requires expertise on finite precision computations and rounding errors. This paper presents a toolchain to automatically generate and verify a provably correct test-stable floating-point program from a functional specification in real arithmetic. The input is a real-valued program written in the Prototype Verification System (PVS) specification language and the output is a transformed floating-point C program annotated with ANSI/ISO C Specification Language (ACSL) contracts. These contracts relate the floating-point program to its functional specification in real arithmetic. The transformed program detects if unstable tests may occur and, in these cases, issues a warning and terminates. An approach that combines the Frama-C analyzer, the PRECiSA round-off error estimator, and PVS is proposed to automatically verify that the generated program code is correct in the sense that, if the program terminates without a warning, it follows the same computational path as its real-valued functional specification.
[ { "created": "Tue, 7 Jan 2020 19:46:42 GMT", "version": "v1" } ]
2020-01-10
[ [ "Titolo", "Laura", "" ], [ "Moscato", "Mariano", "" ], [ "Muñoz", "Cesar A.", "" ] ]
Test instability in a floating-point program occurs when the control flow of the program diverges from its ideal execution assuming real arithmetic. This phenomenon is caused by the presence of round-off errors that affect the evaluation of arithmetic expressions occurring in conditional statements. Unstable tests may lead to significant errors in safety-critical applications that depend on numerical computations. Writing programs that take into consideration test instability is a difficult task that requires expertise on finite precision computations and rounding errors. This paper presents a toolchain to automatically generate and verify a provably correct test-stable floating-point program from a functional specification in real arithmetic. The input is a real-valued program written in the Prototype Verification System (PVS) specification language and the output is a transformed floating-point C program annotated with ANSI/ISO C Specification Language (ACSL) contracts. These contracts relate the floating-point program to its functional specification in real arithmetic. The transformed program detects if unstable tests may occur and, in these cases, issues a warning and terminates. An approach that combines the Frama-C analyzer, the PRECiSA round-off error estimator, and PVS is proposed to automatically verify that the generated program code is correct in the sense that, if the program terminates without a warning, it follows the same computational path as its real-valued functional specification.
2212.10460
Hao Wang
Hao Wang
PoissonMat: Remodeling Matrix Factorization using Poisson Distribution and Solving the Cold Start Problem without Input Data
null
null
10.1109/MLISE57402.2022.00055
null
cs.IR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Matrix Factorization is one of the most successful recommender system techniques over the past decade. However, the classic probabilistic theory framework for matrix factorization is modeled using normal distributions. To find better probabilistic models, algorithms such as RankMat, ZeroMat and DotMat have been invented in recent years. In this paper, we model the user rating behavior in recommender systems as a Poisson process, and design an algorithm that relies on no input data to solve the recommendation problem and the cold start issue at the same time. We prove the superiority of our algorithm in comparison with matrix factorization, random placement, Zipf placement, ZeroMat, DotMat, etc.
[ { "created": "Tue, 6 Dec 2022 01:20:26 GMT", "version": "v1" } ]
2022-12-21
[ [ "Wang", "Hao", "" ] ]
Matrix Factorization is one of the most successful recommender system techniques over the past decade. However, the classic probabilistic theory framework for matrix factorization is modeled using normal distributions. To find better probabilistic models, algorithms such as RankMat, ZeroMat and DotMat have been invented in recent years. In this paper, we model the user rating behavior in recommender systems as a Poisson process, and design an algorithm that relies on no input data to solve the recommendation problem and the cold start issue at the same time. We prove the superiority of our algorithm in comparison with matrix factorization, random placement, Zipf placement, ZeroMat, DotMat, etc.
1506.04356
Milan Rajkovic
Milan Rajkovi\'c and Milo\v{s} Milovanovi\'c
The Artists who Forged Themselves: Detecting Creativity in Art
26 pages, 8 figures
null
null
null
cs.CV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creativity and the understanding of cognitive processes involved in the creative process are relevant to all human activities. Comprehension of creativity in the arts is of special interest due to the involvement of many scientific and non-scientific disciplines. Using digital representation of paintings, we show that the creative process in painting art may be objectively recognized within the mathematical framework of self-organization, a process characteristic of nonlinear dynamic systems and occurring in natural and social sciences. Unlike the artist identification process or the recognition of forgery, which presupposes the knowledge of the original work, our method requires no prior knowledge on the originality of the work of art. The original paintings are recognized as realizations of the creative process which, in general, is shown to correspond to self-organization of texture features which determine the aesthetic complexity of the painting. The method consists of the wavelet based statistical digital image processing and the measure of statistical complexity which represents the minimal (average) information necessary for optimal prediction. The statistical complexity is based on the properly defined causal states with optimal predictive properties. Two different time concepts related to the works of art are introduced: the internal time and the artistic time. The internal time of the artwork is determined by the span of causal dependencies between wavelet coefficients while the artistic time refers to the internal time during which complexity increases, where complexity refers to compositional, aesthetic and structural arrangement of texture features. The method is illustrated by recognizing the original paintings from the copies made by the artists themselves, including the works of the famous surrealist painter Ren\'{e} Magritte.
[ { "created": "Sun, 14 Jun 2015 06:44:34 GMT", "version": "v1" } ]
2015-06-16
[ [ "Rajković", "Milan", "" ], [ "Milovanović", "Miloš", "" ] ]
Creativity and the understanding of cognitive processes involved in the creative process are relevant to all human activities. Comprehension of creativity in the arts is of special interest due to the involvement of many scientific and non-scientific disciplines. Using digital representation of paintings, we show that the creative process in painting art may be objectively recognized within the mathematical framework of self-organization, a process characteristic of nonlinear dynamic systems and occurring in natural and social sciences. Unlike the artist identification process or the recognition of forgery, which presupposes the knowledge of the original work, our method requires no prior knowledge on the originality of the work of art. The original paintings are recognized as realizations of the creative process which, in general, is shown to correspond to self-organization of texture features which determine the aesthetic complexity of the painting. The method consists of the wavelet based statistical digital image processing and the measure of statistical complexity which represents the minimal (average) information necessary for optimal prediction. The statistical complexity is based on the properly defined causal states with optimal predictive properties. Two different time concepts related to the works of art are introduced: the internal time and the artistic time. The internal time of the artwork is determined by the span of causal dependencies between wavelet coefficients while the artistic time refers to the internal time during which complexity increases, where complexity refers to compositional, aesthetic and structural arrangement of texture features. The method is illustrated by recognizing the original paintings from the copies made by the artists themselves, including the works of the famous surrealist painter Ren\'{e} Magritte.
1312.7442
Jamil Hamodi Mr.
Jamil Hamodi, Khaled Salah, Ravindra Thool
Evaluating the Performance of IPTV over Fixed WiMAX
9 Pages, 9 Figures. arXiv admin note: substantial text overlap with other internet sources by other authors
International Journal of Computer Applications 84(6):35-43, December 2013. Published by Foundation of Computer Science, New York, USA
10.5120/14582-2812
null
cs.MM cs.NI
http://creativecommons.org/licenses/by/3.0/
IEEE specifies different modulation techniques for WiMAX; namely, BPSK, QPSK, 16 QAM and 64 QAM. This paper studies the performance of Internet Protocol Television (IPTV) over a Fixed WiMAX system considering different combinations of digital modulation. The performance is studied taking into account a number of key system parameters, which include variation in video coding, path loss, scheduling service classes, and different rate codes in FEC channel coding. The performance study was conducted using OPNET simulation. The performance is studied in terms of packet loss, packet jitter, end-to-end delay, and network throughput. Simulation results show that higher order modulation and coding schemes (namely, 16 QAM and 64 QAM) yield better performance than that of QPSK.
[ { "created": "Sat, 28 Dec 2013 15:19:09 GMT", "version": "v1" } ]
2016-01-13
[ [ "Hamodi", "Jamil", "" ], [ "Salah", "Khaled", "" ], [ "Thool", "Ravindra", "" ] ]
IEEE specifies different modulation techniques for WiMAX; namely, BPSK, QPSK, 16 QAM and 64 QAM. This paper studies the performance of Internet Protocol Television (IPTV) over a Fixed WiMAX system considering different combinations of digital modulation. The performance is studied taking into account a number of key system parameters, which include variation in video coding, path loss, scheduling service classes, and different rate codes in FEC channel coding. The performance study was conducted using OPNET simulation. The performance is studied in terms of packet loss, packet jitter, end-to-end delay, and network throughput. Simulation results show that higher order modulation and coding schemes (namely, 16 QAM and 64 QAM) yield better performance than that of QPSK.
2305.04395
Yulin Shao
Runxin Zhang, Yulin Shao, Menghan Li, Lu Lu
Optical Integrated Sensing and Communication
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores a new paradigm of optical integrated sensing and communication (O-ISAC). Our investigation reveals that optical communication and optical sensing are two inherently complementary technologies. On the one hand, optical communication provides the necessary illumination for optical sensing. On the other hand, optical sensing provides environmental information for optical communication. These insights form the foundation of a directionless integrated system, which constitutes the first phase of O-ISAC. We further put forth the concept of optical beamforming using the collimating lens, whereby the light emitted by optical sources is concentrated onto the target device. This greatly improves communication rate and sensing accuracy, thanks to remarkably increased light intensity. Simulation results confirm the significant performance gains of our O-ISAC system over a separated sensing and communication system. With the collimating lens, the light intensity arriving at the target object is increased from 1.09% to 78.06%. The sensing accuracy and communication BER are improved by 62.06dB and 65.52dB, respectively.
[ { "created": "Mon, 8 May 2023 00:03:55 GMT", "version": "v1" }, { "created": "Wed, 24 May 2023 00:25:36 GMT", "version": "v2" } ]
2023-05-25
[ [ "Zhang", "Runxin", "" ], [ "Shao", "Yulin", "" ], [ "Li", "Menghan", "" ], [ "Lu", "Lu", "" ] ]
This paper explores a new paradigm of optical integrated sensing and communication (O-ISAC). Our investigation reveals that optical communication and optical sensing are two inherently complementary technologies. On the one hand, optical communication provides the necessary illumination for optical sensing. On the other hand, optical sensing provides environmental information for optical communication. These insights form the foundation of a directionless integrated system, which constitutes the first phase of O-ISAC. We further put forth the concept of optical beamforming using the collimating lens, whereby the light emitted by optical sources is concentrated onto the target device. This greatly improves communication rate and sensing accuracy, thanks to remarkably increased light intensity. Simulation results confirm the significant performance gains of our O-ISAC system over a separated sensing and communication system. With the collimating lens, the light intensity arriving at the target object is increased from 1.09% to 78.06%. The sensing accuracy and communication BER are improved by 62.06dB and 65.52dB, respectively.
1803.05181
Rizwan Ahmed Khan
Muhammad Shoaib Jaliawala, Rizwan Ahmed Khan
Can Autism be Catered with Artificial Intelligence-Assisted Intervention Technology? A Literature Review
null
Artificial Intelligence Review 2019
10.1007/s10462-019-09686-8
null
cs.HC cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents an extensive literature review of technology based intervention methodologies for individuals facing Autism Spectrum Disorder (ASD). Reviewed methodologies include: contemporary Computer Aided Systems (CAS), Computer Vision Assisted Technologies (CVAT) and Virtual Reality (VR) or Artificial Intelligence (AI)-Assisted interventions. The research over the past decade has provided enough demonstrations that individuals with ASD have a strong interest in technology based interventions, which are useful both in clinical settings and at home and in classrooms. Despite showing great promise, research in developing an advanced technology based intervention that is clinically quantitative for ASD is minimal. Moreover, clinicians are generally not convinced about the potential of technology based interventions due to the non-empirical nature of published results. A major reason behind this lack of acceptability is that a vast majority of studies on distinct intervention methodologies do not follow any specific standard or research design. We conclude from our findings that there remains a gap between the research communities of computer science, psychology and neuroscience in developing an AI assisted intervention technology for individuals suffering from ASD. Following the development of a standardized AI based intervention technology, a database needs to be developed to devise effective AI algorithms.
[ { "created": "Wed, 14 Mar 2018 09:56:39 GMT", "version": "v1" }, { "created": "Fri, 16 Mar 2018 04:37:12 GMT", "version": "v2" }, { "created": "Sat, 10 Nov 2018 18:54:34 GMT", "version": "v3" }, { "created": "Fri, 23 Nov 2018 05:15:02 GMT", "version": "v4" }, { "created": "Sat, 19 Jan 2019 16:16:32 GMT", "version": "v5" } ]
2019-02-21
[ [ "Jaliawala", "Muhammad Shoaib", "" ], [ "Khan", "Rizwan Ahmed", "" ] ]
This article presents an extensive literature review of technology based intervention methodologies for individuals facing Autism Spectrum Disorder (ASD). Reviewed methodologies include: contemporary Computer Aided Systems (CAS), Computer Vision Assisted Technologies (CVAT) and Virtual Reality (VR) or Artificial Intelligence (AI)-Assisted interventions. The research over the past decade has provided enough demonstrations that individuals with ASD have a strong interest in technology based interventions, which are useful both in clinical settings and at home and in classrooms. Despite showing great promise, research in developing an advanced technology based intervention that is clinically quantitative for ASD is minimal. Moreover, clinicians are generally not convinced about the potential of technology based interventions due to the non-empirical nature of published results. A major reason behind this lack of acceptability is that a vast majority of studies on distinct intervention methodologies do not follow any specific standard or research design. We conclude from our findings that there remains a gap between the research communities of computer science, psychology and neuroscience in developing an AI assisted intervention technology for individuals suffering from ASD. Following the development of a standardized AI based intervention technology, a database needs to be developed to devise effective AI algorithms.
1003.2682
David Spivak
David I. Spivak
Table manipulation in simplicial databases
8 pages.
null
null
null
cs.DB cs.IR
http://creativecommons.org/licenses/by/3.0/
In \cite{Spi}, we developed a category of databases in which the schema of a database is represented as a simplicial set. Each simplex corresponds to a table in the database. There, our main concern was to find a categorical formulation of databases; the simplicial nature of the schemas was to some degree unexpected and unexploited. In the present note, we show how to use this geometric formulation effectively on a computer. If we think of each simplex as a polygonal tile, we can imagine assembling custom databases by mixing and matching tiles. Queries on this database can be performed by drawing paths through the resulting tile formations, selecting records at the start-point of this path and retrieving corresponding records at its end-point.
[ { "created": "Sat, 13 Mar 2010 06:22:07 GMT", "version": "v1" } ]
2010-03-16
[ [ "Spivak", "David I.", "" ] ]
In \cite{Spi}, we developed a category of databases in which the schema of a database is represented as a simplicial set. Each simplex corresponds to a table in the database. There, our main concern was to find a categorical formulation of databases; the simplicial nature of the schemas was to some degree unexpected and unexploited. In the present note, we show how to use this geometric formulation effectively on a computer. If we think of each simplex as a polygonal tile, we can imagine assembling custom databases by mixing and matching tiles. Queries on this database can be performed by drawing paths through the resulting tile formations, selecting records at the start-point of this path and retrieving corresponding records at its end-point.
1410.4672
Biju Issac
B. Issac, R. Chiong, S.M. Jacob
Analysis of Phishing Attacks and Countermeasures
8 pages
Issac, B., Chiong, R. & Jacob, S. M. (2006, June). Analysis of Phishing Attacks and Countermeasures. IBIMA, Bonn, Germany, ISBN 0-9753393-5-4, pp.339-346
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the biggest problems with Internet technology is unwanted spam email. The well disguised phishing email comes in as part of the spam and makes its entry into the inbox quite frequently nowadays. While phishing is normally considered a consumer issue, the fraudulent tactics the phishers use are now intimidating the corporate sector as well. In this paper, we analyze the various aspects of phishing attacks and draw on some possible defenses as countermeasures. We initially address the different forms of phishing attacks in theory, and then look at some examples of attacks in practice, along with their common defenses. We also highlight some recent statistical data on phishing scams to project the seriousness of the problem. Finally, some specific phishing countermeasures at both the user level and the organization level are listed, and a multi-layered anti-phishing proposal is presented to round up our studies.
[ { "created": "Fri, 17 Oct 2014 09:34:50 GMT", "version": "v1" } ]
2014-10-20
[ [ "Issac", "B.", "" ], [ "Chiong", "R.", "" ], [ "Jacob", "S. M.", "" ] ]
One of the biggest problems with Internet technology is unwanted spam email. The well disguised phishing email comes in as part of the spam and makes its entry into the inbox quite frequently nowadays. While phishing is normally considered a consumer issue, the fraudulent tactics the phishers use are now intimidating the corporate sector as well. In this paper, we analyze the various aspects of phishing attacks and draw on some possible defenses as countermeasures. We initially address the different forms of phishing attacks in theory, and then look at some examples of attacks in practice, along with their common defenses. We also highlight some recent statistical data on phishing scams to project the seriousness of the problem. Finally, some specific phishing countermeasures at both the user level and the organization level are listed, and a multi-layered anti-phishing proposal is presented to round up our studies.
2405.05109
Weijia Zhang
Weijia Zhang, Vaishali Pal, Jia-Hong Huang, Evangelos Kanoulas, Maarten de Rijke
QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs
16 pages, 3 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Table summarization is a crucial task aimed at condensing information from tabular data into concise and comprehensible textual summaries. However, existing approaches often fall short of adequately meeting users' information and quality requirements and tend to overlook the complexities of real-world queries. In this paper, we propose a novel method to address these limitations by introducing query-focused multi-table summarization. Our approach, which comprises a table serialization module, a summarization controller, and a large language model (LLM), utilizes textual queries and multiple tables to generate query-dependent table summaries tailored to users' information needs. To facilitate research in this area, we present a comprehensive dataset specifically tailored for this task, consisting of 4909 query-summary pairs, each associated with multiple tables. Through extensive experiments using our curated dataset, we demonstrate the effectiveness of our proposed method compared to baseline approaches. Our findings offer insights into the challenges of complex table reasoning for precise summarization, contributing to the advancement of research in query-focused multi-table summarization.
[ { "created": "Wed, 8 May 2024 15:05:55 GMT", "version": "v1" } ]
2024-05-09
[ [ "Zhang", "Weijia", "" ], [ "Pal", "Vaishali", "" ], [ "Huang", "Jia-Hong", "" ], [ "Kanoulas", "Evangelos", "" ], [ "de Rijke", "Maarten", "" ] ]
Table summarization is a crucial task aimed at condensing information from tabular data into concise and comprehensible textual summaries. However, existing approaches often fall short of adequately meeting users' information and quality requirements and tend to overlook the complexities of real-world queries. In this paper, we propose a novel method to address these limitations by introducing query-focused multi-table summarization. Our approach, which comprises a table serialization module, a summarization controller, and a large language model (LLM), utilizes textual queries and multiple tables to generate query-dependent table summaries tailored to users' information needs. To facilitate research in this area, we present a comprehensive dataset specifically tailored for this task, consisting of 4909 query-summary pairs, each associated with multiple tables. Through extensive experiments using our curated dataset, we demonstrate the effectiveness of our proposed method compared to baseline approaches. Our findings offer insights into the challenges of complex table reasoning for precise summarization, contributing to the advancement of research in query-focused multi-table summarization.
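The table serialization module mentioned above is not specified in the abstract; as a purely hypothetical sketch of what such a module might do, the snippet below linearizes a query and several tables into one prompt string for an LLM. The dict layout and the function name `serialize_tables` are assumptions, not the authors' interface.

```python
def serialize_tables(query, tables):
    """Linearize a textual query plus multiple tables into a single prompt string.

    tables: list of dicts {"name": str, "header": [str, ...], "rows": [[str, ...], ...]}.
    """
    parts = [f"Query: {query}"]
    for t in tables:
        lines = [f"Table: {t['name']}", " | ".join(t["header"])]
        lines += [" | ".join(str(cell) for cell in row) for row in t["rows"]]
        parts.append("\n".join(lines))
    return "\n\n".join(parts)

# Example usage with a toy table:
prompt = serialize_tables(
    "Which employee has the highest salary?",
    [{"name": "employees",
      "header": ["name", "salary"],
      "rows": [["Ada", "120000"], ["Bob", "95000"]]}],
)
```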
2210.17017
Minkyu Jung
Minkyu Jung, Ohhyeok Kwon, Seunghyun Seo, Soonshin Seo
Blank Collapse: Compressing CTC emission for the faster decoding
Accepted in Interspeech 2023
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Connectionist Temporal Classification (CTC) model is a very efficient method for modeling sequences, especially for speech data. In order to use a CTC model for an Automatic Speech Recognition (ASR) task, beam search decoding with an external language model such as an n-gram LM is necessary to obtain reasonable results. In this paper we analyze the blank label in CTC beam search in depth and propose a very simple method to reduce the amount of computation, resulting in faster beam search decoding. With this method, we can get up to 78% faster decoding than ordinary beam search decoding with a very small loss of accuracy on the LibriSpeech datasets. We prove this method is effective not only practically, by experiments, but also theoretically, by mathematical reasoning. We also observe that this reduction is more pronounced when the accuracy of the model is higher.
[ { "created": "Mon, 31 Oct 2022 02:12:51 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2023 00:39:38 GMT", "version": "v2" } ]
2023-06-28
[ [ "Jung", "Minkyu", "" ], [ "Kwon", "Ohhyeok", "" ], [ "Seo", "Seunghyun", "" ], [ "Seo", "Soonshin", "" ] ]
The Connectionist Temporal Classification (CTC) model is a very efficient method for modeling sequences, especially for speech data. In order to use a CTC model for an Automatic Speech Recognition (ASR) task, beam search decoding with an external language model such as an n-gram LM is necessary to obtain reasonable results. In this paper we analyze the blank label in CTC beam search in depth and propose a very simple method to reduce the amount of computation, resulting in faster beam search decoding. With this method, we can get up to 78% faster decoding than ordinary beam search decoding with a very small loss of accuracy on the LibriSpeech datasets. We prove this method is effective not only practically, by experiments, but also theoretically, by mathematical reasoning. We also observe that this reduction is more pronounced when the accuracy of the model is higher.
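A simplified sketch of the blank-frame compression idea described above is given below: frames whose blank posterior exceeds a threshold are dropped before beam search, keeping one frame at each boundary of a dropped run. The threshold value, the boundary-keeping rule, and the function name `blank_collapse` are assumptions for illustration; the paper defines the exact criterion and proves when the decoding result is preserved.

```python
import numpy as np

def blank_collapse(log_probs, blank_id=0, blank_threshold=0.999):
    """Compress a CTC emission by removing frames dominated by the blank label.

    log_probs: (T, V) per-frame log posteriors from a CTC acoustic model.
    Returns the compressed emission and the indices of the frames that were kept.
    """
    blank_prob = np.exp(log_probs[:, blank_id])
    drop = blank_prob >= blank_threshold
    keep = ~drop
    # retain one blank-dominated frame at each edge of a dropped run,
    # so label/blank transitions stay visible to the decoder
    keep[1:] |= ~drop[:-1]
    keep[:-1] |= ~drop[1:]
    idx = np.nonzero(keep)[0]
    return log_probs[idx], idx
```

The compressed emission can then be handed to an ordinary CTC beam search decoder in place of the full one.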
1108.5703
Sivakumar Madesan
Jeevan H E, Prashanth P P, Punith Kumar S N, Vinay Hegde
Web Pages Clustering: A New Approach
Clustering, concept mining, information retrieval, metasearch engine
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid growth of the web has resulted in a vast volume of information. Information availability at a rapid speed to the user is vital. The English language (or any other, for that matter) has a lot of ambiguity in the usage of words. So there is no guarantee that a keyword based search engine will provide the required results. This paper introduces the use of a dictionary (standardised) to obtain the context with which a keyword is used and in turn cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance the search efficiency.
[ { "created": "Fri, 26 Aug 2011 07:02:35 GMT", "version": "v1" } ]
2011-08-30
[ [ "E", "Jeevan H", "" ], [ "P", "Prashanth P", "" ], [ "N", "Punith Kumar S", "" ], [ "Hegde", "Vinay", "" ] ]
The rapid growth of the web has resulted in a vast volume of information. Information availability at a rapid speed to the user is vital. The English language (or any other, for that matter) has a lot of ambiguity in the usage of words. So there is no guarantee that a keyword based search engine will provide the required results. This paper introduces the use of a dictionary (standardised) to obtain the context with which a keyword is used and in turn cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance the search efficiency.
2107.00934
Jiahui Li
Jiahui Li, Wen Chen, Xiaodi Huang, Zhiqiang Hu, Qi Duan, Hongsheng Li, Dimitris N. Metaxas, Shaoting Zhang
Hybrid Supervision Learning for Pathology Whole Slide Image Classification
Accepted in MICCAI2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Weak supervision learning on classification labels has demonstrated high performance in various tasks, while a few pixel-level fine annotations are also affordable. Naturally, the question arises whether the combination of pixel-level (e.g., segmentation) and image-level (e.g., classification) annotation can introduce further improvement. However, in computational pathology this is a difficult task: the high resolution of whole slide images makes end-to-end classification model training difficult, which has made research on weak or hybrid supervision learning challenging in the past. To handle this problem, we propose a hybrid supervision learning framework for this kind of high resolution image with sufficient image-level coarse annotations and a few pixel-level fine labels. This framework, when applied to training the patch model, can carefully make use of coarse image-level labels to refine generated pixel-level pseudo labels. A complete strategy is proposed to suppress pixel-level false positives and false negatives. A large hybrid annotated dataset is used to evaluate the effectiveness of hybrid supervision learning. By extracting pixel-level pseudo labels in initially image-level labeled samples, we achieve 5.2% higher specificity than purely training on existing labels while retaining 100% sensitivity, in the task of image-level classification as positive or negative.
[ { "created": "Fri, 2 Jul 2021 09:46:06 GMT", "version": "v1" }, { "created": "Mon, 5 Jul 2021 03:09:33 GMT", "version": "v2" }, { "created": "Mon, 25 Oct 2021 06:45:28 GMT", "version": "v3" } ]
2021-10-26
[ [ "Li", "Jiahui", "" ], [ "Chen", "Wen", "" ], [ "Huang", "Xiaodi", "" ], [ "Hu", "Zhiqiang", "" ], [ "Duan", "Qi", "" ], [ "Li", "Hongsheng", "" ], [ "Metaxas", "Dimitris N.", "" ], [ "Zhang", "Shaoting", "" ] ]
Weak supervision learning on classification labels has demonstrated high performance in various tasks, while a few pixel-level fine annotations are also affordable. Naturally, the question arises whether the combination of pixel-level (e.g., segmentation) and image-level (e.g., classification) annotation can introduce further improvement. However, in computational pathology this is a difficult task: the high resolution of whole slide images makes end-to-end classification model training difficult, which has made research on weak or hybrid supervision learning challenging in the past. To handle this problem, we propose a hybrid supervision learning framework for this kind of high resolution image with sufficient image-level coarse annotations and a few pixel-level fine labels. This framework, when applied to training the patch model, can carefully make use of coarse image-level labels to refine generated pixel-level pseudo labels. A complete strategy is proposed to suppress pixel-level false positives and false negatives. A large hybrid annotated dataset is used to evaluate the effectiveness of hybrid supervision learning. By extracting pixel-level pseudo labels in initially image-level labeled samples, we achieve 5.2% higher specificity than purely training on existing labels while retaining 100% sensitivity, in the task of image-level classification as positive or negative.
2010.16336
Ari Kobren
Naveen Jafer Nizar, Ari Kobren
Leveraging Extracted Model Adversaries for Improved Black Box Attacks
null
Analyzing and interpreting neural networks for NLP, 2020
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for adversarial input generation against black box models for reading comprehension based question answering. Our approach is composed of two steps. First, we approximate a victim black box model via model extraction (Krishna et al., 2020). Second, we use our own white box method to generate input perturbations that cause the approximate model to fail. These perturbed inputs are used against the victim. In experiments we find that our method improves on the efficacy of the AddAny---a white box attack---performed on the approximate model by 25% F1, and the AddSent attack---a black box attack---by 11% F1 (Jia and Liang, 2017).
[ { "created": "Fri, 30 Oct 2020 15:53:50 GMT", "version": "v1" }, { "created": "Mon, 2 Nov 2020 16:38:30 GMT", "version": "v2" } ]
2020-11-03
[ [ "Nizar", "Naveen Jafer", "" ], [ "Kobren", "Ari", "" ] ]
We present a method for adversarial input generation against black box models for reading comprehension based question answering. Our approach is composed of two steps. First, we approximate a victim black box model via model extraction (Krishna et al., 2020). Second, we use our own white box method to generate input perturbations that cause the approximate model to fail. These perturbed inputs are used against the victim. In experiments we find that our method improves on the efficacy of the AddAny---a white box attack---performed on the approximate model by 25% F1, and the AddSent attack---a black box attack---by 11% F1 (Jia and Liang, 2017).
1909.11015
Shiv Ram Dubey
Shiv Ram Dubey, Soumendu Chakraborty, Swalpa Kumar Roy, Snehasis Mukherjee, Satish Kumar Singh, Bidyut Baran Chaudhuri
diffGrad: An Optimization Method for Convolutional Neural Networks
null
IEEE Transactions on Neural Networks and Learning Systems, 2020
null
null
cs.LG cs.CV cs.NE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic Gradient Descent (SGD) is one of the core techniques behind the success of deep neural networks. The gradient provides information on the direction in which a function has the steepest rate of change. The main problem with basic SGD is that it changes all parameters by equal-sized steps, irrespective of gradient behavior. Hence, an efficient way of optimizing deep networks is to use adaptive step sizes for each parameter. Recently, several attempts have been made to improve gradient descent methods, such as AdaGrad, AdaDelta, RMSProp and Adam. These methods rely on the square roots of exponential moving averages of squared past gradients. Thus, these methods do not take advantage of local change in gradients. In this paper, a novel optimizer is proposed based on the difference between the present and the immediate past gradient (i.e., diffGrad). In the proposed diffGrad optimization technique, the step size is adjusted for each parameter in such a way that it has a larger step size for faster changing gradients and a smaller step size for slower changing gradients. The convergence analysis is done using the regret bound approach of the online learning framework. Rigorous analysis is made in this paper over three synthetic complex non-convex functions. The image categorization experiments are also conducted over the CIFAR10 and CIFAR100 datasets to observe the performance of diffGrad with respect to state-of-the-art optimizers such as SGDM, AdaGrad, AdaDelta, RMSProp, AMSGrad, and Adam. The residual unit (ResNet) based Convolutional Neural Network (CNN) architecture is used in the experiments. The experiments show that diffGrad outperforms other optimizers. Also, we show that diffGrad performs uniformly well for training CNNs using different activation functions. The source code is made publicly available at https://github.com/shivram1987/diffGrad.
[ { "created": "Thu, 12 Sep 2019 06:20:05 GMT", "version": "v1" }, { "created": "Tue, 24 Dec 2019 06:11:50 GMT", "version": "v2" }, { "created": "Fri, 6 Mar 2020 06:51:39 GMT", "version": "v3" }, { "created": "Sat, 27 Nov 2021 01:58:07 GMT", "version": "v4" } ]
2021-11-30
[ [ "Dubey", "Shiv Ram", "" ], [ "Chakraborty", "Soumendu", "" ], [ "Roy", "Swalpa Kumar", "" ], [ "Mukherjee", "Snehasis", "" ], [ "Singh", "Satish Kumar", "" ], [ "Chaudhuri", "Bidyut Baran", "" ] ]
Stochastic Gradient Descent (SGD) is one of the core techniques behind the success of deep neural networks. The gradient provides information on the direction in which a function has the steepest rate of change. The main problem with basic SGD is that it changes all parameters by equal-sized steps, irrespective of gradient behavior. Hence, an efficient way of optimizing deep networks is to use adaptive step sizes for each parameter. Recently, several attempts have been made to improve gradient descent methods, such as AdaGrad, AdaDelta, RMSProp and Adam. These methods rely on the square roots of exponential moving averages of squared past gradients. Thus, these methods do not take advantage of local change in gradients. In this paper, a novel optimizer is proposed based on the difference between the present and the immediate past gradient (i.e., diffGrad). In the proposed diffGrad optimization technique, the step size is adjusted for each parameter in such a way that it has a larger step size for faster changing gradients and a smaller step size for slower changing gradients. The convergence analysis is done using the regret bound approach of the online learning framework. Rigorous analysis is made in this paper over three synthetic complex non-convex functions. The image categorization experiments are also conducted over the CIFAR10 and CIFAR100 datasets to observe the performance of diffGrad with respect to state-of-the-art optimizers such as SGDM, AdaGrad, AdaDelta, RMSProp, AMSGrad, and Adam. The residual unit (ResNet) based Convolutional Neural Network (CNN) architecture is used in the experiments. The experiments show that diffGrad outperforms other optimizers. Also, we show that diffGrad performs uniformly well for training CNNs using different activation functions. The source code is made publicly available at https://github.com/shivram1987/diffGrad.
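A minimal NumPy sketch of the parameter update described above — Adam-style moments scaled by a friction coefficient computed from the change between the current and the immediately preceding gradient — is given below. Hyperparameter defaults and the class name `DiffGrad` are assumptions; the authors' reference implementation is at the GitHub link in the abstract.

```python
import numpy as np

class DiffGrad:
    """Sketch of a diffGrad-style optimizer step for a flat parameter vector."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = self.prev_grad = None
        self.t = 0

    def step(self, params, grad):
        if self.m is None:
            self.m, self.v = np.zeros_like(params), np.zeros_like(params)
            self.prev_grad = np.zeros_like(params)
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        m_hat = self.m / (1 - self.beta1 ** self.t)
        v_hat = self.v / (1 - self.beta2 ** self.t)
        # friction coefficient: sigmoid of the absolute gradient change, so a
        # slowly-changing gradient damps the step (toward half the Adam step)
        # while a fast-changing gradient allows a near-full Adam step
        xi = 1.0 / (1.0 + np.exp(-np.abs(self.prev_grad - grad)))
        self.prev_grad = grad.copy()
        return params - self.lr * xi * m_hat / (np.sqrt(v_hat) + self.eps)
```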
1906.09880
Federico Fusco
Shant Boodaghians, Federico Fusco, Stefano Leonardi, Yishay Mansour, Ruta Mehta
Online Revenue Maximization for Server Pricing
null
Auton Agent Multi-Agent Syst 36, 11 (2022)
10.1007/s10458-022-09544-y
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient and truthful mechanisms to price resources on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent/job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.
[ { "created": "Mon, 24 Jun 2019 12:26:13 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2019 10:50:31 GMT", "version": "v2" }, { "created": "Tue, 1 Oct 2019 12:32:43 GMT", "version": "v3" } ]
2024-02-20
[ [ "Boodaghians", "Shant", "" ], [ "Fusco", "Federico", "" ], [ "Leonardi", "Stefano", "" ], [ "Mansour", "Yishay", "" ], [ "Mehta", "Ruta", "" ] ]
Efficient and truthful mechanisms to price resources on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers revenue maximization in the online stochastic setting with non-preemptive jobs and a unit capacity server. One agent/job arrives at every time step, with parameters drawn from an underlying unknown distribution. We design a posted-price mechanism which can be efficiently computed, and is revenue-optimal in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic, depending only on the length of the allotted time interval and on the earliest time the server is available. If the distribution of the agent's type is only learned from observing the jobs that are executed, we prove that a polynomial number of samples is sufficient to obtain a near-optimal truthful pricing strategy.
2006.07503
Nicol\`o Campolongo
Nicol\`o Campolongo, Francesco Orabona
Temporal Variability in Implicit Online Learning
18 pages, 12 figures
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the setting of online learning, Implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses only show marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without the need of smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability and prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets.
[ { "created": "Fri, 12 Jun 2020 22:50:34 GMT", "version": "v1" }, { "created": "Fri, 6 Nov 2020 19:59:09 GMT", "version": "v2" } ]
2020-11-10
[ [ "Campolongo", "Nicolò", "" ], [ "Orabona", "Francesco", "" ] ]
In the setting of online learning, Implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses only show marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without the need of smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability and prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets.
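For readers unfamiliar with implicit online learning, the update being analyzed solves a proximal subproblem exactly at each round rather than linearizing the loss. The sketch below shows the closed form for a quadratic loss, purely as an illustration; the symbols A and b and the function name are assumptions, and for general convex losses the argmin would need an inner solver.

```python
import numpy as np

def implicit_update_quadratic(x_t, A, b, lr):
    """One implicit (proximal) online-learning step for the quadratic loss
    f_t(x) = 0.5 * x^T A x - b^T x:

        x_{t+1} = argmin_x  lr * f_t(x) + 0.5 * ||x - x_t||^2

    Setting the gradient to zero gives (lr * A + I) x = x_t + lr * b.
    """
    n = x_t.shape[0]
    return np.linalg.solve(lr * A + np.eye(n), x_t + lr * b)
```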
1408.4143
Mohammed Abdelsamea
Marghny H. Mohamed and Mohammed M. Abdelsamea
Self Organization Map based Texture Feature Extraction for Efficient Medical Image Categorization
In Proceedings of the 4th ACM International Conference on Intelligent Computing and Information Systems, ICICIS 2009, Cairo, Egypt 2009
null
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Texture is one of the most important properties of a visual surface that helps in discriminating one object from another or an object from the background. The self-organizing map (SOM) is an excellent tool in the exploratory phase of data mining. It projects its input space onto prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. This paper proposes an enhanced extraction method, based on a SOM neural network, for accurately extracting features for efficient image representation. In this approach, we apply three different partitioning approaches as region of interest (ROI) selection methods for extracting different accurate textural features from medical images as the primary step of our extraction method. Fisherfaces feature selection is then used to select discriminative features from the extracted textural features. Experimental results showed the high accuracy of medical image categorization with our proposed extraction method. Experiments were held on the Mammographic Image Analysis Society (MIAS) dataset.
[ { "created": "Mon, 14 Jul 2014 13:43:19 GMT", "version": "v1" } ]
2014-08-20
[ [ "Mohamed", "Marghny H.", "" ], [ "Abdelsamea", "Mohammed M.", "" ] ]
Texture is one of the most important properties of a visual surface that helps in discriminating one object from another or an object from the background. The self-organizing map (SOM) is an excellent tool in the exploratory phase of data mining. It projects its input space onto prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. This paper proposes an enhanced extraction method, based on a SOM neural network, for accurately extracting features for efficient image representation. In this approach, we apply three different partitioning approaches as region of interest (ROI) selection methods for extracting different accurate textural features from medical images as the primary step of our extraction method. Fisherfaces feature selection is then used to select discriminative features from the extracted textural features. Experimental results showed the high accuracy of medical image categorization with our proposed extraction method. Experiments were held on the Mammographic Image Analysis Society (MIAS) dataset.
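A minimal SOM sketch, assuming standard Kohonen updates with a Gaussian neighborhood; the ROI partitioning schemes and Fisherfaces selection described above are not reproduced, and the grid size, decay schedules, and toy "patches" are arbitrary choices.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Self-Organizing Map: prototypes on a 2-D grid are pulled towards each
    input, weighted by a Gaussian neighborhood around the best-matching unit;
    learning rate and neighborhood radius decay over training."""
    rng = np.random.default_rng(seed)
    h, w = grid
    protos = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            bmu = np.argmin(((protos - x) ** 2).sum(axis=1))      # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            neigh = np.exp(-dist2 / (2 * sigma ** 2))
            protos += lr * neigh[:, None] * (x - protos)
            step += 1
    return protos

# Toy "texture patches": random vectors standing in for ROI descriptors.
patches = np.random.default_rng(1).normal(size=(300, 16))
prototypes = train_som(patches)
print(prototypes.shape)   # (64, 16): one prototype per grid cell
```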
2205.15947
Michael Oberst
Nikolaj Thams, Michael Oberst, David Sontag
Evaluating Robustness to Dataset Shift via Parametric Robustness Sets
NeurIPS 2022; Equal Contribution by Nikolaj/Michael, order determined by coin flip
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a method for proactively identifying small, plausible shifts in distribution which lead to large differences in model performance. These shifts are defined via parametric changes in the causal mechanisms of observed variables, where constraints on parameters yield a "robustness set" of plausible distributions and a corresponding worst-case loss over the set. While the loss under an individual parametric shift can be estimated via reweighting techniques such as importance sampling, the resulting worst-case optimization problem is non-convex, and the estimate may suffer from large variance. For small shifts, however, we can construct a local second-order approximation to the loss under shift and cast the problem of finding a worst-case shift as a particular non-convex quadratic optimization problem, for which efficient algorithms are available. We demonstrate that this second-order approximation can be estimated directly for shifts in conditional exponential family models, and we bound the approximation error. We apply our approach to a computer vision task (classifying gender from images), revealing sensitivity to shifts in non-causal attributes.
[ { "created": "Tue, 31 May 2022 16:44:18 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2022 15:11:48 GMT", "version": "v2" }, { "created": "Sun, 23 Oct 2022 16:27:18 GMT", "version": "v3" }, { "created": "Sun, 15 Jan 2023 05:43:40 GMT", "version": "v4" } ]
2023-01-18
[ [ "Thams", "Nikolaj", "" ], [ "Oberst", "Michael", "" ], [ "Sontag", "David", "" ] ]
We give a method for proactively identifying small, plausible shifts in distribution which lead to large differences in model performance. These shifts are defined via parametric changes in the causal mechanisms of observed variables, where constraints on parameters yield a "robustness set" of plausible distributions and a corresponding worst-case loss over the set. While the loss under an individual parametric shift can be estimated via reweighting techniques such as importance sampling, the resulting worst-case optimization problem is non-convex, and the estimate may suffer from large variance. For small shifts, however, we can construct a local second-order approximation to the loss under shift and cast the problem of finding a worst-case shift as a particular non-convex quadratic optimization problem, for which efficient algorithms are available. We demonstrate that this second-order approximation can be estimated directly for shifts in conditional exponential family models, and we bound the approximation error. We apply our approach to a computer vision task (classifying gender from images), revealing sensitivity to shifts in non-causal attributes.
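A small sketch of the reweighting idea mentioned above: estimating the expected loss under a parametric shift (here, a Gaussian mean shift in one observed variable) via self-normalized importance sampling. The data-generating process, model, and shift family are invented for illustration; the paper's second-order approximation and worst-case optimization over the robustness set are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x ~ N(0, 1), a noisy label, and a fixed thresholding "model".
x = rng.normal(0.0, 1.0, 5000)
y = (x + rng.normal(0.0, 0.5, 5000) > 0).astype(float)
pred = (x > 0.2).astype(float)
loss = (pred != y).astype(float)          # 0-1 loss per sample

def loss_under_mean_shift(delta, mu=0.0, sd=1.0):
    """Self-normalized importance-sampling estimate of E[loss] when the
    mechanism for x shifts from N(mu, sd) to N(mu + delta, sd)."""
    # density ratio N(x; mu+delta, sd) / N(x; mu, sd); constants cancel
    log_w = (-(x - mu - delta) ** 2 + (x - mu) ** 2) / (2 * sd ** 2)
    w = np.exp(log_w)
    return np.sum(w * loss) / np.sum(w)

for d in (-0.5, 0.0, 0.5, 1.0):
    print(d, round(loss_under_mean_shift(d), 3))
```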
1810.09878
Tara Salman
Deval Bhamare, Tara Salman, Mohammed Samaka, Aiman Erbad, Raj Jain
Feasibility of Supervised Machine Learning for Cloud Security
null
2016 International Conference on Information Science and Security (ICISS)
10.1109/ICISSEC.2016.7885853
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud computing is gaining significant attention; however, security is the biggest hurdle to its wide acceptance. Users of cloud services are under constant fear of data loss, security threats and availability issues. Recently, learning-based methods for security applications have been gaining popularity in the literature with the advances in machine learning techniques. However, the major challenge in these methods is obtaining real-time and unbiased datasets. Many datasets are internal and cannot be shared due to privacy issues, or may lack certain statistical characteristics. As a result, researchers prefer to generate datasets for training and testing purposes in simulated or closed experimental environments, which may lack comprehensiveness. Machine learning models trained on such a single dataset generally result in a semantic gap between results and their application. There is a dearth of research work demonstrating the effectiveness of these models across multiple datasets obtained in different environments. We argue that it is necessary to test the robustness of machine learning models, especially under diversified operating conditions, which are prevalent in cloud scenarios. In this work, we use the UNSW dataset to train supervised machine learning models. We then test these models on the ISOT dataset. We present our results and argue that more research in the field of machine learning is still required for its applicability to cloud security.
[ { "created": "Tue, 23 Oct 2018 14:23:43 GMT", "version": "v1" } ]
2018-10-24
[ [ "Bhamare", "Deval", "" ], [ "Salman", "Tara", "" ], [ "Samaka", "Mohammed", "" ], [ "Erbad", "Aiman", "" ], [ "Jain", "Raj", "" ] ]
Cloud computing is gaining significant attention; however, security is the biggest hurdle to its wide acceptance. Users of cloud services are under constant fear of data loss, security threats and availability issues. Recently, learning-based methods for security applications have been gaining popularity in the literature with the advances in machine learning techniques. However, the major challenge in these methods is obtaining real-time and unbiased datasets. Many datasets are internal and cannot be shared due to privacy issues, or may lack certain statistical characteristics. As a result, researchers prefer to generate datasets for training and testing purposes in simulated or closed experimental environments, which may lack comprehensiveness. Machine learning models trained on such a single dataset generally result in a semantic gap between results and their application. There is a dearth of research work demonstrating the effectiveness of these models across multiple datasets obtained in different environments. We argue that it is necessary to test the robustness of machine learning models, especially under diversified operating conditions, which are prevalent in cloud scenarios. In this work, we use the UNSW dataset to train supervised machine learning models. We then test these models on the ISOT dataset. We present our results and argue that more research in the field of machine learning is still required for its applicability to cloud security.
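A minimal cross-dataset evaluation sketch in the spirit of "train on UNSW, test on ISOT": the arrays below are random placeholders standing in for the two datasets after they have been mapped to a common feature schema, and the classifier choice is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
# Placeholders for two intrusion datasets collected in different environments.
X_train, y_train = rng.normal(size=(2000, 10)), rng.integers(0, 2, 2000)
X_test, y_test = rng.normal(0.3, 1.2, size=(500, 10)), rng.integers(0, 2, 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
for name, X, y in [("in-domain", X_train, y_train), ("cross-dataset", X_test, y_test)]:
    p = clf.predict(X)
    print(name, round(accuracy_score(y, p), 3), round(f1_score(y, p), 3))
```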
1210.6855
Michal \v{C}\'ap
Michal \v{C}\'ap and Peter Nov\'ak and Ji\v{r}\'i Vok\v{r}\'inek and Michal P\v{e}chou\v{c}ek
Asynchronous Decentralized Algorithm for Space-Time Cooperative Pathfinding
null
Spatio-Temporal Dynamics (STeDy 2012). Editors: Mehul Bhatt, Hans Guesgen, and Ernest Davis. Workshop Proceedings of the European Conference on Articial Intelligence (ECAI 2012), Montpellier, France
null
null
cs.AI cs.DC cs.RO
http://creativecommons.org/licenses/by-nc-sa/3.0/
Cooperative pathfinding is a multi-agent path planning problem where a group of vehicles searches for a corresponding set of non-conflicting space-time trajectories. Many of the practical methods for centralized solving of cooperative pathfinding problems are based on the prioritized planning strategy. However, in some domains (e.g., multi-robot teams of unmanned aerial vehicles, autonomous underwater vehicles, or unmanned ground vehicles) a decentralized approach may be more desirable than a centralized one due to communication limitations imposed by the domain and/or privacy concerns. In this paper we present an asynchronous decentralized variant of prioritized planning, ADPP, and its interruptible version, IADPP. The algorithm exploits the inherent parallelism of distributed systems and allows for a speed-up of the computation process. Unlike synchronized planning approaches, the algorithm allows an agent to react immediately to updates about other agents' paths and to invoke its local spatio-temporal path planner to find the best trajectory in response to the other agents' choices. We provide a proof of correctness of the algorithms and experimentally evaluate them on synthetic domains.
[ { "created": "Thu, 25 Oct 2012 14:35:27 GMT", "version": "v1" } ]
2012-10-26
[ [ "Čáp", "Michal", "" ], [ "Novák", "Peter", "" ], [ "Vokřínek", "Jiří", "" ], [ "Pěchouček", "Michal", "" ] ]
Cooperative pathfinding is a multi-agent path planning problem where a group of vehicles searches for a corresponding set of non-conflicting space-time trajectories. Many of the practical methods for centralized solving of cooperative pathfinding problems are based on the prioritized planning strategy. However, in some domains (e.g., multi-robot teams of unmanned aerial vehicles, autonomous underwater vehicles, or unmanned ground vehicles) a decentralized approach may be more desirable than a centralized one due to communication limitations imposed by the domain and/or privacy concerns. In this paper we present an asynchronous decentralized variant of prioritized planning, ADPP, and its interruptible version, IADPP. The algorithm exploits the inherent parallelism of distributed systems and allows for a speed-up of the computation process. Unlike synchronized planning approaches, the algorithm allows an agent to react immediately to updates about other agents' paths and to invoke its local spatio-temporal path planner to find the best trajectory in response to the other agents' choices. We provide a proof of correctness of the algorithms and experimentally evaluate them on synthetic domains.
1910.09036
Aude Genevay
Aude Genevay, Gabriel Dulac-Arnold, Jean-Philippe Vert
Differentiable Deep Clustering with Cluster Size Constraints
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering is a fundamental unsupervised learning approach. Many clustering algorithms -- such as $k$-means -- rely on the Euclidean distance as a similarity measure, which is often not the most relevant metric for high-dimensional data such as images. Learning a lower-dimensional embedding that can better reflect the geometry of the dataset is therefore instrumental for performance. We propose a new approach for this task where the embedding is performed by a differentiable model such as a deep neural network. By rewriting the $k$-means clustering algorithm as an optimal transport task, and adding an entropic regularization, we derive a fully differentiable loss function that can be minimized with respect to both the embedding parameters and the cluster parameters via stochastic gradient descent. We show that this new formulation generalizes a recently proposed state-of-the-art method based on soft-$k$-means by adding constraints on the cluster sizes. Empirical evaluations on image classification benchmarks suggest that compared to state-of-the-art methods, our optimal transport-based approach provides better unsupervised accuracy and does not require a pre-training phase.
[ { "created": "Sun, 20 Oct 2019 17:54:45 GMT", "version": "v1" } ]
2019-10-22
[ [ "Genevay", "Aude", "" ], [ "Dulac-Arnold", "Gabriel", "" ], [ "Vert", "Jean-Philippe", "" ] ]
Clustering is a fundamental unsupervised learning approach. Many clustering algorithms -- such as $k$-means -- rely on the Euclidean distance as a similarity measure, which is often not the most relevant metric for high-dimensional data such as images. Learning a lower-dimensional embedding that can better reflect the geometry of the dataset is therefore instrumental for performance. We propose a new approach for this task where the embedding is performed by a differentiable model such as a deep neural network. By rewriting the $k$-means clustering algorithm as an optimal transport task, and adding an entropic regularization, we derive a fully differentiable loss function that can be minimized with respect to both the embedding parameters and the cluster parameters via stochastic gradient descent. We show that this new formulation generalizes a recently proposed state-of-the-art method based on soft-$k$-means by adding constraints on the cluster sizes. Empirical evaluations on image classification benchmarks suggest that compared to state-of-the-art methods, our optimal transport-based approach provides better unsupervised accuracy and does not require a pre-training phase.
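A hedged sketch of the underlying assignment step: entropy-regularized optimal transport (Sinkhorn iterations) between points and fixed centroids with prescribed cluster proportions. This is not the paper's end-to-end differentiable pipeline; the regularization strength, iteration count, and toy data are arbitrary choices.

```python
import numpy as np

def sinkhorn_plan(X, centroids, cluster_weights, eps=0.5, iters=500):
    """Entropy-regularized OT between points (uniform mass) and centroids
    (mass given by the desired cluster proportions), via Sinkhorn iterations.
    Each row of the returned plan, rescaled by n, is a soft assignment."""
    C = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)   # n x k squared distances
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))
    b = np.asarray(cluster_weights, float)                       # sums to 1
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(5, 1, (40, 2))])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
P = sinkhorn_plan(X, centroids, cluster_weights=[0.6, 0.4])
print(P.sum(axis=0))   # column masses match the prescribed cluster sizes
```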
0704.1068
Leo Liberti
Giacomo Nannicini, Philippe Baptiste, Gilles Barbier, Daniel Krob, Leo Liberti
Fast paths in large-scale dynamic road networks
12 pages, 4 figures
null
null
null
cs.NI cs.DS
null
Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
[ { "created": "Mon, 9 Apr 2007 07:04:19 GMT", "version": "v1" }, { "created": "Wed, 27 Jun 2007 18:17:35 GMT", "version": "v2" } ]
2007-06-27
[ [ "Nannicini", "Giacomo", "" ], [ "Baptiste", "Philippe", "" ], [ "Barbier", "Gilles", "" ], [ "Krob", "Daniel", "" ], [ "Liberti", "Leo", "" ] ]
Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
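A simple query-side sketch: Dijkstra's algorithm over a road graph in which some edge weights are callables of the arrival time at the edge's tail (dynamic traffic information) while others are static numbers. The paper's highway-hierarchy machinery is not shown, and FIFO behavior of the dynamic edges is assumed.

```python
import heapq

def fastest_path(graph, source, target, depart=0.0):
    """Dijkstra with edge weights that may depend on the arrival time."""
    dist = {source: depart}
    pq = [(depart, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t - depart
        if t > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            travel = w(t) if callable(w) else w    # dynamic vs. static edge
            nt = t + travel
            if nt < dist.get(v, float("inf")):
                dist[v] = nt
                heapq.heappush(pq, (nt, v))
    return float("inf")

rush_hour = lambda t: 4.0 if 8 <= t <= 10 else 1.0   # dynamic edge weight
graph = {"A": [("B", 2.0), ("C", rush_hour)],
         "B": [("D", 2.0)],
         "C": [("D", 1.0)]}
print(fastest_path(graph, "A", "D", depart=9.0))   # 4.0 via B while the C-road is congested
```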
1112.2113
Varun Raj Kompella
Varun Raj Kompella, Matthew Luciw and Juergen Schmidhuber
Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams
null
Neural Computation, 2012, Vol. 24, No. 11, Pages 2994-3024
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails.
[ { "created": "Fri, 9 Dec 2011 15:01:25 GMT", "version": "v1" } ]
2012-10-11
[ [ "Kompella", "Varun Raj", "" ], [ "Luciw", "Matthew", "" ], [ "Schmidhuber", "Juergen", "" ] ]
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails.
1907.10247
Yijie Guo
Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow. Recent work demonstrated that using a memory buffer of previous successful trajectories can result in more effective policies. However, existing methods may overly exploit past successful experiences, which can encourage the agent to adopt sub-optimal and myopic behaviors. In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer. Our method allows the agent to reach diverse regions in the state space and improve upon the past trajectories to reach new states. We empirically show that our approach significantly outperforms count-based exploration methods (parametric approach) and self-imitation learning (parametric approach with non-parametric memory) on various complex tasks with local optima. In particular, without using expert demonstrations or resetting to arbitrary states, we achieve state-of-the-art scores within five billion frames on challenging Atari games such as Montezuma's Revenge and Pitfall.
[ { "created": "Wed, 24 Jul 2019 05:46:27 GMT", "version": "v1" }, { "created": "Wed, 20 Nov 2019 00:41:38 GMT", "version": "v2" }, { "created": "Mon, 15 Feb 2021 03:53:20 GMT", "version": "v3" } ]
2021-02-16
[ [ "Guo", "Yijie", "" ], [ "Choi", "Jongwook", "" ], [ "Moczulski", "Marcin", "" ], [ "Feng", "Shengyu", "" ], [ "Bengio", "Samy", "" ], [ "Norouzi", "Mohammad", "" ], [ "Lee", "Honglak", "" ] ]
Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow. Recent work demonstrated that using a memory buffer of previous successful trajectories can result in more effective policies. However, existing methods may overly exploit past successful experiences, which can encourage the agent to adopt sub-optimal and myopic behaviors. In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer. Our method allows the agent to reach diverse regions in the state space and improve upon the past trajectories to reach new states. We empirically show that our approach significantly outperforms count-based exploration methods (parametric approach) and self-imitation learning (parametric approach with non-parametric memory) on various complex tasks with local optima. In particular, without using expert demonstrations or resetting to arbitrary states, we achieve state-of-the-art scores within five billion frames on challenging Atari games such as Montezuma's Revenge and Pitfall.
2401.08930
Haorui Ji
Haorui Ji, Hongdong Li
3D Human Pose Analysis via Diffusion Synthesis
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion models have demonstrated remarkable success in generative modeling. In this paper, we propose PADS (Pose Analysis by Diffusion Synthesis), a novel framework designed to address various challenges in 3D human pose analysis through a unified pipeline. Central to PADS are two distinctive strategies: i) learning a task-agnostic pose prior using a diffusion synthesis process to effectively capture the kinematic constraints in human pose data, and ii) unifying multiple pose analysis tasks like estimation, completion, denoising, etc, as instances of inverse problems. The learned pose prior will be treated as a regularization imposing on task-specific constraints, guiding the optimization process through a series of conditional denoising steps. PADS represents the first diffusion-based framework for tackling general 3D human pose analysis within the inverse problem framework. Its performance has been validated on different benchmarks, signaling the adaptability and robustness of this pipeline.
[ { "created": "Wed, 17 Jan 2024 02:59:34 GMT", "version": "v1" } ]
2024-01-18
[ [ "Ji", "Haorui", "" ], [ "Li", "Hongdong", "" ] ]
Diffusion models have demonstrated remarkable success in generative modeling. In this paper, we propose PADS (Pose Analysis by Diffusion Synthesis), a novel framework designed to address various challenges in 3D human pose analysis through a unified pipeline. Central to PADS are two distinctive strategies: i) learning a task-agnostic pose prior using a diffusion synthesis process to effectively capture the kinematic constraints in human pose data, and ii) unifying multiple pose analysis tasks like estimation, completion, denoising, etc, as instances of inverse problems. The learned pose prior will be treated as a regularization imposing on task-specific constraints, guiding the optimization process through a series of conditional denoising steps. PADS represents the first diffusion-based framework for tackling general 3D human pose analysis within the inverse problem framework. Its performance has been validated on different benchmarks, signaling the adaptability and robustness of this pipeline.
2309.04802
Qingtian Bian
Qingtian Bian, Jiaxing Xu, Hui Fang, Yiping Ke
CPMR: Context-Aware Incremental Sequential Recommendation with Pseudo-Multi-Task Learning
Accepted by CIKM 2023. Alias: "Modeling Context-Aware Temporal Dynamics via Pseudo-Multi-Task Learning"
ACM International Conference on Information and Knowledge Management(CIKM '23), October 21-25,2023,Birmingham,United Kingdom
10.1145/3583780.3615512
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motivations of users to make interactions can be divided into static preference and dynamic interest. To accurately model user representations over time, recent studies in sequential recommendation utilize information propagation and evolution to mine from batches of arriving interactions. However, they ignore the fact that people are easily influenced by the recent actions of other users in the contextual scenario, and applying evolution across all historical interactions dilutes the importance of recent ones, thus failing to model the evolution of dynamic interest accurately. To address this issue, we propose a Context-Aware Pseudo-Multi-Task Recommender System (CPMR) to model the evolution in both historical and contextual scenarios by creating three representations for each user and item under different dynamics: static embedding, historical temporal states, and contextual temporal states. To dually improve the performance of temporal states evolution and incremental recommendation, we design a Pseudo-Multi-Task Learning (PMTL) paradigm by stacking the incremental single-target recommendations into one multi-target task for joint optimization. Within the PMTL paradigm, CPMR employs a shared-bottom network to conduct the evolution of temporal states across historical and contextual scenarios, as well as the fusion of them at the user-item level. In addition, CPMR incorporates one real tower for incremental predictions, and two pseudo towers dedicated to updating the respective temporal states based on new batches of interactions. Experimental results on four benchmark recommendation datasets show that CPMR consistently outperforms state-of-the-art baselines and achieves significant gains on three of them. The code is available at: https://github.com/DiMarzioBian/CPMR.
[ { "created": "Sat, 9 Sep 2023 14:07:11 GMT", "version": "v1" }, { "created": "Thu, 14 Sep 2023 02:31:12 GMT", "version": "v2" }, { "created": "Sat, 16 Sep 2023 08:52:00 GMT", "version": "v3" } ]
2023-09-19
[ [ "Bian", "Qingtian", "" ], [ "Xu", "Jiaxing", "" ], [ "Fang", "Hui", "" ], [ "Ke", "Yiping", "" ] ]
The motivations of users to make interactions can be divided into static preference and dynamic interest. To accurately model user representations over time, recent studies in sequential recommendation utilize information propagation and evolution to mine from batches of arriving interactions. However, they ignore the fact that people are easily influenced by the recent actions of other users in the contextual scenario, and applying evolution across all historical interactions dilutes the importance of recent ones, thus failing to model the evolution of dynamic interest accurately. To address this issue, we propose a Context-Aware Pseudo-Multi-Task Recommender System (CPMR) to model the evolution in both historical and contextual scenarios by creating three representations for each user and item under different dynamics: static embedding, historical temporal states, and contextual temporal states. To dually improve the performance of temporal states evolution and incremental recommendation, we design a Pseudo-Multi-Task Learning (PMTL) paradigm by stacking the incremental single-target recommendations into one multi-target task for joint optimization. Within the PMTL paradigm, CPMR employs a shared-bottom network to conduct the evolution of temporal states across historical and contextual scenarios, as well as the fusion of them at the user-item level. In addition, CPMR incorporates one real tower for incremental predictions, and two pseudo towers dedicated to updating the respective temporal states based on new batches of interactions. Experimental results on four benchmark recommendation datasets show that CPMR consistently outperforms state-of-the-art baselines and achieves significant gains on three of them. The code is available at: https://github.com/DiMarzioBian/CPMR.
2306.09244
Zijian Zhou
Zijian Zhou, Oluwatosin Alabi, Meng Wei, Tom Vercauteren, Miaojing Shi
Text Promptable Surgical Instrument Segmentation with Vision-Language Models
NeurIPS 2023
https://proceedings.neurips.cc/paper_files/paper/2023/hash/5af741d487c5f0b08bfe56e11d1883e4-Abstract-Conference.html
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel text promptable surgical instrument segmentation approach to overcome challenges associated with diversity and differentiation of surgical instruments in minimally invasive surgeries. We redefine the task as text promptable, thereby enabling a more nuanced comprehension of surgical instruments and adaptability to new instrument types. Inspired by recent advancements in vision-language models, we leverage pretrained image and text encoders as our model backbone and design a text promptable mask decoder consisting of attention- and convolution-based prompting schemes for surgical instrument segmentation prediction. Our model leverages multiple text prompts for each surgical instrument through a new mixture of prompts mechanism, resulting in enhanced segmentation performance. Additionally, we introduce a hard instrument area reinforcement module to improve image feature comprehension and segmentation precision. Extensive experiments on several surgical instrument segmentation datasets demonstrate our model's superior performance and promising generalization capability. To our knowledge, this is the first implementation of a promptable approach to surgical instrument segmentation, offering significant potential for practical application in the field of robotic-assisted surgery. Code is available at https://github.com/franciszzj/TP-SIS.
[ { "created": "Thu, 15 Jun 2023 16:26:20 GMT", "version": "v1" }, { "created": "Sun, 29 Oct 2023 10:07:43 GMT", "version": "v2" }, { "created": "Wed, 8 Nov 2023 15:36:17 GMT", "version": "v3" } ]
2024-06-05
[ [ "Zhou", "Zijian", "" ], [ "Alabi", "Oluwatosin", "" ], [ "Wei", "Meng", "" ], [ "Vercauteren", "Tom", "" ], [ "Shi", "Miaojing", "" ] ]
In this paper, we propose a novel text promptable surgical instrument segmentation approach to overcome challenges associated with diversity and differentiation of surgical instruments in minimally invasive surgeries. We redefine the task as text promptable, thereby enabling a more nuanced comprehension of surgical instruments and adaptability to new instrument types. Inspired by recent advancements in vision-language models, we leverage pretrained image and text encoders as our model backbone and design a text promptable mask decoder consisting of attention- and convolution-based prompting schemes for surgical instrument segmentation prediction. Our model leverages multiple text prompts for each surgical instrument through a new mixture of prompts mechanism, resulting in enhanced segmentation performance. Additionally, we introduce a hard instrument area reinforcement module to improve image feature comprehension and segmentation precision. Extensive experiments on several surgical instrument segmentation datasets demonstrate our model's superior performance and promising generalization capability. To our knowledge, this is the first implementation of a promptable approach to surgical instrument segmentation, offering significant potential for practical application in the field of robotic-assisted surgery. Code is available at https://github.com/franciszzj/TP-SIS.
2303.15198
Pengwei Liang
Pengwei Liang, Junjun Jiang, Xianming Liu, Jiayi Ma
Image Deblurring by Exploring In-depth Properties of Transformer
accept by IEEE Transactions on Neural Networks and Learning Systems
IEEE Transactions on Neural Networks and Learning Systems 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image deblurring continues to achieve impressive performance with the development of generative models. Nonetheless, there still remains a displeasing problem if one wants to improve perceptual quality and quantitative scores of the recovered image at the same time. In this study, drawing inspiration from the research of transformer properties, we introduce pretrained transformers to address this problem. In particular, we leverage deep features extracted from a pretrained vision transformer (ViT) to encourage recovered images to be sharp without sacrificing the performance measured by the quantitative metrics. The pretrained transformer can capture the global topological relations (i.e., self-similarity) of the image, and we observe that the captured topological relations about the sharp image will change when blur occurs. By comparing the transformer features between the recovered image and the target one, the pretrained transformer provides high-resolution blur-sensitive semantic information, which is critical in measuring the sharpness of the deblurred image. On the basis of these advantages, we present two types of novel perceptual losses to guide image deblurring. One regards the features as vectors and computes the discrepancy between representations extracted from the recovered image and the target one in Euclidean space. The other type considers the features extracted from an image as a distribution and compares the distribution discrepancy between the recovered image and the target one. We demonstrate the effectiveness of transformer properties in improving the perceptual quality while not sacrificing the quantitative scores (PSNR) over the most competitive models, such as Uformer, Restormer, and NAFNet, on defocus deblurring and motion deblurring tasks.
[ { "created": "Fri, 24 Mar 2023 14:14:25 GMT", "version": "v1" }, { "created": "Sat, 27 Jan 2024 05:47:40 GMT", "version": "v2" } ]
2024-01-30
[ [ "Liang", "Pengwei", "" ], [ "Jiang", "Junjun", "" ], [ "Liu", "Xianming", "" ], [ "Ma", "Jiayi", "" ] ]
Image deblurring continues to achieve impressive performance with the development of generative models. Nonetheless, there still remains a displeasing problem if one wants to improve perceptual quality and quantitative scores of the recovered image at the same time. In this study, drawing inspiration from the research of transformer properties, we introduce pretrained transformers to address this problem. In particular, we leverage deep features extracted from a pretrained vision transformer (ViT) to encourage recovered images to be sharp without sacrificing the performance measured by the quantitative metrics. The pretrained transformer can capture the global topological relations (i.e., self-similarity) of the image, and we observe that the captured topological relations about the sharp image will change when blur occurs. By comparing the transformer features between the recovered image and the target one, the pretrained transformer provides high-resolution blur-sensitive semantic information, which is critical in measuring the sharpness of the deblurred image. On the basis of these advantages, we present two types of novel perceptual losses to guide image deblurring. One regards the features as vectors and computes the discrepancy between representations extracted from the recovered image and the target one in Euclidean space. The other type considers the features extracted from an image as a distribution and compares the distribution discrepancy between the recovered image and the target one. We demonstrate the effectiveness of transformer properties in improving the perceptual quality while not sacrificing the quantitative scores (PSNR) over the most competitive models, such as Uformer, Restormer, and NAFNet, on defocus deblurring and motion deblurring tasks.
1601.07279
Mikko Lauri
Mikko Lauri, Nikolay Atanasov, George J. Pappas, Risto Ritala
Myopic Policy Bounds for Information Acquisition POMDPs
8 pages, 3 figures
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of optimal control of robotic sensing systems aimed at autonomous information gathering in scenarios such as environmental monitoring, search and rescue, and surveillance and reconnaissance. The information gathering problem is formulated as a partially observable Markov decision process (POMDP) with a reward function that captures uncertainty reduction. Unlike the classical POMDP formulation, the resulting reward structure is nonlinear in the belief state and the traditional approaches do not apply directly. Instead of developing a new approximation algorithm, we show that if attention is restricted to a class of problems with certain structural properties, one can derive (often tight) upper and lower bounds on the optimal policy via an efficient myopic computation. These policy bounds can be applied in conjunction with an online branch-and-bound algorithm to accelerate the computation of the optimal policy. We obtain informative lower and upper policy bounds with low computational effort in a target tracking domain. The performance of branch-and-bounding is demonstrated and compared with exact value iteration.
[ { "created": "Wed, 27 Jan 2016 07:10:06 GMT", "version": "v1" } ]
2016-01-28
[ [ "Lauri", "Mikko", "" ], [ "Atanasov", "Nikolay", "" ], [ "Pappas", "George J.", "" ], [ "Ritala", "Risto", "" ] ]
This paper addresses the problem of optimal control of robotic sensing systems aimed at autonomous information gathering in scenarios such as environmental monitoring, search and rescue, and surveillance and reconnaissance. The information gathering problem is formulated as a partially observable Markov decision process (POMDP) with a reward function that captures uncertainty reduction. Unlike the classical POMDP formulation, the resulting reward structure is nonlinear in the belief state and the traditional approaches do not apply directly. Instead of developing a new approximation algorithm, we show that if attention is restricted to a class of problems with certain structural properties, one can derive (often tight) upper and lower bounds on the optimal policy via an efficient myopic computation. These policy bounds can be applied in conjunction with an online branch-and-bound algorithm to accelerate the computation of the optimal policy. We obtain informative lower and upper policy bounds with low computational effort in a target tracking domain. The performance of branch-and-bounding is demonstrated and compared with exact value iteration.
2404.01701
Tristan Ratz
Marcel Nawrath, Agnieszka Nowak, Tristan Ratz, Danilo C. Walenta, Juri Opitz, Leonardo F. R. Ribeiro, Jo\~ao Sedoc, Daniel Deutsch, Simon Mille, Yixin Liu, Lining Zhang, Sebastian Gehrmann, Saad Mahamood, Miruna Clinciu, Khyathi Chandu, Yufang Hou
On the Role of Summary Content Units in Text Summarization Evaluation
10 Pages, 3 Figures, 3 Tables, camera ready version accepted at NAACL 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategies to approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries.
[ { "created": "Tue, 2 Apr 2024 07:09:44 GMT", "version": "v1" } ]
2024-04-03
[ [ "Nawrath", "Marcel", "" ], [ "Nowak", "Agnieszka", "" ], [ "Ratz", "Tristan", "" ], [ "Walenta", "Danilo C.", "" ], [ "Opitz", "Juri", "" ], [ "Ribeiro", "Leonardo F. R.", "" ], [ "Sedoc", "João", "" ], [ "Deutsch", "Daniel", "" ], [ "Mille", "Simon", "" ], [ "Liu", "Yixin", "" ], [ "Zhang", "Lining", "" ], [ "Gehrmann", "Sebastian", "" ], [ "Mahamood", "Saad", "" ], [ "Clinciu", "Miruna", "" ], [ "Chandu", "Khyathi", "" ], [ "Hou", "Yufang", "" ] ]
At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategies to approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries.
1909.06228
S VenkataKeerthy
S. VenkataKeerthy, Rohit Aggarwal, Shalini Jain, Maunendra Sankar Desarkar, Ramakrishna Upadrasta and Y. N. Srikant
IR2Vec: LLVM IR based Scalable Program Embeddings
Accepted in ACM TACO
null
10.1145/3418463
null
cs.PL cs.LG cs.NE cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose IR2Vec, a Concise and Scalable encoding infrastructure to represent programs as a distributed embedding in continuous space. This distributed embedding is obtained by combining representation learning methods with flow information to capture the syntax as well as the semantics of the input programs. As our infrastructure is based on the Intermediate Representation (IR) of the source code, obtained embeddings are both language and machine independent. The entities of the IR are modeled as relationships, and their representations are learned to form a seed embedding vocabulary. Using this infrastructure, we propose two incremental encodings: Symbolic and Flow-Aware. Symbolic encodings are obtained from the seed embedding vocabulary, and Flow-Aware encodings are obtained by augmenting the Symbolic encodings with the flow information. We show the effectiveness of our methodology on two optimization tasks (Heterogeneous device mapping and Thread coarsening). Our way of representing the programs enables us to use non-sequential models, resulting in orders of magnitude faster training time. Both the encodings generated by IR2Vec outperform the existing methods in both tasks, even while using simple machine learning models. In particular, our results improve or match the state-of-the-art speedup in 11/14 benchmark-suites in the device mapping task across two platforms and 53/68 benchmarks in the Thread coarsening task across four different platforms. When compared to the other methods, our embeddings are more scalable, non-data-hungry, and have better Out-Of-Vocabulary (OOV) characteristics.
[ { "created": "Fri, 13 Sep 2019 13:41:40 GMT", "version": "v1" }, { "created": "Wed, 1 Jan 2020 06:22:25 GMT", "version": "v2" }, { "created": "Tue, 1 Sep 2020 09:24:01 GMT", "version": "v3" } ]
2020-12-25
[ [ "VenkataKeerthy", "S.", "" ], [ "Aggarwal", "Rohit", "" ], [ "Jain", "Shalini", "" ], [ "Desarkar", "Maunendra Sankar", "" ], [ "Upadrasta", "Ramakrishna", "" ], [ "Srikant", "Y. N.", "" ] ]
We propose IR2Vec, a Concise and Scalable encoding infrastructure to represent programs as a distributed embedding in continuous space. This distributed embedding is obtained by combining representation learning methods with flow information to capture the syntax as well as the semantics of the input programs. As our infrastructure is based on the Intermediate Representation (IR) of the source code, obtained embeddings are both language and machine independent. The entities of the IR are modeled as relationships, and their representations are learned to form a seed embedding vocabulary. Using this infrastructure, we propose two incremental encodings: Symbolic and Flow-Aware. Symbolic encodings are obtained from the seed embedding vocabulary, and Flow-Aware encodings are obtained by augmenting the Symbolic encodings with the flow information. We show the effectiveness of our methodology on two optimization tasks (Heterogeneous device mapping and Thread coarsening). Our way of representing the programs enables us to use non-sequential models, resulting in orders of magnitude faster training time. Both the encodings generated by IR2Vec outperform the existing methods in both tasks, even while using simple machine learning models. In particular, our results improve or match the state-of-the-art speedup in 11/14 benchmark-suites in the device mapping task across two platforms and 53/68 benchmarks in the Thread coarsening task across four different platforms. When compared to the other methods, our embeddings are more scalable, non-data-hungry, and have better Out-Of-Vocabulary (OOV) characteristics.
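A loose sketch of a symbolic, flow-free encoding in the IR2Vec spirit: each instruction is a weighted sum of seed embeddings for its opcode, type, and arguments, and a program is the sum of its instruction vectors. The seed vectors here are random stand-ins (the real ones are learned from relations over the LLVM IR), and the weights, token set, and toy IR are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# Stand-in seed embedding vocabulary; in IR2Vec these vectors are learned
# from entity/relation triples over the IR rather than sampled at random.
seed = {tok: rng.normal(size=DIM) for tok in
        ["add", "mul", "load", "store", "int", "ptr", "var", "const"]}

W_OP, W_TY, W_ARG = 1.0, 0.5, 0.2   # illustrative combination weights

def instruction_vec(opcode, ty, args):
    v = W_OP * seed[opcode] + W_TY * seed[ty]
    for a in args:
        v = v + W_ARG * seed[a]
    return v

def program_vec(instructions):
    """Symbolic encoding: sum of instruction vectors (no flow information)."""
    return np.sum([instruction_vec(*ins) for ins in instructions], axis=0)

toy_ir = [("load", "int", ["ptr"]),
          ("add", "int", ["var", "const"]),
          ("store", "int", ["var", "ptr"])]
print(program_vec(toy_ir).shape)   # (8,)
```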
1903.01392
Sherif Abdulatif
Karim Armanious, Sherif Abdulatif, Fady Aziz, Urs Schneider, Bin Yang
An Adversarial Super-Resolution Remedy for Radar Design Trade-offs
Accepted in EUSIPCO 2019, 5 pages
null
10.23919/EUSIPCO.2019.8902510
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radar is of vital importance in many fields, such as autonomous driving, safety and surveillance applications. However, it suffers from stringent constraints on its design parametrization leading to multiple trade-offs. For example, the bandwidth in FMCW radars is inversely proportional with both the maximum unambiguous range and range resolution. In this work, we introduce a new method for circumventing radar design trade-offs. We propose the use of recent advances in computer vision, more specifically generative adversarial networks (GANs), to enhance low-resolution radar acquisitions into higher resolution counterparts while maintaining the advantages of the low-resolution parametrization. The capability of the proposed method was evaluated on the velocity resolution and range-azimuth trade-offs in micro-Doppler signatures and FMCW uniform linear array (ULA) radars, respectively.
[ { "created": "Mon, 4 Mar 2019 17:41:26 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2019 16:23:55 GMT", "version": "v2" } ]
2019-11-26
[ [ "Armanious", "Karim", "" ], [ "Abdulatif", "Sherif", "" ], [ "Aziz", "Fady", "" ], [ "Schneider", "Urs", "" ], [ "Yang", "Bin", "" ] ]
Radar is of vital importance in many fields, such as autonomous driving, safety and surveillance applications. However, it suffers from stringent constraints on its design parametrization leading to multiple trade-offs. For example, the bandwidth in FMCW radars is inversely proportional with both the maximum unambiguous range and range resolution. In this work, we introduce a new method for circumventing radar design trade-offs. We propose the use of recent advances in computer vision, more specifically generative adversarial networks (GANs), to enhance low-resolution radar acquisitions into higher resolution counterparts while maintaining the advantages of the low-resolution parametrization. The capability of the proposed method was evaluated on the velocity resolution and range-azimuth trade-offs in micro-Doppler signatures and FMCW uniform linear array (ULA) radars, respectively.
1004.4128
Emanuel Gluskin
Emanuel Gluskin
An approximate analytical (structural) superposition in terms of two, or more, "alfa"-circuits of the same topology: Pt.1 - description of the superposition
This is my old (2005-6) manuscript. The "f-connection" is new, and thus the work may seem overly detailed, but some central proofs were difficult for me, and, needing to be sure of the good precision of the "analytical superposition", I calculated different cases. See http://www.ee.bgu.ac.il/~gluskin/ , Article no. 50, and the Conference Presentation of 2008. 25 pages, 7 figures, 1 table.
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One-ports named "f-circuits", composed of similar conductors described by a monotonic polynomial, or quasi-polynomial (i.e. with positive but not necessarily integer, powers) characteristic i = f(v) are studied, focusing on the algebraic map f --> F. Here F(.) is the input conductivity characteristic; i.e., iin = F(vin) is the input current. The "power-law" "alfa-circuit" introduced in [1], for which f(v) ~ v^"alfa", is an important particular case. By means of a generalization of a parallel connection, the f-circuits are constructed from the alfa-circuits of the same topology, with different "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. We observe and consider an associated, generally approximated, but, in all of the cases studied, always high-precision, specific superposition. This superposition is in terms of f --> F, and it means that F(.) of the connection is close to the sum of the input currents of the independent "alfa"-circuits, all connected in parallel to the same source. In other words, F(.) is well approximated by a linear combination of the same degrees of the independent variable as in f(.), i.e. the map of the characteristics f --> F is close to a linear one. This unexpected result is useful for understanding nonlinear algebraic circuits, and is missed in the classical theory. The cases of f(v) = D1v + D2v^2 and f(v) = D1v + D3v^3, are analyzed in examples. Special topologies when the superposition must be ideal, are also considered. In the second part [2] of the work the "circuit mechanism" that is responsible for the high precision of the superposition, in the most general case, will be explained.
[ { "created": "Fri, 23 Apr 2010 13:26:34 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2010 06:36:05 GMT", "version": "v2" } ]
2010-04-28
[ [ "Gluskin", "Emanuel", "" ] ]
One-ports named "f-circuits", composed of similar conductors described by a monotonic polynomial, or quasi-polynomial (i.e. with positive but not necessarily integer, powers) characteristic i = f(v) are studied, focusing on the algebraic map f --> F. Here F(.) is the input conductivity characteristic; i.e., iin = F(vin) is the input current. The "power-law" "alfa-circuit" introduced in [1], for which f(v) ~ v^"alfa", is an important particular case. By means of a generalization of a parallel connection, the f-circuits are constructed from the alfa-circuits of the same topology, with different "alfa", so that the given topology is kept, and 'f' is an additive function of the connection. We observe and consider an associated, generally approximated, but, in all of the cases studied, always high-precision, specific superposition. This superposition is in terms of f --> F, and it means that F(.) of the connection is close to the sum of the input currents of the independent "alfa"-circuits, all connected in parallel to the same source. In other words, F(.) is well approximated by a linear combination of the same degrees of the independent variable as in f(.), i.e. the map of the characteristics f --> F is close to a linear one. This unexpected result is useful for understanding nonlinear algebraic circuits, and is missed in the classical theory. The cases of f(v) = D1v + D2v^2 and f(v) = D1v + D3v^3, are analyzed in examples. Special topologies when the superposition must be ideal, are also considered. In the second part [2] of the work the "circuit mechanism" that is responsible for the high precision of the superposition, in the most general case, will be explained.
cs/0501046
Tommaso Toffoli
Tommaso Toffoli
Thermodynamics of used punched tape: A weak and a strong equivalence principle
7 pages, 8 figures
null
null
null
cs.IT math.IT
null
We study the repeated use of a monotonic recording medium--such as punched tape or photographic plate--where marks can be added at any time but never erased. (For practical purposes, also the electromagnetic "ether" falls into this class.) Our emphasis is on the case where the successive users act independently and selfishly, but not maliciously; typically, the "first user" would be a blind natural process tending to degrade the recording medium, and the "second user" a human trying to make the most of whatever capacity is left. To what extent is a length of used tape "equivalent"--for information transmission purposes--to a shorter length of virgin tape? Can we characterize a piece of used tape by an appropriate "effective length" and forget all other details? We identify two equivalence principles. The weak principle is exact, but only holds for a sequence of infinitesimal usage increments. The strong principle holds for any amount of incremental usage, but is only approximate; nonetheless, it is quite accurate even in the worst case and is virtually exact over most of the range--becoming exact in the limit of heavily used tape. The fact that strong equivalence does not hold exactly, but then it does almost exactly, comes as a bit of a surprise.
[ { "created": "Fri, 21 Jan 2005 04:17:50 GMT", "version": "v1" } ]
2007-07-13
[ [ "Toffoli", "Tommaso", "" ] ]
We study the repeated use of a monotonic recording medium--such as punched tape or photographic plate--where marks can be added at any time but never erased. (For practical purposes, also the electromagnetic "ether" falls into this class.) Our emphasis is on the case where the successive users act independently and selfishly, but not maliciously; typically, the "first user" would be a blind natural process tending to degrade the recording medium, and the "second user" a human trying to make the most of whatever capacity is left. To what extent is a length of used tape "equivalent"--for information transmission purposes--to a shorter length of virgin tape? Can we characterize a piece of used tape by an appropriate "effective length" and forget all other details? We identify two equivalence principles. The weak principle is exact, but only holds for a sequence of infinitesimal usage increments. The strong principle holds for any amount of incremental usage, but is only approximate; nonetheless, it is quite accurate even in the worst case and is virtually exact over most of the range--becoming exact in the limit of heavily used tape. The fact that strong equivalence does not hold exactly, but then it does almost exactly, comes as a bit of a surprise.
2010.12885
Tong Niu
Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong
Unsupervised Paraphrasing with Pretrained Language Models
Accepted at EMNLP 2021 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Paraphrase generation has benefited extensively from recent progress in the designing of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair (QQP) and the ParaNMT datasets and is robust to domain shift between the two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional finetuning.
[ { "created": "Sat, 24 Oct 2020 11:55:28 GMT", "version": "v1" }, { "created": "Fri, 10 Sep 2021 20:50:19 GMT", "version": "v2" } ]
2021-09-14
[ [ "Niu", "Tong", "" ], [ "Yavuz", "Semih", "" ], [ "Zhou", "Yingbo", "" ], [ "Keskar", "Nitish Shirish", "" ], [ "Wang", "Huan", "" ], [ "Xiong", "Caiming", "" ] ]
Paraphrase generation has benefited extensively from recent progress in the designing of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair (QQP) and the ParaNMT datasets and is robust to domain shift between the two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional finetuning.
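As a rough illustration of the Dynamic Blocking rule described in the abstract above, the following Python sketch applies it inside a toy greedy decoder: whenever the last emitted token also appears in the source, the token that immediately follows it in the source is masked out at the next step. The toy scoring function and all names here are illustrative assumptions, not the authors' implementation.

import numpy as np

def dynamic_blocking_mask(source_ids, last_token, vocab_size):
    # If the last emitted token occurs in the source, forbid (set to -inf) the
    # source token that immediately follows it, for the next decoding step only.
    mask = np.zeros(vocab_size)
    for i, tok in enumerate(source_ids[:-1]):
        if tok == last_token:
            mask[source_ids[i + 1]] = -np.inf
    return mask

def greedy_decode(step_logits_fn, source_ids, vocab_size, max_len=10, eos_id=0):
    out = []
    for _ in range(max_len):
        logits = step_logits_fn(out)
        if out:  # apply Dynamic Blocking based on the previously emitted token
            logits = logits + dynamic_blocking_mask(source_ids, out[-1], vocab_size)
        nxt = int(np.argmax(logits))
        if nxt == eos_id:
            break
        out.append(nxt)
    return out

# Toy "model" that prefers to copy the source verbatim; blocking forces it to diverge.
source = [5, 6, 7, 8]
vocab = 10
def copier(prefix):
    logits = np.full(vocab, -1.0)
    target = source[len(prefix)] if len(prefix) < len(source) else 0
    logits[target] = 5.0
    logits[(target + 1) % vocab] = 4.0  # second-best alternative token
    return logits

print(greedy_decode(copier, source, vocab))  # emits a surface form different from the source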
1910.02830
Viraj Prabhu
Viraj Prabhu, Anitha Kannan, Geoffrey J. Tso, Namit Katariya, Manish Chablani, David Sontag, Xavier Amatriain
Open Set Medical Diagnosis
Abbreviated version to appear at Machine Learning for Healthcare (ML4H) Workshop at NeurIPS 2019
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine-learned diagnosis models have shown promise as medical aides but are trained under a closed-set assumption, i.e. that models will only encounter conditions on which they have been trained. However, it is practically infeasible to obtain sufficient training data for every human condition, and once deployed such models will invariably face previously unseen conditions. We frame machine-learned diagnosis as an open-set learning problem, and study how state-of-the-art approaches compare. Further, we extend our study to a setting where training data is distributed across several healthcare sites that do not allow data pooling, and experiment with different strategies of building open-set diagnostic ensembles. Across both settings, we observe consistent gains from explicitly modeling unseen conditions, but find the optimal training strategy to vary across settings.
[ { "created": "Mon, 7 Oct 2019 14:45:47 GMT", "version": "v1" } ]
2019-10-08
[ [ "Prabhu", "Viraj", "" ], [ "Kannan", "Anitha", "" ], [ "Tso", "Geoffrey J.", "" ], [ "Katariya", "Namit", "" ], [ "Chablani", "Manish", "" ], [ "Sontag", "David", "" ], [ "Amatriain", "Xavier", "" ] ]
Machine-learned diagnosis models have shown promise as medical aides but are trained under a closed-set assumption, i.e. that models will only encounter conditions on which they have been trained. However, it is practically infeasible to obtain sufficient training data for every human condition, and once deployed such models will invariably face previously unseen conditions. We frame machine-learned diagnosis as an open-set learning problem, and study how state-of-the-art approaches compare. Further, we extend our study to a setting where training data is distributed across several healthcare sites that do not allow data pooling, and experiment with different strategies of building open-set diagnostic ensembles. Across both settings, we observe consistent gains from explicitly modeling unseen conditions, but find the optimal training strategy to vary across settings.
2006.10923
Amish Patel
Amish Patel and Aravind Varier
Hyperparameter Analysis for Image Captioning
10 pages, 9 figures, and 7 tables
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we perform a thorough sensitivity analysis on state-of-the-art image captioning approaches using two different architectures: CNN+LSTM and CNN+Transformer. Experiments were carried out using the Flickr8k dataset. The biggest takeaway from the experiments is that fine-tuning the CNN encoder outperforms the baseline and all other experiments carried out for both architectures.
[ { "created": "Fri, 19 Jun 2020 01:49:37 GMT", "version": "v1" } ]
2020-06-22
[ [ "Patel", "Amish", "" ], [ "Varier", "Aravind", "" ] ]
In this paper, we perform a thorough sensitivity analysis on state-of-the-art image captioning approaches using two different architectures: CNN+LSTM and CNN+Transformer. Experiments were carried out using the Flickr8k dataset. The biggest takeaway from the experiments is that fine-tuning the CNN encoder outperforms the baseline and all other experiments carried out for both architectures.
2101.02496
Manuel Lagunas
Manuel Lagunas, Ana Serrano, Diego Gutierrez, Belen Masia
The joint role of geometry and illumination on material recognition
15 pages, 16 figures, Accepted to the Journal of Vision, 2021
Journal of Vision February 2021, Vol.21, 2
10.1167/jov.21.2.2
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects that surround us and recognizing the materials they are made of. Nevertheless, understanding the underlying perceptual processes that take place to accurately discern the visual properties of an object is a long-standing problem. In this work, we perform a comprehensive and systematic analysis of how the interplay of geometry, illumination, and their spatial frequencies affects human performance on material recognition tasks. We carry out large-scale behavioral experiments where participants are asked to recognize different reference materials among a pool of candidate samples. In the different experiments, we carefully sample the information in the frequency domain of the stimuli. From our analysis, we find significant first-order interactions between the geometry and the illumination, of both the reference and the candidates. In addition, we observe that simple image statistics and higher-order image histograms do not correlate with human performance. Therefore, we perform a high-level comparison of highly non-linear statistics by training a deep neural network on material recognition tasks. Our results show that such models can accurately classify materials, which suggests that they are capable of defining a meaningful representation of material appearance from labeled proximal image data. Last, we find preliminary evidence that these highly non-linear models and humans may use similar high-level factors for material recognition tasks.
[ { "created": "Thu, 7 Jan 2021 11:29:52 GMT", "version": "v1" }, { "created": "Thu, 4 Feb 2021 12:35:25 GMT", "version": "v2" } ]
2021-02-05
[ [ "Lagunas", "Manuel", "" ], [ "Serrano", "Ana", "" ], [ "Gutierrez", "Diego", "" ], [ "Masia", "Belen", "" ] ]
Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects that surround us and recognizing the materials they are made of. Nevertheless, understanding the underlying perceptual processes that take place to accurately discern the visual properties of an object is a long-standing problem. In this work, we perform a comprehensive and systematic analysis of how the interplay of geometry, illumination, and their spatial frequencies affects human performance on material recognition tasks. We carry out large-scale behavioral experiments where participants are asked to recognize different reference materials among a pool of candidate samples. In the different experiments, we carefully sample the information in the frequency domain of the stimuli. From our analysis, we find significant first-order interactions between the geometry and the illumination, of both the reference and the candidates. In addition, we observe that simple image statistics and higher-order image histograms do not correlate with human performance. Therefore, we perform a high-level comparison of highly non-linear statistics by training a deep neural network on material recognition tasks. Our results show that such models can accurately classify materials, which suggests that they are capable of defining a meaningful representation of material appearance from labeled proximal image data. Last, we find preliminary evidence that these highly non-linear models and humans may use similar high-level factors for material recognition tasks.
1909.07208
Emna Rejaibi
Emna Rejaibi, Ali Komaty, Fabrice Meriaudeau, Said Agrebi, and Alice Othmani
MFCC-based Recurrent Neural Network for Automatic Clinical Depression Recognition and Assessment from Speech
14 pages, 7 figures, 9 tables
null
null
null
cs.HC cs.AI cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Clinical depression or Major Depressive Disorder (MDD) is a common and serious medical illness. In this paper, a deep recurrent neural network-based framework is presented to detect depression and to predict its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 scores of the Patient Health Questionnaire and the binary class of depression diagnosis. To overcome the problem of the small size of Speech Depression Recognition (SDR) datasets, expanding training labels and transferred features are considered. The proposed approach outperforms the state-of-the-art approaches on the DAIC-WOZ database with an overall accuracy of 76.27% and a root mean square error of 0.4 in assessing depression, while a root mean square error of 0.168 is achieved in predicting the depression severity levels. The proposed framework has several advantages (speed, non-invasiveness, and non-intrusiveness), which make it convenient for real-time applications. The performance of the proposed approach is evaluated in multi-modal and multi-feature experiments. MFCC-based high-level features hold relevant information related to depression. Yet, adding visual action units and various other acoustic features further boosts the classification results by 20% and 10% to reach an accuracy of 95.6% and 86%, respectively. Considering the visual-facial modality needs to be studied carefully as it raises patient privacy concerns, while adding more acoustic features increases the computation time.
[ { "created": "Mon, 16 Sep 2019 14:03:01 GMT", "version": "v1" }, { "created": "Thu, 12 Mar 2020 13:09:24 GMT", "version": "v2" } ]
2020-03-13
[ [ "Rejaibi", "Emna", "" ], [ "Komaty", "Ali", "" ], [ "Meriaudeau", "Fabrice", "" ], [ "Agrebi", "Said", "" ], [ "Othmani", "Alice", "" ] ]
Clinical depression or Major Depressive Disorder (MDD) is a common and serious medical illness. In this paper, a deep recurrent neural network-based framework is presented to detect depression and to predict its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 scores of the Patient Health Questionnaire and the binary class of depression diagnosis. To overcome the problem of the small size of Speech Depression Recognition (SDR) datasets, expanding training labels and transferred features are considered. The proposed approach outperforms the state-of-the-art approaches on the DAIC-WOZ database with an overall accuracy of 76.27% and a root mean square error of 0.4 in assessing depression, while a root mean square error of 0.168 is achieved in predicting the depression severity levels. The proposed framework has several advantages (speed, non-invasiveness, and non-intrusiveness), which make it convenient for real-time applications. The performance of the proposed approach is evaluated in multi-modal and multi-feature experiments. MFCC-based high-level features hold relevant information related to depression. Yet, adding visual action units and various other acoustic features further boosts the classification results by 20% and 10% to reach an accuracy of 95.6% and 86%, respectively. Considering the visual-facial modality needs to be studied carefully as it raises patient privacy concerns, while adding more acoustic features increases the computation time.
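As a rough illustration of the kind of model the abstract above describes, the following PyTorch sketch feeds precomputed MFCC sequences to an LSTM with two heads, one for the binary diagnosis and one for a severity score. The layer sizes, the random toy inputs, and the combined loss are assumptions for illustration only and do not reproduce the paper's architecture or results.

import torch
import torch.nn as nn

class DepressionRNN(nn.Module):
    # LSTM over MFCC frames with two heads: binary diagnosis and a severity score.
    def __init__(self, n_mfcc=40, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, num_layers=2, batch_first=True)
        self.diagnosis = nn.Linear(hidden, 2)     # depressed / not depressed
        self.severity = nn.Linear(hidden, 1)      # regression to a questionnaire score
    def forward(self, mfcc):                      # mfcc: (batch, frames, n_mfcc)
        _, (h, _) = self.lstm(mfcc)
        last = h[-1]
        return self.diagnosis(last), self.severity(last).squeeze(-1)

# Toy batch of precomputed MFCC sequences (40 coefficients over 300 frames).
mfcc = torch.randn(4, 300, 40)
labels = torch.tensor([0, 1, 1, 0])
scores = torch.tensor([3.0, 15.0, 20.0, 5.0])
model = DepressionRNN()
logits, pred_scores = model(mfcc)
loss = nn.functional.cross_entropy(logits, labels) + nn.functional.mse_loss(pred_scores, scores)
loss.backward()
print(float(loss))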
2012.10613
Piotr Antonik
Piotr Antonik, Marc Haelterman, Serge Massar
Online training for high-performance analogue readout layers in photonic reservoir computers
11 pages, 5 figures
Cognitive Computation (Volume: 9, Pages: 297-306, 11 March 2017)
10.1007/s12559-017-9459-3
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction. Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its hardware implementation is comparable to state-of-the-art digital algorithms on a series of benchmark tasks. The major bottleneck of these implementations is the readout layer, based on slow offline post-processing. Few analogue solutions have been proposed, but all suffered from a noticeable decrease in performance due to the added complexity of the setup. Methods. Here we propose the use of online training to solve these issues. We study the applicability of this method using numerical simulations of an experimentally feasible reservoir computer with an analogue readout layer. We also consider a nonlinear output layer, which would be very difficult to train with traditional methods. Results. We show numerically that online learning makes it possible to circumvent the added complexity of the analogue layer and obtain the same level of performance as with a digital layer. Conclusion. This work paves the way to high-performance fully analogue reservoir computers through the use of online training of the output layers.
[ { "created": "Sat, 19 Dec 2020 07:12:26 GMT", "version": "v1" } ]
2020-12-22
[ [ "Antonik", "Piotr", "" ], [ "Haelterman", "Marc", "" ], [ "Massar", "Serge", "" ] ]
Introduction. Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its hardware implementation is comparable to state-of-the-art digital algorithms on a series of benchmark tasks. The major bottleneck of these implementations is the readout layer, based on slow offline post-processing. Few analogue solutions have been proposed, but all suffered from a noticeable decrease in performance due to the added complexity of the setup. Methods. Here we propose the use of online training to solve these issues. We study the applicability of this method using numerical simulations of an experimentally feasible reservoir computer with an analogue readout layer. We also consider a nonlinear output layer, which would be very difficult to train with traditional methods. Results. We show numerically that online learning makes it possible to circumvent the added complexity of the analogue layer and obtain the same level of performance as with a digital layer. Conclusion. This work paves the way to high-performance fully analogue reservoir computers through the use of online training of the output layers.
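The online-training idea in the abstract above can be illustrated with a small echo-state-style simulation in which only the readout weights are updated, sample by sample, from a running error signal. Everything below (reservoir size, task, learning rate) is a toy assumption, not the authors' experimental setup.

import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 2000                          # reservoir size, number of time steps
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the reservoir stable
w_in = rng.normal(0, 0.5, N)
w_out = np.zeros(N)                      # readout weights, the only trained part
lr = 1e-2

u = rng.uniform(-1, 1, T)                # input signal
y_target = np.roll(u, 1)                 # toy task: reproduce the previous input
x = np.zeros(N)
for t in range(1, T):
    x = np.tanh(W @ x + w_in * u[t])     # fixed, untrained reservoir update
    y = w_out @ x                        # linear readout
    err = y_target[t] - y
    w_out += lr * err * x                # online (LMS-style) update, no offline post-processing

print("squared error on the last step:", err ** 2)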
1602.01516
Sina Parhizi
Sina Parhizi and Amin Khodaei
Market-based Microgrid Optimal Scheduling
Appeared in 6th IEEE International Conference on Smart Grid Communications (SmartGridComm 2015)
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an optimal scheduling model for a microgrid participating in the electricity distribution market in interaction with the Distribution Market Operator (DMO). The DMO is a concept proposed here, which administers the established electricity market at the distribution level, i.e., similar to the role of the Independent System Operator (ISO) in the wholesale electricity market, it sets electricity prices, determines the amounts of power exchanged between market participants, and interacts with the ISO. Considering a predetermined main grid power transfer to the microgrid, the microgrid scheduling problem aims at balancing power supply and demand while taking financial objectives into account. A stochastic programming method is employed to model the prevailing uncertainties in the microgrid's grid-connected and islanded operations. Numerical simulations demonstrate the application and the effectiveness of the proposed market-based microgrid scheduling model.
[ { "created": "Thu, 4 Feb 2016 00:49:07 GMT", "version": "v1" } ]
2016-02-05
[ [ "Parhizi", "Sina", "" ], [ "Khodaei", "Amin", "" ] ]
This paper presents an optimal scheduling model for a microgrid participating in the electricity distribution market in interaction with the Distribution Market Operator (DMO). The DMO is a concept proposed here, which administers the established electricity market at the distribution level, i.e., similar to the role of the Independent System Operator (ISO) in the wholesale electricity market, it sets electricity prices, determines the amounts of power exchanged between market participants, and interacts with the ISO. Considering a predetermined main grid power transfer to the microgrid, the microgrid scheduling problem aims at balancing power supply and demand while taking financial objectives into account. A stochastic programming method is employed to model the prevailing uncertainties in the microgrid's grid-connected and islanded operations. Numerical simulations demonstrate the application and the effectiveness of the proposed market-based microgrid scheduling model.
2303.01125
Xuechen Liu
Xuechen Liu, Md Sahidullah, Tomi Kinnunen
Distilling Multi-Level X-vector Knowledge for Small-footprint Speaker Verification
Submitted to Data & Knowledge Engineering at Dec. 2023. Copyright may be transferred without notice
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Even though deep speaker models have demonstrated impressive accuracy in speaker verification tasks, this often comes at the expense of increased model size and computation time, presenting challenges for deployment in resource-constrained environments. Our research focuses on addressing this limitation through the development of small footprint deep speaker embedding extraction using knowledge distillation. While previous work in this domain has concentrated on speaker embedding extraction at the utterance level, our approach involves amalgamating embeddings from different levels of the x-vector model (teacher network) to train a compact student network. The results highlight the significance of frame-level information, with the student models exhibiting a remarkable size reduction of 85%-91% compared to their teacher counterparts, depending on the size of the teacher embeddings. Notably, by concatenating teacher embeddings, we achieve student networks that maintain comparable performance to the teacher while enjoying a substantial 75% reduction in model size. These findings and insights extend to other x-vector variants, underscoring the broad applicability of our approach.
[ { "created": "Thu, 2 Mar 2023 10:09:11 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 13:37:19 GMT", "version": "v2" }, { "created": "Tue, 19 Dec 2023 23:25:01 GMT", "version": "v3" } ]
2023-12-21
[ [ "Liu", "Xuechen", "" ], [ "Sahidullah", "Md", "" ], [ "Kinnunen", "Tomi", "" ] ]
Even though deep speaker models have demonstrated impressive accuracy in speaker verification tasks, this often comes at the expense of increased model size and computation time, presenting challenges for deployment in resource-constrained environments. Our research focuses on addressing this limitation through the development of small footprint deep speaker embedding extraction using knowledge distillation. While previous work in this domain has concentrated on speaker embedding extraction at the utterance level, our approach involves amalgamating embeddings from different levels of the x-vector model (teacher network) to train a compact student network. The results highlight the significance of frame-level information, with the student models exhibiting a remarkable size reduction of 85%-91% compared to their teacher counterparts, depending on the size of the teacher embeddings. Notably, by concatenating teacher embeddings, we achieve student networks that maintain comparable performance to the teacher while enjoying a substantial 75% reduction in model size. These findings and insights extend to other x-vector variants, underscoring the broad applicability of our approach.
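A minimal sketch of the concatenated-teacher-embedding distillation discussed above: embeddings taken from several levels of a teacher are concatenated and a small student is trained to regress onto them. The random stand-in teacher embeddings, the student architecture, and the plain MSE objective are illustrative assumptions rather than the paper's exact recipe.

import torch
import torch.nn as nn

class TinyStudent(nn.Module):
    # Small student that maps input features to an embedding matching the
    # dimensionality of the concatenated teacher embeddings.
    def __init__(self, feat_dim, teacher_dims):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, sum(teacher_dims)),
        )
    def forward(self, x):
        return self.net(x.mean(dim=1))   # mean-pool over frames, then project

def distillation_loss(student_emb, teacher_embs):
    # Concatenate embeddings from different teacher levels and regress onto them.
    target = torch.cat(teacher_embs, dim=-1)
    return nn.functional.mse_loss(student_emb, target)

# Toy batch: 8 utterances, 200 frames, 30-dim features; two teacher "levels".
x = torch.randn(8, 200, 30)
teacher_embs = [torch.randn(8, 512), torch.randn(8, 512)]   # stand-ins for x-vector layers
student = TinyStudent(30, [512, 512])
loss = distillation_loss(student(x), teacher_embs)
loss.backward()
print(float(loss))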
1611.10305
Qunwei Li
Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang Zhang, Pramod K. Varshney
Influential Node Detection in Implicit Social Networks using Multi-task Gaussian Copula Models
NIPS 2016 Workshop, JMLR: Workshop and Conference Proceedings
null
null
null
cs.SI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influential node detection is a central research topic in social network analysis. Many existing methods rely on the assumption that the network structure is completely known \textit{a priori}. However, in many applications, network structure is unavailable to explain the underlying information diffusion phenomenon. To address the challenge of information diffusion analysis with incomplete knowledge of network structure, we develop a multi-task low rank linear influence model. By exploiting the relationships between contagions, our approach can simultaneously predict the volume (i.e. time series prediction) for each contagion (or topic) and automatically identify the most influential nodes for each contagion. The proposed model is validated using synthetic data and an ISIS twitter dataset. In addition to improving the volume prediction performance significantly, we show that the proposed approach can reliably infer the most influential users for specific contagions.
[ { "created": "Wed, 30 Nov 2016 18:46:55 GMT", "version": "v1" } ]
2016-12-01
[ [ "Li", "Qunwei", "" ], [ "Kailkhura", "Bhavya", "" ], [ "Thiagarajan", "Jayaraman J.", "" ], [ "Zhang", "Zhenliang", "" ], [ "Varshney", "Pramod K.", "" ] ]
Influential node detection is a central research topic in social network analysis. Many existing methods rely on the assumption that the network structure is completely known \textit{a priori}. However, in many applications, network structure is unavailable to explain the underlying information diffusion phenomenon. To address the challenge of information diffusion analysis with incomplete knowledge of network structure, we develop a multi-task low rank linear influence model. By exploiting the relationships between contagions, our approach can simultaneously predict the volume (i.e. time series prediction) for each contagion (or topic) and automatically identify the most influential nodes for each contagion. The proposed model is validated using synthetic data and an ISIS twitter dataset. In addition to improving the volume prediction performance significantly, we show that the proposed approach can reliably infer the most influential users for specific contagions.
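One plausible toy realization of a shared low-rank linear influence model, under the simplifying assumption that each topic's volume vector evolves as x_{t+1} ≈ A x_t with a single influence matrix A shared across topics: pool one-step pairs from all topics, fit A by least squares, truncate it to low rank, and rank nodes by their outgoing influence. This is only a sketch of the general idea, not the paper's estimator.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, rank, T, n_topics = 30, 3, 300, 4

# Synthetic volumes for several topics, all driven by one hidden low-rank influence matrix.
A_true = rng.normal(0, 1, (n_nodes, rank)) @ rng.normal(0, 1, (rank, n_nodes))
A_true *= 0.9 / np.linalg.norm(A_true, 2)             # keep the dynamics stable
topics = []
for _ in range(n_topics):
    x = rng.random(n_nodes)
    traj = [x]
    for _ in range(T):
        x = np.clip(A_true @ x + 0.01 * rng.normal(size=n_nodes), 0.0, None)
        traj.append(x)
    topics.append(np.array(traj))

# Multi-task fit: pool one-step pairs from every topic, solve least squares, truncate to low rank.
X_prev = np.vstack([tr[:-1] for tr in topics])
X_next = np.vstack([tr[1:] for tr in topics])
B, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)    # X_prev @ B ~ X_next
A_hat = B.T                                            # so that x_{t+1} ~ A_hat @ x_t
U, s, Vt = np.linalg.svd(A_hat)
A_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]

influence = np.linalg.norm(A_low, axis=0)              # column j measures node j's outgoing influence
print("most influential nodes:", np.argsort(influence)[::-1][:5])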
2402.01342
Zexi Li
Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu
Training-time Neuron Alignment through Permutation Subspace for Improving Linear Mode Connectivity and Model Fusion
preprint
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in the weight space even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape. Overcoming these barriers is crucial for understanding deep learning dynamics and enhancing model-fusion algorithms. Previous studies highlight the role of permutation symmetry in reducing post-training barriers through network permutation. However, these post-hoc methods, demanding extra computations, are less effective for larger, complex models (e.g., ViT, LLM) due to numerous permutation matrices. Thus, in this paper, we study training-time neuron alignment. Our hypothesis suggests that a training-time permutation subspace can reduce LMC barriers for free. We find that pruning at initialization supports this. Beyond pruning, we introduce TNA-PFN, a simple yet lossless algorithm using a partial gradient mask during training. TNA-PFN is theoretically and empirically validated for reducing LMC barriers. It excels in a wide range of model fusion applications, especially in federated learning, for which two algorithms based on TNA-PFN are proposed to show its prospects even under heterogeneous datasets. Moreover, TNA-PFN can enhance the generalization of model soup for vision transformers and ColD fusion for pretrained language models.
[ { "created": "Fri, 2 Feb 2024 11:57:50 GMT", "version": "v1" } ]
2024-02-05
[ [ "Li", "Zexi", "" ], [ "Li", "Zhiqi", "" ], [ "Lin", "Jie", "" ], [ "Shen", "Tao", "" ], [ "Lin", "Tao", "" ], [ "Wu", "Chao", "" ] ]
In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in the weight space even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape. Overcoming these barriers is crucial for understanding deep learning dynamics and enhancing model-fusion algorithms. Previous studies highlight the role of permutation symmetry in reducing post-training barriers through network permutation. However, these post-hoc methods, demanding extra computations, are less effective for larger, complex models (e.g., ViT, LLM) due to numerous permutation matrices. Thus, in this paper, we study training-time neuron alignment. Our hypothesis suggests that a training-time permutation subspace can reduce LMC barriers for free. We find that pruning at initialization supports this. Beyond pruning, we introduce TNA-PFN, a simple yet lossless algorithm using a partial gradient mask during training. TNA-PFN is theoretically and empirically validated for reducing LMC barriers. It excels in a wide range of model fusion applications, especially in federated learning, for which two algorithms based on TNA-PFN are proposed to show its prospects even under heterogeneous datasets. Moreover, TNA-PFN can enhance the generalization of model soup for vision transformers and ColD fusion for pretrained language models.
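The partial-gradient-mask mechanism mentioned above can be sketched in a few lines of PyTorch: draw a fixed random binary mask per parameter tensor at initialization and multiply gradients by it at every step, so a subset of weights never moves. The keep ratio, model, and data here are placeholders; the paper's actual mask design may differ.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

# Fixed random masks drawn once at initialization; masked entries never receive gradient.
keep_ratio = 0.7
masks = {n: (torch.rand_like(p) < keep_ratio).float() for n, p in model.named_parameters()}

opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 20), torch.randint(0, 10, (32,))
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    for n, p in model.named_parameters():
        p.grad.mul_(masks[n])          # partial gradient mask applied throughout training
    opt.step()
print("final loss:", float(loss))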
1701.03041
Matthew Veres
Matthew Veres, Medhat Moussa, Graham W. Taylor
Modeling Grasp Motor Imagery through Deep Conditional Generative Models
Accepted for publication in Robotics and Automation Letters (RA-L)
null
null
null
cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset.
[ { "created": "Wed, 11 Jan 2017 16:20:39 GMT", "version": "v1" } ]
2017-01-12
[ [ "Veres", "Matthew", "" ], [ "Moussa", "Medhat", "" ], [ "Taylor", "Graham W.", "" ] ]
Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset.
2404.04531
Xianping Ma
Xianping Ma, Xiaokang Zhang, Xingchen Ding, Man-On Pun, Siwei Ma
Frequency Decomposition-Driven Unsupervised Domain Adaptation for Remote Sensing Image Semantic Segmentation
28 pages, 13 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-domain semantic segmentation of remote sensing (RS) imagery based on unsupervised domain adaptation (UDA) techniques has significantly advanced deep-learning applications in the geosciences. Recently, with its ingenious and versatile architecture, the Transformer model has been successfully applied in RS-UDA tasks. However, existing UDA methods mainly focus on domain alignment in the high-level feature space. It is still challenging to retain cross-domain local spatial details and global contextual semantics simultaneously, which is crucial for the RS image semantic segmentation task. To address these problems, we propose novel high/low-frequency decomposition (HLFD) techniques to guide representation alignment in cross-domain semantic segmentation. Specifically, HLFD attempts to decompose the feature maps into high- and low-frequency components before performing the domain alignment in the corresponding subspaces. Secondly, to further facilitate the alignment of decomposed features, we propose a fully global-local generative adversarial network, namely GLGAN, to learn domain-invariant detailed and semantic features across domains by leveraging global-local transformer blocks (GLTBs). By integrating HLFD techniques and the GLGAN, a novel UDA framework called FD-GLGAN is developed to improve the cross-domain transferability and generalization capability of semantic segmentation models. Extensive experiments on two fine-resolution benchmark datasets, namely ISPRS Potsdam and ISPRS Vaihingen, highlight the effectiveness and superiority of the proposed approach as compared to the state-of-the-art UDA methods. The source code for this work will be accessible at https://github.com/sstary/SSRS.
[ { "created": "Sat, 6 Apr 2024 07:13:49 GMT", "version": "v1" } ]
2024-04-09
[ [ "Ma", "Xianping", "" ], [ "Zhang", "Xiaokang", "" ], [ "Ding", "Xingchen", "" ], [ "Pun", "Man-On", "" ], [ "Ma", "Siwei", "" ] ]
Cross-domain semantic segmentation of remote sensing (RS) imagery based on unsupervised domain adaptation (UDA) techniques has significantly advanced deep-learning applications in the geosciences. Recently, with its ingenious and versatile architecture, the Transformer model has been successfully applied in RS-UDA tasks. However, existing UDA methods mainly focus on domain alignment in the high-level feature space. It is still challenging to retain cross-domain local spatial details and global contextual semantics simultaneously, which is crucial for the RS image semantic segmentation task. To address these problems, we propose novel high/low-frequency decomposition (HLFD) techniques to guide representation alignment in cross-domain semantic segmentation. Specifically, HLFD attempts to decompose the feature maps into high- and low-frequency components before performing the domain alignment in the corresponding subspaces. Secondly, to further facilitate the alignment of decomposed features, we propose a fully global-local generative adversarial network, namely GLGAN, to learn domain-invariant detailed and semantic features across domains by leveraging global-local transformer blocks (GLTBs). By integrating HLFD techniques and the GLGAN, a novel UDA framework called FD-GLGAN is developed to improve the cross-domain transferability and generalization capability of semantic segmentation models. Extensive experiments on two fine-resolution benchmark datasets, namely ISPRS Potsdam and ISPRS Vaihingen, highlight the effectiveness and superiority of the proposed approach as compared to the state-of-the-art UDA methods. The source code for this work will be accessible at https://github.com/sstary/SSRS.
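As a small illustration of decomposing feature maps into high- and low-frequency components, the sketch below applies a centered box low-pass filter in the 2D Fourier domain and treats the residual as the high-frequency part. This is one plausible reading of the decomposition step, assuming nothing about the paper's actual HLFD filters or where they are inserted in the network.

import torch

def frequency_decompose(feat, cutoff=0.25):
    # Split a (B, C, H, W) feature map into low- and high-frequency parts using a
    # centered box low-pass filter in the 2D Fourier domain.
    B, C, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    keep = (((yy - H // 2).abs() <= cutoff * H / 2) &
            ((xx - W // 2).abs() <= cutoff * W / 2)).float()
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * keep, dim=(-2, -1))).real
    return low, feat - low               # low-frequency component, high-frequency residual

feat = torch.randn(2, 8, 32, 32)         # a stand-in feature map
low, high = frequency_decompose(feat)
print(low.shape, high.shape)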
2401.17482
Workneh Yilma Ayele
Johan Sandell, Einar Asplund, Workneh Yilma Ayele, Martin Duneld
Performance Comparison Analysis of ArangoDB, MySQL, and Neo4j: An Experimental Study of Querying Connected Data
https://hdl.handle.net/10125/107319
2024, Proceedings of the 57th Hawaii International Conference on System Sciences
null
null
cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Choosing and developing performant database solutions helps organizations optimize their operational practices and decision-making. Since graph data is becoming more common, it is crucial to develop and use databases that can handle big data with complex relationships at high and consistent performance. However, legacy database technologies such as MySQL are tailored to storing relational data and need to perform more complex queries to retrieve graph data. Previous research has dealt with performance aspects such as CPU and memory usage. In contrast, measurements of the servers' energy usage and temperature are lacking. Thus, this paper evaluates and compares state-of-the-art graph and relational databases on these performance aspects to allow a more informed selection of technologies. Graph-based big data applications benefit from an informed selection of database technologies for data retrieval and analytics problems. The results show that Neo4j queries connected data faster than MySQL and ArangoDB; energy, CPU, and memory usage results are also reported in this paper.
[ { "created": "Tue, 30 Jan 2024 22:35:26 GMT", "version": "v1" } ]
2024-02-01
[ [ "Sandell", "Johan", "" ], [ "Asplund", "Einar", "" ], [ "Ayele", "Workneh Yilma", "" ], [ "Duneld", "Martin", "" ] ]
Choosing and developing performant database solutions helps organizations optimize their operational practices and decision-making. Since graph data is becoming more common, it is crucial to develop and use databases that can handle big data with complex relationships at high and consistent performance. However, legacy database technologies such as MySQL are tailored to storing relational data and need to perform more complex queries to retrieve graph data. Previous research has dealt with performance aspects such as CPU and memory usage. In contrast, measurements of the servers' energy usage and temperature are lacking. Thus, this paper evaluates and compares state-of-the-art graph and relational databases on these performance aspects to allow a more informed selection of technologies. Graph-based big data applications benefit from an informed selection of database technologies for data retrieval and analytics problems. The results show that Neo4j queries connected data faster than MySQL and ArangoDB; energy, CPU, and memory usage results are also reported in this paper.
2007.01951
Liwei Wang
Liwei Wang, Jing Huang, Yin Li, Kun Xu, Zhengyuan Yang, Dong Yu
Improving Weakly Supervised Visual Grounding by Contrastive Knowledge Distillation
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised phrase grounding aims at learning region-phrase correspondences using only image-sentence pairs. A major challenge thus lies in the missing links between image regions and sentence phrases during training. To address this challenge, we leverage a generic object detector at training time, and propose a contrastive learning framework that accounts for both region-phrase and image-sentence matching. Our core innovation is the learning of a region-phrase score function, based on which an image-sentence score function is further constructed. Importantly, our region-phrase score function is learned by distilling from soft matching scores between the detected object names and candidate phrases within an image-sentence pair, while the image-sentence score function is supervised by ground-truth image-sentence pairs. The design of such score functions removes the need of object detection at test time, thereby significantly reducing the inference cost. Without bells and whistles, our approach achieves state-of-the-art results on visual phrase grounding, surpassing previous methods that require expensive object detectors at test time.
[ { "created": "Fri, 3 Jul 2020 22:02:00 GMT", "version": "v1" }, { "created": "Sun, 25 Apr 2021 05:11:11 GMT", "version": "v2" } ]
2021-04-27
[ [ "Wang", "Liwei", "" ], [ "Huang", "Jing", "" ], [ "Li", "Yin", "" ], [ "Xu", "Kun", "" ], [ "Yang", "Zhengyuan", "" ], [ "Yu", "Dong", "" ] ]
Weakly supervised phrase grounding aims at learning region-phrase correspondences using only image-sentence pairs. A major challenge thus lies in the missing links between image regions and sentence phrases during training. To address this challenge, we leverage a generic object detector at training time, and propose a contrastive learning framework that accounts for both region-phrase and image-sentence matching. Our core innovation is the learning of a region-phrase score function, based on which an image-sentence score function is further constructed. Importantly, our region-phrase score function is learned by distilling from soft matching scores between the detected object names and candidate phrases within an image-sentence pair, while the image-sentence score function is supervised by ground-truth image-sentence pairs. The design of such score functions removes the need of object detection at test time, thereby significantly reducing the inference cost. Without bells and whistles, our approach achieves state-of-the-art results on visual phrase grounding, surpassing previous methods that require expensive object detectors at test time.
2407.15787
Yike Zhang
Yike Zhang, Dingjie Su, Eduardo Davalos, Jack H. Noble
Unsupervised Mastoidectomy for Cochlear CT Mesh Reconstruction Using Highly Noisy Data
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cochlear Implant (CI) procedures involve inserting an array of electrodes into the cochlea located inside the inner ear. Mastoidectomy is a surgical procedure that uses a high-speed drill to remove part of the mastoid region of the temporal bone, providing safe access to the cochlea through the middle and inner ear. We aim to develop an intraoperative navigation system that registers plans created using 3D preoperative Computerized Tomography (CT) volumes with the 2D surgical microscope view. Herein, we propose a method to synthesize the mastoidectomy volume using only the preoperative CT scan, where the mastoid is intact. We introduce an unsupervised learning framework designed to synthesize mastoidectomy. For model training purposes, this method uses postoperative CT scans to avoid manual data cleaning or labeling, even when the region removed during mastoidectomy is visible but affected by metal artifacts, low signal-to-noise ratio, or electrode wiring. Our approach estimates mastoidectomy regions with a mean dice score of 70.0%. This approach represents a major step forward for CI intraoperative navigation by predicting realistic mastoidectomy-removed regions in preoperative planning that can be used to register the pre-surgery plan to intraoperative microscopy.
[ { "created": "Mon, 22 Jul 2024 16:47:29 GMT", "version": "v1" }, { "created": "Thu, 8 Aug 2024 14:33:12 GMT", "version": "v2" } ]
2024-08-09
[ [ "Zhang", "Yike", "" ], [ "Su", "Dingjie", "" ], [ "Davalos", "Eduardo", "" ], [ "Noble", "Jack H.", "" ] ]
Cochlear Implant (CI) procedures involve inserting an array of electrodes into the cochlea located inside the inner ear. Mastoidectomy is a surgical procedure that uses a high-speed drill to remove part of the mastoid region of the temporal bone, providing safe access to the cochlea through the middle and inner ear. We aim to develop an intraoperative navigation system that registers plans created using 3D preoperative Computerized Tomography (CT) volumes with the 2D surgical microscope view. Herein, we propose a method to synthesize the mastoidectomy volume using only the preoperative CT scan, where the mastoid is intact. We introduce an unsupervised learning framework designed to synthesize mastoidectomy. For model training purposes, this method uses postoperative CT scans to avoid manual data cleaning or labeling, even when the region removed during mastoidectomy is visible but affected by metal artifacts, low signal-to-noise ratio, or electrode wiring. Our approach estimates mastoidectomy regions with a mean dice score of 70.0%. This approach represents a major step forward for CI intraoperative navigation by predicting realistic mastoidectomy-removed regions in preoperative planning that can be used to register the pre-surgery plan to intraoperative microscopy.
2403.00592
Zhaochong An
Zhaochong An, Guolei Sun, Yun Liu, Fayao Liu, Zongwei Wu, Dan Wang, Luc Van Gool, Serge Belongie
Rethinking Few-shot 3D Point Cloud Semantic Segmentation
Accepted to CVPR 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS), with a focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution. The former arises from non-uniform point sampling, allowing models to distinguish the density disparities between foreground and background for easier segmentation. The latter results from sampling only 2,048 points, limiting semantic information and deviating from the real-world practice. To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built. Moreover, we propose a novel FS-PCS model. While previous methods are based on feature optimization by mainly refining support features to enhance prototypes, our method is based on correlation optimization, referred to as Correlation Optimization Segmentation (COSeg). Specifically, we compute Class-specific Multi-prototypical Correlation (CMC) for each query point, representing its correlations to category prototypes. Then, we propose the Hyper Correlation Augmentation (HCA) module to enhance CMC. Furthermore, tackling the inherent property of few-shot training to incur base susceptibility for models, we propose to learn non-parametric prototypes for the base classes during training. The learned base prototypes are used to calibrate correlations for the background class through a Base Prototypes Calibration (BPC) module. Experiments on popular datasets demonstrate the superiority of COSeg over existing methods. The code is available at: https://github.com/ZhaochongAn/COSeg
[ { "created": "Fri, 1 Mar 2024 15:14:47 GMT", "version": "v1" } ]
2024-03-04
[ [ "An", "Zhaochong", "" ], [ "Sun", "Guolei", "" ], [ "Liu", "Yun", "" ], [ "Liu", "Fayao", "" ], [ "Wu", "Zongwei", "" ], [ "Wang", "Dan", "" ], [ "Van Gool", "Luc", "" ], [ "Belongie", "Serge", "" ] ]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS), with a focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution. The former arises from non-uniform point sampling, allowing models to distinguish the density disparities between foreground and background for easier segmentation. The latter results from sampling only 2,048 points, limiting semantic information and deviating from the real-world practice. To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built. Moreover, we propose a novel FS-PCS model. While previous methods are based on feature optimization by mainly refining support features to enhance prototypes, our method is based on correlation optimization, referred to as Correlation Optimization Segmentation (COSeg). Specifically, we compute Class-specific Multi-prototypical Correlation (CMC) for each query point, representing its correlations to category prototypes. Then, we propose the Hyper Correlation Augmentation (HCA) module to enhance CMC. Furthermore, tackling the inherent property of few-shot training to incur base susceptibility for models, we propose to learn non-parametric prototypes for the base classes during training. The learned base prototypes are used to calibrate correlations for the background class through a Base Prototypes Calibration (BPC) module. Experiments on popular datasets demonstrate the superiority of COSeg over existing methods. The code is available at: https://github.com/ZhaochongAn/COSeg
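The Class-specific Multi-prototypical Correlation tensor described above amounts to computing, for every query point, its similarity to every prototype of every class. The sketch below does this with cosine similarity; the prototype construction, the HCA module, and the BPC calibration are not reproduced, and the shapes are illustrative.

import torch
import torch.nn.functional as F

def multi_prototype_correlation(query_feats, prototypes):
    # query_feats: (N, D) per-point features; prototypes: (C, K, D), K prototypes per class.
    # Returns the (N, C, K) cosine correlations between every query point and every prototype.
    q = F.normalize(query_feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return torch.einsum("nd,ckd->nck", q, p)

query = torch.randn(2048, 64)            # query points and their feature vectors
protos = torch.randn(3, 5, 64)           # background + 2 classes, 5 prototypes each
cmc = multi_prototype_correlation(query, protos)
pred = cmc.mean(dim=-1).argmax(dim=-1)   # naive per-point decision from the correlations
print(cmc.shape, pred.shape)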
2311.17910
Muhammed Kocabas
Muhammed Kocabas, Jen-Hao Rick Chang, James Gabriel, Oncel Tuzel, Anurag Ranjan
HUGS: Human Gaussian Splats
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in neural rendering have improved both training and rendering times by orders of magnitude. While these methods demonstrate state-of-the-art quality and speed, they are designed for photogrammetry of static scenes and do not generalize well to freely moving humans in the environment. In this work, we introduce Human Gaussian Splats (HUGS) that represents an animatable human together with the scene using 3D Gaussian Splatting (3DGS). Our method takes only a monocular video with a small number of (50-100) frames, and it automatically learns to disentangle the static scene and a fully animatable human avatar within 30 minutes. We utilize the SMPL body model to initialize the human Gaussians. To capture details that are not modeled by SMPL (e.g. cloth, hairs), we allow the 3D Gaussians to deviate from the human body model. Utilizing 3D Gaussians for animated humans brings new challenges, including the artifacts created when articulating the Gaussians. We propose to jointly optimize the linear blend skinning weights to coordinate the movements of individual Gaussians during animation. Our approach enables novel-pose synthesis of human and novel view synthesis of both the human and the scene. We achieve state-of-the-art rendering quality with a rendering speed of 60 FPS while being ~100x faster to train over previous work. Our code will be announced here: https://github.com/apple/ml-hugs
[ { "created": "Wed, 29 Nov 2023 18:56:32 GMT", "version": "v1" } ]
2023-11-30
[ [ "Kocabas", "Muhammed", "" ], [ "Chang", "Jen-Hao Rick", "" ], [ "Gabriel", "James", "" ], [ "Tuzel", "Oncel", "" ], [ "Ranjan", "Anurag", "" ] ]
Recent advances in neural rendering have improved both training and rendering times by orders of magnitude. While these methods demonstrate state-of-the-art quality and speed, they are designed for photogrammetry of static scenes and do not generalize well to freely moving humans in the environment. In this work, we introduce Human Gaussian Splats (HUGS) that represents an animatable human together with the scene using 3D Gaussian Splatting (3DGS). Our method takes only a monocular video with a small number of (50-100) frames, and it automatically learns to disentangle the static scene and a fully animatable human avatar within 30 minutes. We utilize the SMPL body model to initialize the human Gaussians. To capture details that are not modeled by SMPL (e.g. cloth, hairs), we allow the 3D Gaussians to deviate from the human body model. Utilizing 3D Gaussians for animated humans brings new challenges, including the artifacts created when articulating the Gaussians. We propose to jointly optimize the linear blend skinning weights to coordinate the movements of individual Gaussians during animation. Our approach enables novel-pose synthesis of human and novel view synthesis of both the human and the scene. We achieve state-of-the-art rendering quality with a rendering speed of 60 FPS while being ~100x faster to train over previous work. Our code will be announced here: https://github.com/apple/ml-hugs
2308.00939
Xinze Li
Xinze Li, Kezhi Mao, Fanfan Lin, Zijian Feng
Feature-aware conditional GAN for category text generation
27 pages, 8 figures
null
10.1016/j.neucom.2023.126352
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Category text generation has received considerable attention since it is beneficial for various natural language processing tasks. Recently, the generative adversarial network (GAN) has attained promising performance in text generation, attributed to its adversarial training process. However, text GANs suffer from several issues, including discreteness, training instability, mode collapse, and a lack of diversity and controllability. To address these issues, this paper proposes a novel GAN framework, the feature-aware conditional GAN (FA-GAN), for controllable category text generation. In FA-GAN, the generator has a sequence-to-sequence structure for improving sentence diversity, which consists of three encoders including a special feature-aware encoder and a category-aware encoder, and one relational-memory-core-based decoder with the Gumbel SoftMax activation function. The discriminator has an additional category classification head. To generate sentences with specified categories, a multi-class classification loss is added to the adversarial training. Comprehensive experiments have been conducted, and the results show that FA-GAN consistently outperforms 10 state-of-the-art text generation approaches on 6 text classification datasets. The case study demonstrates that the synthetic sentences generated by FA-GAN can match the required categories and are aware of the features of conditioned sentences, with good readability, fluency, and text authenticity.
[ { "created": "Wed, 2 Aug 2023 04:43:54 GMT", "version": "v1" } ]
2023-08-03
[ [ "Li", "Xinze", "" ], [ "Mao", "Kezhi", "" ], [ "Lin", "Fanfan", "" ], [ "Feng", "Zijian", "" ] ]
Category text generation has received considerable attention since it is beneficial for various natural language processing tasks. Recently, the generative adversarial network (GAN) has attained promising performance in text generation, attributed to its adversarial training process. However, text GANs suffer from several issues, including discreteness, training instability, mode collapse, and a lack of diversity and controllability. To address these issues, this paper proposes a novel GAN framework, the feature-aware conditional GAN (FA-GAN), for controllable category text generation. In FA-GAN, the generator has a sequence-to-sequence structure for improving sentence diversity, which consists of three encoders including a special feature-aware encoder and a category-aware encoder, and one relational-memory-core-based decoder with the Gumbel SoftMax activation function. The discriminator has an additional category classification head. To generate sentences with specified categories, a multi-class classification loss is added to the adversarial training. Comprehensive experiments have been conducted, and the results show that FA-GAN consistently outperforms 10 state-of-the-art text generation approaches on 6 text classification datasets. The case study demonstrates that the synthetic sentences generated by FA-GAN can match the required categories and are aware of the features of conditioned sentences, with good readability, fluency, and text authenticity.
2101.12736
Osman Ramadan
Osman Ramadan, James Withers, Douglas Orr
N-grams Bayesian Differential Privacy
12 pages, 6 figures
null
null
null
cs.CR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential privacy has gained popularity in machine learning as a strong privacy guarantee, in contrast to privacy mitigation techniques such as k-anonymity. However, applying differential privacy to n-gram counts significantly degrades the utility of derived language models due to their large vocabularies. We propose a differential privacy mechanism that uses public data as a prior in a Bayesian setup to provide tighter bounds on the privacy loss metric epsilon, and thus better privacy-utility trade-offs. It first transforms the counts to log space, approximating the distribution of the public and private data as Gaussian. The posterior distribution is then evaluated and softmax is applied to produce a probability distribution. This technique achieves up to 85% reduction in KL divergence compared to previously known mechanisms at epsilon equals 0.1. We compare our mechanism to k-anonymity in an n-gram language modelling task and show that it offers competitive performance at large vocabulary sizes, while also providing superior privacy protection.
[ { "created": "Fri, 29 Jan 2021 18:48:49 GMT", "version": "v1" } ]
2021-02-01
[ [ "Ramadan", "Osman", "" ], [ "Withers", "James", "" ], [ "Orr", "Douglas", "" ] ]
Differential privacy has gained popularity in machine learning as a strong privacy guarantee, in contrast to privacy mitigation techniques such as k-anonymity. However, applying differential privacy to n-gram counts significantly degrades the utility of derived language models due to their large vocabularies. We propose a differential privacy mechanism that uses public data as a prior in a Bayesian setup to provide tighter bounds on the privacy loss metric epsilon, and thus better privacy-utility trade-offs. It first transforms the counts to log space, approximating the distribution of the public and private data as Gaussian. The posterior distribution is then evaluated and softmax is applied to produce a probability distribution. This technique achieves up to 85% reduction in KL divergence compared to previously known mechanisms at epsilon equals 0.1. We compare our mechanism to k-anonymity in an n-gram language modelling task and show that it offers competitive performance at large vocabulary sizes, while also providing superior privacy protection.
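A toy numerical sketch of the mechanism's main steps as stated in the abstract: move counts to log space, combine a noisy private observation with a public-data Gaussian prior via a precision-weighted posterior mean, and apply softmax to obtain a distribution over n-grams. The noise scale and prior width are placeholder constants, and this sketch does not substitute for the paper's privacy accounting.

import numpy as np

def private_ngram_probs(private_counts, public_counts, noise_std=1.0, prior_std=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Work in log-count space so that both sources are roughly Gaussian.
    log_priv = np.log(np.asarray(private_counts, dtype=float) + 1.0)
    log_pub = np.log(np.asarray(public_counts, dtype=float) + 1.0)

    noisy = log_priv + rng.normal(0.0, noise_std, size=log_priv.shape)   # Gaussian noise on private data

    # Gaussian posterior mean: precision-weighted blend of the public prior and the noisy private data.
    post_prec = 1.0 / prior_std**2 + 1.0 / noise_std**2
    post_mean = (log_pub / prior_std**2 + noisy / noise_std**2) / post_prec

    # Softmax turns the posterior log-counts into a probability distribution over n-grams.
    z = np.exp(post_mean - post_mean.max())
    return z / z.sum()

print(private_ngram_probs(private_counts=[120, 30, 5, 0], public_counts=[100, 40, 10, 2]))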