id: stringlengths 9–10
submitter: stringlengths 1–64
authors: stringlengths 4–20.7k
title: stringlengths 4–246
comments: stringlengths 1–523
journal-ref: stringlengths 4–404
doi: stringlengths 11–153
report-no: stringlengths 2–254
categories: stringlengths 5–98
license: stringclasses (9 values)
orig_abstract: stringlengths 14–3.35k
versions: listlengths 1–60
update_date: stringlengths 10–10
authors_parsed: listlengths 1–1.35k
abstract: stringlengths 11–3.34k
2009.08753
Yan Hong
Yan Hong, Li Niu, Jianfu Zhang, Jing Liang, Liqing Zhang
DeltaGAN: Towards Diverse Few-shot Image Generation with Sample-Specific Delta
This paper is accepted by ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning to generate new images for a novel category from only a few images, known as few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but their diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures the intra-category transformation, i.e., the "delta", between same-category pairs. The generation subnetwork generates a sample-specific "delta" for an input image, which is combined with the input image to generate a new image within the same category. Moreover, an adversarial delta matching loss is designed to link the two subnetworks together. Extensive experiments on five few-shot image datasets demonstrate the effectiveness of our proposed method.
[ { "created": "Fri, 18 Sep 2020 11:25:05 GMT", "version": "v1" }, { "created": "Wed, 16 Dec 2020 01:10:20 GMT", "version": "v2" }, { "created": "Wed, 14 Apr 2021 02:44:46 GMT", "version": "v3" }, { "created": "Thu, 28 Jul 2022 03:43:23 GMT", "version": "v4" } ]
2022-07-29
[ [ "Hong", "Yan", "" ], [ "Niu", "Li", "" ], [ "Zhang", "Jianfu", "" ], [ "Liang", "Jing", "" ], [ "Zhang", "Liqing", "" ] ]
Learning to generate new images for a novel category from only a few images, known as few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but their diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures the intra-category transformation, i.e., the "delta", between same-category pairs. The generation subnetwork generates a sample-specific "delta" for an input image, which is combined with the input image to generate a new image within the same category. Moreover, an adversarial delta matching loss is designed to link the two subnetworks together. Extensive experiments on five few-shot image datasets demonstrate the effectiveness of our proposed method.
2311.07410
Oussama Ben Sghaier
Oussama Ben Sghaier, Jean-Sebastien Boudrias, Houari Sahraoui
Toward Optimal Psychological Functioning in AI-driven Software Engineering Tasks: The SEWELL-CARE Assessment Framework
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
In the field of software engineering, there has been a shift towards utilizing various artificial intelligence techniques to address challenges and create innovative tools. These solutions are aimed at enhancing efficiency, automating tasks, and providing valuable support to developers. While the technical aspects are crucial, the well-being and psychology of the individuals performing these tasks are often overlooked. This paper argues that a holistic approach is essential, one that considers the technical, psychological, and social aspects of software engineering tasks. To address this gap, we introduce SEWELL-CARE, a conceptual framework designed to assess AI-driven software engineering tasks from multiple perspectives, with the goal of customizing the tools to improve the efficiency, well-being, and psychological functioning of developers. By emphasizing both technical and human dimensions, our framework provides a nuanced evaluation that goes beyond traditional technical metrics.
[ { "created": "Mon, 13 Nov 2023 15:44:07 GMT", "version": "v1" } ]
2023-11-14
[ [ "Sghaier", "Oussama Ben", "" ], [ "Boudrias", "Jean-Sebastien", "" ], [ "Sahraoui", "Houari", "" ] ]
In the field of software engineering, there has been a shift towards utilizing various artificial intelligence techniques to address challenges and create innovative tools. These solutions are aimed at enhancing efficiency, automating tasks, and providing valuable support to developers. While the technical aspects are crucial, the well-being and psychology of the individuals performing these tasks are often overlooked. This paper argues that a holistic approach is essential, one that considers the technical, psychological, and social aspects of software engineering tasks. To address this gap, we introduce SEWELL-CARE, a conceptual framework designed to assess AI-driven software engineering tasks from multiple perspectives, with the goal of customizing the tools to improve the efficiency, well-being, and psychological functioning of developers. By emphasizing both technical and human dimensions, our framework provides a nuanced evaluation that goes beyond traditional technical metrics.
1412.4564
Karel Lenc
Andrea Vedaldi, Karel Lenc
MatConvNet - Convolutional Neural Networks for MATLAB
Updated for release v1.0-beta20
null
null
null
cs.CV cs.LG cs.MS cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility. It exposes the building blocks of CNNs as easy-to-use MATLAB functions, providing routines for computing linear convolutions with filter banks, feature pooling, and many more. In this manner, MatConvNet allows fast prototyping of new CNN architectures; at the same time, it supports efficient computation on CPU and GPU, allowing complex models to be trained on large datasets such as ImageNet ILSVRC. This document provides an overview of CNNs, explains how they are implemented in MatConvNet, and gives the technical details of each computational block in the toolbox.
[ { "created": "Mon, 15 Dec 2014 12:23:35 GMT", "version": "v1" }, { "created": "Sun, 21 Jun 2015 15:35:25 GMT", "version": "v2" }, { "created": "Thu, 5 May 2016 14:31:06 GMT", "version": "v3" } ]
2016-05-06
[ [ "Vedaldi", "Andrea", "" ], [ "Lenc", "Karel", "" ] ]
MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility. It exposes the building blocks of CNNs as easy-to-use MATLAB functions, providing routines for computing linear convolutions with filter banks, feature pooling, and many more. In this manner, MatConvNet allows fast prototyping of new CNN architectures; at the same time, it supports efficient computation on CPU and GPU, allowing complex models to be trained on large datasets such as ImageNet ILSVRC. This document provides an overview of CNNs, explains how they are implemented in MatConvNet, and gives the technical details of each computational block in the toolbox.
2205.15236
Yu Gong
Yu Gong, Greg Mori, Frederick Tung
RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression
Accepted to ICML 2022
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data imbalance, in which a plurality of the data samples come from a small proportion of labels, poses a challenge in training deep neural networks. Unlike classification, in regression the labels are continuous, potentially boundless, and form a natural ordering. These distinct features of regression call for new techniques that leverage the additional information encoded in label-space relationships. This paper presents the RankSim (ranking similarity) regularizer for deep imbalanced regression, which encodes an inductive bias that samples that are closer in label space should also be closer in feature space. In contrast to recent distribution smoothing based approaches, RankSim captures both nearby and distant relationships: for a given data sample, RankSim encourages the sorted list of its neighbors in label space to match the sorted list of its neighbors in feature space. RankSim is complementary to conventional imbalanced learning techniques, including re-weighting, two-stage training, and distribution smoothing, and lifts the state-of-the-art performance on three imbalanced regression benchmarks: IMDB-WIKI-DIR, AgeDB-DIR, and STS-B-DIR.
[ { "created": "Mon, 30 May 2022 16:51:25 GMT", "version": "v1" }, { "created": "Fri, 24 Jun 2022 16:43:23 GMT", "version": "v2" } ]
2022-06-27
[ [ "Gong", "Yu", "" ], [ "Mori", "Greg", "" ], [ "Tung", "Frederick", "" ] ]
Data imbalance, in which a plurality of the data samples come from a small proportion of labels, poses a challenge in training deep neural networks. Unlike classification, in regression the labels are continuous, potentially boundless, and form a natural ordering. These distinct features of regression call for new techniques that leverage the additional information encoded in label-space relationships. This paper presents the RankSim (ranking similarity) regularizer for deep imbalanced regression, which encodes an inductive bias that samples that are closer in label space should also be closer in feature space. In contrast to recent distribution smoothing based approaches, RankSim captures both nearby and distant relationships: for a given data sample, RankSim encourages the sorted list of its neighbors in label space to match the sorted list of its neighbors in feature space. RankSim is complementary to conventional imbalanced learning techniques, including re-weighting, two-stage training, and distribution smoothing, and lifts the state-of-the-art performance on three imbalanced regression benchmarks: IMDB-WIKI-DIR, AgeDB-DIR, and STS-B-DIR.
2012.15002
Spencer Compton
Spencer Compton, Slobodan Mitrovi\'c, Ronitt Rubinfeld
New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling
Main result (Theorem 2) has stronger guarantees, updates/queries now in $\operatorname{poly}(\log(n),\frac{1}{\varepsilon})$ time
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For $(1+\varepsilon)$-approximation of job scheduling of $n$ jobs on a single machine, we develop a fully dynamic algorithm with $O(\frac{\log{n}}{\varepsilon})$ update and $O(\log{n})$ query worst-case time. Further, we design a local computation algorithm that uses only $O(\frac{\log{N}}{\varepsilon})$ queries when all jobs have length at least $1$ and have starting/ending times within $[0,N]$. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query time are $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$. Equivalently, this is the first algorithm that maintains a $(1+\varepsilon)$-approximation of the maximum independent set of a collection of weighted intervals in $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$ time updates/queries. This is an exponential improvement in $1/\varepsilon$ over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness. We also extend our approaches for interval scheduling on a single machine to examine the setting with $M$ machines.
[ { "created": "Wed, 30 Dec 2020 01:58:16 GMT", "version": "v1" }, { "created": "Wed, 10 Mar 2021 10:01:14 GMT", "version": "v2" }, { "created": "Fri, 11 Mar 2022 05:02:58 GMT", "version": "v3" }, { "created": "Mon, 14 Mar 2022 01:25:42 GMT", "version": "v4" }, { "created": "Thu, 23 Feb 2023 20:44:51 GMT", "version": "v5" } ]
2023-02-27
[ [ "Compton", "Spencer", "" ], [ "Mitrović", "Slobodan", "" ], [ "Rubinfeld", "Ronitt", "" ] ]
Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results. For $(1+\varepsilon)$-approximation of job scheduling of $n$ jobs on a single machine, we develop a fully dynamic algorithm with $O(\frac{\log{n}}{\varepsilon})$ update and $O(\log{n})$ query worst-case time. Further, we design a local computation algorithm that uses only $O(\frac{\log{N}}{\varepsilon})$ queries when all jobs have length at least $1$ and have starting/ending times within $[0,N]$. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we design a fully dynamic deterministic algorithm whose worst-case update and query time are $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$. Equivalently, this is the first algorithm that maintains a $(1+\varepsilon)$-approximation of the maximum independent set of a collection of weighted intervals in $\operatorname{poly}(\log n,\frac{1}{\varepsilon})$ time updates/queries. This is an exponential improvement in $1/\varepsilon$ over the running time of a randomized algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020], while also removing all dependence on the values of the jobs' starting/ending times and rewards, as well as removing the need for any randomness. We also extend our approaches for interval scheduling on a single machine to examine the setting with $M$ machines.
2109.08219
Anil Gaihre
Anil Gaihre, Da Zheng, Scott Weitze, Lingda Li, Shuaiwen Leon Song, Caiwen Ding, Xiaoye S Li, Hang Liu
Dr. Top-k: Delegate-Centric Top-k on GPUs
To be published in The International Conference for High Performance Computing, Networking, Storage and Analysis (SC 21)
null
10.1145/3458817.3476141
null
cs.IR cs.DB cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent top-$k$ computation efforts explore the possibility of revising various sorting algorithms to answer top-$k$ queries on GPUs. These endeavors, unfortunately, perform significantly more work than needed. This paper introduces Dr. Top-k, a delegate-centric top-$k$ system on GPUs that can reduce the top-$k$ workloads significantly. In particular, it contains three major contributions: First, we introduce a comprehensive design of the delegate-centric concept, including maximum delegate, delegate-based filtering, and $\beta$ delegate mechanisms, which help reduce the workload for top-$k$ by up to more than 99%. Second, due to the difficulty and importance of deriving a proper subrange size, we perform a rigorous theoretical analysis, coupled with thorough experimental validation, to identify the desirable subrange size. Third, we introduce four key system optimizations to enable fast multi-GPU top-$k$ computation. Taken together, this work consistently outperforms the state-of-the-art.
[ { "created": "Thu, 16 Sep 2021 20:59:33 GMT", "version": "v1" } ]
2021-09-20
[ [ "Gaihre", "Anil", "" ], [ "Zheng", "Da", "" ], [ "Weitze", "Scott", "" ], [ "Li", "Lingda", "" ], [ "Song", "Shuaiwen Leon", "" ], [ "Ding", "Caiwen", "" ], [ "Li", "Xiaoye S", "" ], [ "Liu", "Hang", "" ] ]
Recent top-$k$ computation efforts explore the possibility of revising various sorting algorithms to answer top-$k$ queries on GPUs. These endeavors, unfortunately, perform significantly more work than needed. This paper introduces Dr. Top-k, a delegate-centric top-$k$ system on GPUs that can reduce the top-$k$ workloads significantly. In particular, it contains three major contributions: First, we introduce a comprehensive design of the delegate-centric concept, including maximum delegate, delegate-based filtering, and $\beta$ delegate mechanisms, which help reduce the workload for top-$k$ by up to more than 99%. Second, due to the difficulty and importance of deriving a proper subrange size, we perform a rigorous theoretical analysis, coupled with thorough experimental validation, to identify the desirable subrange size. Third, we introduce four key system optimizations to enable fast multi-GPU top-$k$ computation. Taken together, this work consistently outperforms the state-of-the-art.
1509.06079
EPTCS
Sol Swords (Centaur Technology, Inc.), Jared Davis (Centaur Technology, Inc.)
Fix Your Types
In Proceedings ACL2 2015, arXiv:1509.05526
EPTCS 192, 2015, pp. 3-16
10.4204/EPTCS.192.2
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When using existing ACL2 datatype frameworks, many theorems require type hypotheses. These hypotheses slow down the theorem prover, are tedious to write, and are easy to forget. We describe a principled approach to types that provides strong type safety and execution efficiency while avoiding type hypotheses, and we present a library that automates this approach. Using this approach, types help you catch programming errors and then get out of the way of theorem proving.
[ { "created": "Mon, 21 Sep 2015 00:34:37 GMT", "version": "v1" } ]
2015-09-22
[ [ "Swords", "Sol", "", "Centaur Technology, Inc." ], [ "Davis", "Jared", "", "Centaur Technology, Inc." ] ]
When using existing ACL2 datatype frameworks, many theorems require type hypotheses. These hypotheses slow down the theorem prover, are tedious to write, and are easy to forget. We describe a principled approach to types that provides strong type safety and execution efficiency while avoiding type hypotheses, and we present a library that automates this approach. Using this approach, types help you catch programming errors and then get out of the way of theorem proving.
2304.14278
Beyza Dabak
Beyza Dabak, Ece Tiryaki, Robert Calderbank, Ahmed Hareedy
LDPC Decoders Prefer More Reliable Parity Bits: Unequal Data Protection Over BSC
8 pages (double column), 1 figure, submitted to the International Symposium on Topics in Coding (ISTC)
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low-density parity-check (LDPC) codes are specified by graphs, and are the error correction technique of choice in many communications and data storage contexts. Message passing decoders diffuse information carried by parity bits into the payload, and this paper measures the value of engineering parity bits to be more reliable than message bits. We consider the binary symmetric channel (BSC) and measure the impact of unequal data protection on the threshold of a regular LDPC code. Our analysis also includes doping where the parity bits are known to the decoder. We investigate BSC with Gallager-A decoder, with a $3$-level-alphabet decoder, and with a full belief propagation decoder. We demonstrate through theoretical analysis and simulation that non-equiprobable inputs lead to significant improvements both in the threshold and in the speed with which the decoder converges. We also show that all these improvements are possible even with a simple $3$-level-alphabet decoder.
[ { "created": "Thu, 27 Apr 2023 15:34:58 GMT", "version": "v1" }, { "created": "Sun, 30 Apr 2023 04:00:07 GMT", "version": "v2" } ]
2023-05-02
[ [ "Dabak", "Beyza", "" ], [ "Tiryaki", "Ece", "" ], [ "Calderbank", "Robert", "" ], [ "Hareedy", "Ahmed", "" ] ]
Low-density parity-check (LDPC) codes are specified by graphs, and are the error correction technique of choice in many communications and data storage contexts. Message passing decoders diffuse information carried by parity bits into the payload, and this paper measures the value of engineering parity bits to be more reliable than message bits. We consider the binary symmetric channel (BSC) and measure the impact of unequal data protection on the threshold of a regular LDPC code. Our analysis also includes doping where the parity bits are known to the decoder. We investigate BSC with Gallager-A decoder, with a $3$-level-alphabet decoder, and with a full belief propagation decoder. We demonstrate through theoretical analysis and simulation that non-equiprobable inputs lead to significant improvements both in the threshold and in the speed with which the decoder converges. We also show that all these improvements are possible even with a simple $3$-level-alphabet decoder.
2102.01356
Tao Bai
Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang
Recent Advances in Adversarial Training for Adversarial Robustness
accepted by International Joint Conference on Artificial Intelligence (IJCAI-21)
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training aims to promote the robustness of models intrinsically. Over the last few years, adversarial training has been studied and discussed from various aspects. A variety of improvements and developments of adversarial training have been proposed, which were, however, neglected in existing surveys. For the first time in this survey, we systematically review the recent progress on adversarial training for adversarial robustness with a novel taxonomy. Then we discuss the generalization problems in adversarial training from three perspectives. Finally, we highlight the challenges that are not yet fully tackled and present potential future directions.
[ { "created": "Tue, 2 Feb 2021 07:10:22 GMT", "version": "v1" }, { "created": "Thu, 11 Feb 2021 07:13:24 GMT", "version": "v2" }, { "created": "Thu, 18 Feb 2021 06:56:07 GMT", "version": "v3" }, { "created": "Tue, 23 Feb 2021 09:49:42 GMT", "version": "v4" }, { "created": "Wed, 21 Apr 2021 01:57:53 GMT", "version": "v5" } ]
2021-04-22
[ [ "Bai", "Tao", "" ], [ "Luo", "Jinqi", "" ], [ "Zhao", "Jun", "" ], [ "Wen", "Bihan", "" ], [ "Wang", "Qian", "" ] ]
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training aims to promote the robustness of models intrinsically. Over the last few years, adversarial training has been studied and discussed from various aspects. A variety of improvements and developments of adversarial training have been proposed, which were, however, neglected in existing surveys. For the first time in this survey, we systematically review the recent progress on adversarial training for adversarial robustness with a novel taxonomy. Then we discuss the generalization problems in adversarial training from three perspectives. Finally, we highlight the challenges that are not yet fully tackled and present potential future directions.
1603.08020
Raj Jain
Sastri Kota, Mukul Goyal, Rohit Goyal and Raj Jain
Multimedia Satellite Networks and TCP/IP Traffic Transport
null
International Journal of Computers and Applications, Vol. 23, No. 2, 2001, pp. 115-128
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To meet an increasing demand for multimedia services and electronic connectivity across the world, satellite networks will play an indispensable role in the deployment of global networks. The new services gaining momentum include mobile services, private intranets, and high-data-rate internet access carried over integrated satellite-fiber networks. Several performance issues need to be addressed before a transport layer protocol such as TCP can work satisfactorily over satellite ATM for large delay-bandwidth networks. In this paper, we review the proposed satellite systems and discuss challenges such as traffic management and QoS requirements for broadband satellite ATM networks. The performance results of TCP enhancements for Unspecified Bit Rate over ATM (ATM-UBR+) in large bandwidth-delay environments, with various end-system policies and drop policies for several buffer sizes, are presented.
[ { "created": "Fri, 25 Mar 2016 20:22:52 GMT", "version": "v1" } ]
2016-03-29
[ [ "Kota", "Sastri", "" ], [ "Goyal", "Mukul", "" ], [ "Goyal", "Rohit", "" ], [ "Jain", "Raj", "" ] ]
To meet an increasing demand for multimedia services and electronic connectivity across the world, satellite networks will play an indispensable role in the deployment of global networks. The new services gaining momentum include mobile services, private intranets, and high-data-rate internet access carried over integrated satellite-fiber networks. Several performance issues need to be addressed before a transport layer protocol such as TCP can work satisfactorily over satellite ATM for large delay-bandwidth networks. In this paper, we review the proposed satellite systems and discuss challenges such as traffic management and QoS requirements for broadband satellite ATM networks. The performance results of TCP enhancements for Unspecified Bit Rate over ATM (ATM-UBR+) in large bandwidth-delay environments, with various end-system policies and drop policies for several buffer sizes, are presented.
1604.04653
Xavier Gir\'o-i-Nieto
Eva Mohedano, Amaia Salvador, Kevin McGuinness, Ferran Marques, Noel E. O'Connor and Xavier Giro-i-Nieto
Bags of Local Convolutional Features for Scalable Instance Search
Preprint of a short paper accepted in the ACM International Conference on Multimedia Retrieval (ICMR) 2016 (New York City, NY, USA)
null
10.1145/2911996.2912061
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of a CNN using the bag-of-words (BoW) aggregation scheme. Assigning each local array of activations in a convolutional layer to a visual word produces an \textit{assignment map}, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling on a subset of the challenging TRECVid INS benchmark.
[ { "created": "Fri, 15 Apr 2016 22:02:22 GMT", "version": "v1" } ]
2016-06-21
[ [ "Mohedano", "Eva", "" ], [ "Salvador", "Amaia", "" ], [ "McGuinness", "Kevin", "" ], [ "Marques", "Ferran", "" ], [ "O'Connor", "Noel E.", "" ], [ "Giro-i-Nieto", "Xavier", "" ] ]
This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of a CNN using the bag-of-words (BoW) aggregation scheme. Assigning each local array of activations in a convolutional layer to a visual word produces an \textit{assignment map}, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling on a subset of the challenging TRECVid INS benchmark.
2005.06647
MohammadNoor Injadat
MohammadNoor Injadat, Abdallah Moubayed, Ali Bou Nassif, Abdallah Shami
Systematic Ensemble Model Selection Approach for Educational Data Mining
47 Pages, 20 figures, 13 tables, accepted in Elsevier's Knowledge-Based Systems
null
10.1016/j.knosys.2020.105992
null
cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A plethora of research has been done in the past on predicting students' performance in order to support their development. Many institutions are focused on improving performance and education quality, which can be achieved by utilizing data mining techniques to analyze and predict students' performance and to determine possible factors that may affect their final marks. To address this issue, this work starts by thoroughly exploring and analyzing two different datasets at two separate stages of course delivery (20 percent and 50 percent, respectively) using multiple graphical, statistical, and quantitative techniques. The feature analysis provides insights into the nature of the different features considered and helps in the choice of the machine learning algorithms and their parameters. Furthermore, this work proposes a systematic approach based on the Gini index and p-value to select a suitable ensemble learner from a combination of six potential machine learning algorithms. Experimental results show that the proposed ensemble models achieve high accuracy and a low false positive rate at all stages for both datasets.
[ { "created": "Wed, 13 May 2020 22:25:58 GMT", "version": "v1" } ]
2020-05-15
[ [ "Injadat", "MohammadNoor", "" ], [ "Moubayed", "Abdallah", "" ], [ "Nassif", "Ali Bou", "" ], [ "Shami", "Abdallah", "" ] ]
A plethora of research has been done in the past on predicting students' performance in order to support their development. Many institutions are focused on improving performance and education quality, which can be achieved by utilizing data mining techniques to analyze and predict students' performance and to determine possible factors that may affect their final marks. To address this issue, this work starts by thoroughly exploring and analyzing two different datasets at two separate stages of course delivery (20 percent and 50 percent, respectively) using multiple graphical, statistical, and quantitative techniques. The feature analysis provides insights into the nature of the different features considered and helps in the choice of the machine learning algorithms and their parameters. Furthermore, this work proposes a systematic approach based on the Gini index and p-value to select a suitable ensemble learner from a combination of six potential machine learning algorithms. Experimental results show that the proposed ensemble models achieve high accuracy and a low false positive rate at all stages for both datasets.
2003.00187
Takehiko Ohkawa
Takehiko Ohkawa, Naoto Inoue, Hirokatsu Kataoka, Nakamasa Inoue
Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation
Accepted to ICPR2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unpaired image-to-image (I2I) translation has received considerable attention in pattern recognition and computer vision because of recent advancements in generative adversarial networks (GANs). However, due to the lack of explicit supervision, unpaired I2I models often fail to generate realistic images, especially in challenging datasets with different backgrounds and poses. Hence, stabilization is indispensable for GANs and applications of I2I translation. Herein, we propose Augmented Cyclic Consistency Regularization (ACCR), a novel regularization method for unpaired I2I translation. Our main idea is to enforce consistency regularization originating from semi-supervised learning on the discriminators leveraging real, fake, reconstructed, and augmented samples. We regularize the discriminators to output similar predictions when fed pairs of original and perturbed images. We qualitatively clarify why consistency regularization on fake and reconstructed samples works well. Quantitatively, our method outperforms the consistency regularized GAN (CR-GAN) in real-world translations and demonstrates efficacy against several data augmentation variants and cycle-consistent constraints.
[ { "created": "Sat, 29 Feb 2020 06:20:20 GMT", "version": "v1" }, { "created": "Mon, 12 Oct 2020 16:07:23 GMT", "version": "v2" } ]
2020-10-13
[ [ "Ohkawa", "Takehiko", "" ], [ "Inoue", "Naoto", "" ], [ "Kataoka", "Hirokatsu", "" ], [ "Inoue", "Nakamasa", "" ] ]
Unpaired image-to-image (I2I) translation has received considerable attention in pattern recognition and computer vision because of recent advancements in generative adversarial networks (GANs). However, due to the lack of explicit supervision, unpaired I2I models often fail to generate realistic images, especially in challenging datasets with different backgrounds and poses. Hence, stabilization is indispensable for GANs and applications of I2I translation. Herein, we propose Augmented Cyclic Consistency Regularization (ACCR), a novel regularization method for unpaired I2I translation. Our main idea is to enforce consistency regularization originating from semi-supervised learning on the discriminators leveraging real, fake, reconstructed, and augmented samples. We regularize the discriminators to output similar predictions when fed pairs of original and perturbed images. We qualitatively clarify why consistency regularization on fake and reconstructed samples works well. Quantitatively, our method outperforms the consistency regularized GAN (CR-GAN) in real-world translations and demonstrates efficacy against several data augmentation variants and cycle-consistent constraints.
1401.0245
Sujit Gath
S.J Gath and R.V Kulkarni
A Review: Expert System for Diagnosis of Myocardial Infarction
7 pages. arXiv admin note: text overlap with arXiv:1006.4544 by other authors
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A computer program capable of performing at a human-expert level in a narrow problem domain area is called an expert system. Management of uncertainty is an intrinsically important issue in the design of expert systems because much of the information in the knowledge base of a typical expert system is imprecise, incomplete, or not totally reliable. In this paper, the authors present a review of past work carried out by various researchers on the development of expert systems for the diagnosis of cardiac disease.
[ { "created": "Wed, 1 Jan 2014 03:59:22 GMT", "version": "v1" } ]
2014-01-03
[ [ "Gath", "S. J", "" ], [ "Kulkarni", "R. V", "" ] ]
A computer program capable of performing at a human-expert level in a narrow problem domain area is called an expert system. Management of uncertainty is an intrinsically important issue in the design of expert systems because much of the information in the knowledge base of a typical expert system is imprecise, incomplete, or not totally reliable. In this paper, the authors present a review of past work carried out by various researchers on the development of expert systems for the diagnosis of cardiac disease.
2004.05290
Ian Manchester
Max Revay, Ruigang Wang, Ian R. Manchester
A Convex Parameterization of Robust Recurrent Neural Networks
conference submission, 6 pages
null
null
null
cs.LG cs.SY eess.SY math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps. RNNs have excellent expressive power but lack the stability or robustness guarantees that are necessary for many applications. In this paper, we formulate convex sets of RNNs with stability and robustness guarantees. The guarantees are derived using incremental quadratic constraints and can ensure global exponential stability of all solutions, and bounds on incremental $ \ell_2 $ gain (the Lipschitz constant of the learned sequence-to-sequence mapping). Using an implicit model structure, we construct a parametrization of RNNs that is jointly convex in the model parameters and stability certificate. We prove that this model structure includes all previously proposed convex sets of stable RNNs as special cases, and also includes all stable linear dynamical systems. We illustrate the utility of the proposed model class in the context of non-linear system identification.
[ { "created": "Sat, 11 Apr 2020 03:12:42 GMT", "version": "v1" }, { "created": "Sat, 3 Oct 2020 08:48:04 GMT", "version": "v2" } ]
2020-10-06
[ [ "Revay", "Max", "" ], [ "Wang", "Ruigang", "" ], [ "Manchester", "Ian R.", "" ] ]
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps. RNNs have excellent expressive power but lack the stability or robustness guarantees that are necessary for many applications. In this paper, we formulate convex sets of RNNs with stability and robustness guarantees. The guarantees are derived using incremental quadratic constraints and can ensure global exponential stability of all solutions, and bounds on incremental $ \ell_2 $ gain (the Lipschitz constant of the learned sequence-to-sequence mapping). Using an implicit model structure, we construct a parametrization of RNNs that is jointly convex in the model parameters and stability certificate. We prove that this model structure includes all previously proposed convex sets of stable RNNs as special cases, and also includes all stable linear dynamical systems. We illustrate the utility of the proposed model class in the context of non-linear system identification.
1411.2684
Francisco Pe\~nu\~nuri
F. Penunuri, O. Carvente, M. A. Zambrano-Arjona, Carlos A. Cruz-Villar
Dual Algorithms
17 pages
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cubic spline interpolation method, the Runge--Kutta method, and the Newton--Raphson method are extended to dual versions (developed in the context of dual numbers). This extension allows the calculation of the derivatives of complicated compositions of functions which are not necessarily defined by a closed form expression. The code for the algorithms has been written in Fortran and some examples are presented. Among them, we use the dual Newton--Raphson method to obtain the derivatives of the output angle in the RRRCR spatial mechanism; we use the dual normal cubic spline interpolation algorithm to obtain the thermal diffusivity using photothermal techniques; and we use the dual Runge--Kutta method to obtain the derivatives of functions depending on the solution of the Duffing equation.
[ { "created": "Tue, 11 Nov 2014 02:35:50 GMT", "version": "v1" }, { "created": "Wed, 12 Nov 2014 13:58:02 GMT", "version": "v2" }, { "created": "Tue, 10 Jan 2017 19:39:39 GMT", "version": "v3" } ]
2017-01-12
[ [ "Penunuri", "F.", "" ], [ "Carvente", "O.", "" ], [ "Zambrano-Arjona", "M. A.", "" ], [ "Cruz-Villar", "Carlos A.", "" ] ]
The cubic spline interpolation method, the Runge--Kutta method, and the Newton--Raphson method are extended to dual versions (developed in the context of dual numbers). This extension allows the calculation of the derivatives of complicated compositions of functions which are not necessarily defined by a closed form expression. The code for the algorithms has been written in Fortran and some examples are presented. Among them, we use the dual Newton--Raphson method to obtain the derivatives of the output angle in the RRRCR spatial mechanism; we use the dual normal cubic spline interpolation algorithm to obtain the thermal diffusivity using photothermal techniques; and we use the dual Runge--Kutta method to obtain the derivatives of functions depending on the solution of the Duffing equation.
2009.09828
Eric Bonjour
Felipe Sanchez (ERPI), Davy Monticolo (ERPI), Eric Bonjour (ERPI), Jean-Pierre Mica\"elli
Use of Bayesian Network characteristics to link project management maturity and risk of project overcost
null
2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Nov 2018, Las Palmas de Gran Canaria, Spain. pp.420-426
10.1109/SITIS.2018.00071
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The project management field has the imperative to increase the probability of project success. Experts have developed several project management maturity models to assess and improve project outcomes. However, the current literature lacks models that correlate measured maturity with the expected probability of success. This paper uses the characteristics of Bayesian networks to formalize experts' knowledge and to extract knowledge from a project overcost database. It develops a method to estimate the impact of project management maturity on the risk of project overcost. A general framework is presented. An industrial case is used to illustrate the application of the method.
[ { "created": "Fri, 18 Sep 2020 08:00:25 GMT", "version": "v1" } ]
2020-09-22
[ [ "Sanchez", "Felipe", "", "ERPI" ], [ "Monticolo", "Davy", "", "ERPI" ], [ "Bonjour", "Eric", "", "ERPI" ], [ "Micaëlli", "Jean-Pierre", "" ] ]
The project management field has the imperative to increase the probability of project success. Experts have developed several project management maturity models to assess and improve project outcomes. However, the current literature lacks models that correlate measured maturity with the expected probability of success. This paper uses the characteristics of Bayesian networks to formalize experts' knowledge and to extract knowledge from a project overcost database. It develops a method to estimate the impact of project management maturity on the risk of project overcost. A general framework is presented. An industrial case is used to illustrate the application of the method.
2405.19644
Ryo Fujii
Ryo Fujii and Masashi Hatano and Hideo Saito and Hiroki Kajita
EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos
Early accepted by MICCAI 2024
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Surgical phase recognition has gained significant attention due to its potential to offer solutions to numerous demands of the modern operating room. However, most existing methods concentrate on minimally invasive surgery (MIS), leaving surgical phase recognition for open surgery understudied. This discrepancy is primarily attributed to the scarcity of publicly available open surgery video datasets for surgical phase recognition. To address this issue, we introduce a new egocentric open surgery video dataset for phase recognition, named EgoSurgery-Phase. This dataset comprises 15 hours of real open surgery videos spanning 9 distinct surgical phases, all captured using an egocentric camera attached to the surgeon's head. In addition to video, EgoSurgery-Phase offers eye gaze data. As far as we know, it is the first publicly available real open surgery video dataset for surgical phase recognition. Furthermore, inspired by the notable success of masked autoencoders (MAEs) in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). Since the regions where surgeons' gaze focuses are often critical for surgical phase recognition (e.g., the surgical field), in our GGMAE the gaze information acts as an empirical semantic-richness prior guiding the masking process, promoting better attention to semantically rich spatial regions. GGMAE significantly improves on the previous state-of-the-art recognition method (6.4% in Jaccard) and the masked autoencoder-based method (3.1% in Jaccard) on EgoSurgery-Phase. The dataset will be released at https://github.com/Fujiry0/EgoSurgery.
[ { "created": "Thu, 30 May 2024 02:53:19 GMT", "version": "v1" } ]
2024-05-31
[ [ "Fujii", "Ryo", "" ], [ "Hatano", "Masashi", "" ], [ "Saito", "Hideo", "" ], [ "Kajita", "Hiroki", "" ] ]
Surgical phase recognition has gained significant attention due to its potential to offer solutions to numerous demands of the modern operating room. However, most existing methods concentrate on minimally invasive surgery (MIS), leaving surgical phase recognition for open surgery understudied. This discrepancy is primarily attributed to the scarcity of publicly available open surgery video datasets for surgical phase recognition. To address this issue, we introduce a new egocentric open surgery video dataset for phase recognition, named EgoSurgery-Phase. This dataset comprises 15 hours of real open surgery videos spanning 9 distinct surgical phases, all captured using an egocentric camera attached to the surgeon's head. In addition to video, EgoSurgery-Phase offers eye gaze data. As far as we know, it is the first publicly available real open surgery video dataset for surgical phase recognition. Furthermore, inspired by the notable success of masked autoencoders (MAEs) in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). Since the regions where surgeons' gaze focuses are often critical for surgical phase recognition (e.g., the surgical field), in our GGMAE the gaze information acts as an empirical semantic-richness prior guiding the masking process, promoting better attention to semantically rich spatial regions. GGMAE significantly improves on the previous state-of-the-art recognition method (6.4% in Jaccard) and the masked autoencoder-based method (3.1% in Jaccard) on EgoSurgery-Phase. The dataset will be released at https://github.com/Fujiry0/EgoSurgery.
0710.4695
EDA Publishing Association
Alan Mishchenko, Robert K. Brayton
SAT-Based Complete Don't-Care Computation for Network Optimization
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)
null
null
cs.LO
null
This paper describes an improved approach to Boolean network optimization using internal don't-cares. The improvements concern the type of don't-cares computed, their scope, and the computation method. Instead of the traditionally used compatible observability don't-cares (CODCs), we introduce and justify the use of complete don't-cares (CDCs). To ensure the robustness of the don't-care computation for very large industrial networks, an optional windowing scheme is implemented that computes substantial subsets of the CDCs in reasonable time. Finally, we give a SAT-based don't-care computation algorithm that is more efficient than BDD-based algorithms. Experimental results confirm that these improvements work well in practice. Complete don't-cares allow for a reduction in the number of literals compared to the CODCs. Windowing guarantees robustness, even for very large benchmarks on which previous methods could not be applied. SAT reduces the runtime and enhances robustness, making don't-cares affordable for a variety of other Boolean methods applied to the network.
[ { "created": "Thu, 25 Oct 2007 09:15:10 GMT", "version": "v1" } ]
2011-11-09
[ [ "Mishchenko", "Alan", "" ], [ "Brayton", "Robert K.", "" ] ]
This paper describes an improved approach to Boolean network optimization using internal don't-cares. The improvements concern the type of don't-cares computed, their scope, and the computation method. Instead of the traditionally used compatible observability don't-cares (CODCs), we introduce and justify the use of complete don't-cares (CDCs). To ensure the robustness of the don't-care computation for very large industrial networks, an optional windowing scheme is implemented that computes substantial subsets of the CDCs in reasonable time. Finally, we give a SAT-based don't-care computation algorithm that is more efficient than BDD-based algorithms. Experimental results confirm that these improvements work well in practice. Complete don't-cares allow for a reduction in the number of literals compared to the CODCs. Windowing guarantees robustness, even for very large benchmarks on which previous methods could not be applied. SAT reduces the runtime and enhances robustness, making don't-cares affordable for a variety of other Boolean methods applied to the network.
2208.12523
Martin Glauer
Martin Glauer, Robert West, Susan Michie, Janna Hastings
ESC-Rules: Explainable, Semantically Constrained Rule Sets
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We describe a novel approach to explainable prediction of a continuous variable based on learning fuzzy weighted rules. Our model trains a set of weighted rules to maximise prediction accuracy and minimise an ontology-based 'semantic loss' function including user-specified constraints on the rules that should be learned in order to maximise the explainability of the resulting rule set from a user perspective. This system fuses quantitative sub-symbolic learning with symbolic learning and constraints based on domain knowledge. We illustrate our system on a case study in predicting the outcomes of behavioural interventions for smoking cessation, and show that it outperforms other interpretable approaches, achieving performance close to that of a deep learning model, while offering transparent explainability that is an essential requirement for decision-makers in the health domain.
[ { "created": "Fri, 26 Aug 2022 09:29:30 GMT", "version": "v1" } ]
2022-08-29
[ [ "Glauer", "Martin", "" ], [ "West", "Robert", "" ], [ "Michie", "Susan", "" ], [ "Hastings", "Janna", "" ] ]
We describe a novel approach to explainable prediction of a continuous variable based on learning fuzzy weighted rules. Our model trains a set of weighted rules to maximise prediction accuracy and minimise an ontology-based 'semantic loss' function including user-specified constraints on the rules that should be learned in order to maximise the explainability of the resulting rule set from a user perspective. This system fuses quantitative sub-symbolic learning with symbolic learning and constraints based on domain knowledge. We illustrate our system on a case study in predicting the outcomes of behavioural interventions for smoking cessation, and show that it outperforms other interpretable approaches, achieving performance close to that of a deep learning model, while offering transparent explainability that is an essential requirement for decision-makers in the health domain.
2205.04812
Shane Gilroy
Shane Gilroy, Darragh Mullins, Edward Jones, Ashkan Parsi and Martin Glavin
The Impact of Partial Occlusion on Pedestrian Detectability
This research has been published under the title "Replacing the human driver: An objective benchmark for occluded pedestrian detection" in Biomimetic Intelligence and Robotics https://doi.org/10.1016/j.birob.2023.100115
Biomimetic Intelligence and Robotics. 2023 Jul 18:100115
10.1016/j.birob.2023.100115
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Robust detection of vulnerable road users is a safety-critical requirement for the deployment of autonomous vehicles in heterogeneous traffic. One of the most complex outstanding challenges is that of partial occlusion, where a target object is only partially available to the sensor due to obstruction by another foreground object. A number of leading pedestrian detection benchmarks provide annotation for partial occlusion; however, each benchmark varies greatly in its definition of the occurrence and severity of occlusion. Recent research demonstrates that a high degree of subjectivity is used to classify occlusion level in these cases, and occlusion is typically categorized into 2 to 3 broad categories such as partially and heavily occluded. This can lead to inaccurate or inconsistent reporting of pedestrian detection model performance depending on which benchmark is used. This research introduces a novel, objective benchmark for partially occluded pedestrian detection to facilitate the objective characterization of pedestrian detection models. Characterization is carried out on seven popular pedestrian detection models for a range of occlusion levels from 0-99%, in order to demonstrate the efficacy and increased analysis capabilities of the proposed characterization method. Results demonstrate that pedestrian detection performance degrades, and the number of false negative detections increases, as pedestrian occlusion level increases. Of the seven popular pedestrian detection routines characterized, CenterNet has the greatest overall performance, followed by SSDlite. RetinaNet has the lowest overall detection performance across the range of occlusion levels.
[ { "created": "Tue, 10 May 2022 11:21:18 GMT", "version": "v1" }, { "created": "Wed, 11 May 2022 10:23:05 GMT", "version": "v2" }, { "created": "Thu, 12 May 2022 08:02:51 GMT", "version": "v3" }, { "created": "Tue, 31 May 2022 11:30:22 GMT", "version": "v4" }, { "created": "Thu, 22 Jun 2023 14:21:56 GMT", "version": "v5" }, { "created": "Thu, 27 Jul 2023 09:57:21 GMT", "version": "v6" } ]
2023-07-28
[ [ "Gilroy", "Shane", "" ], [ "Mullins", "Darragh", "" ], [ "Jones", "Edward", "" ], [ "Parsi", "Ashkan", "" ], [ "Glavin", "Martin", "" ] ]
Robust detection of vulnerable road users is a safety-critical requirement for the deployment of autonomous vehicles in heterogeneous traffic. One of the most complex outstanding challenges is that of partial occlusion, where a target object is only partially available to the sensor due to obstruction by another foreground object. A number of leading pedestrian detection benchmarks provide annotation for partial occlusion; however, each benchmark varies greatly in its definition of the occurrence and severity of occlusion. Recent research demonstrates that a high degree of subjectivity is used to classify occlusion level in these cases, and occlusion is typically categorized into 2 to 3 broad categories such as partially and heavily occluded. This can lead to inaccurate or inconsistent reporting of pedestrian detection model performance depending on which benchmark is used. This research introduces a novel, objective benchmark for partially occluded pedestrian detection to facilitate the objective characterization of pedestrian detection models. Characterization is carried out on seven popular pedestrian detection models for a range of occlusion levels from 0-99%, in order to demonstrate the efficacy and increased analysis capabilities of the proposed characterization method. Results demonstrate that pedestrian detection performance degrades, and the number of false negative detections increases, as pedestrian occlusion level increases. Of the seven popular pedestrian detection routines characterized, CenterNet has the greatest overall performance, followed by SSDlite. RetinaNet has the lowest overall detection performance across the range of occlusion levels.
2303.09484
Nima Hatami
Nima Hatami and Laura Mechtouff and David Rousseau and Tae-Hee Cho and Omer Eker and Yves Berthezene and Carole Frindel
A Novel Autoencoders-LSTM Model for Stroke Outcome Prediction using Multimodal MRI Data
The IEEE International Symposium on Biomedical Imaging (ISBI). arXiv admin note: text overlap with arXiv:2205.05545
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Patient outcome prediction is critical in the management of ischemic stroke. In this paper, a novel machine learning model is proposed for stroke outcome prediction using multimodal Magnetic Resonance Imaging (MRI). The proposed model consists of two serial levels of Autoencoders (AEs), where different AEs at level 1 are used for learning unimodal features from different MRI modalities and an AE at level 2 is used to combine the unimodal features into compressed multimodal features. The sequences of multimodal features of a given patient are then used by an LSTM network for predicting the outcome score. The proposed AE2-LSTM model proves to be an effective approach for better addressing the multimodality and volumetric nature of MRI data. Experimental results show that the proposed AE2-LSTM outperforms the existing state-of-the-art models by achieving the highest AUC (0.71) and the lowest MAE (0.34).
[ { "created": "Thu, 16 Mar 2023 17:00:45 GMT", "version": "v1" } ]
2023-03-17
[ [ "Hatami", "Nima", "" ], [ "Mechtouff", "Laura", "" ], [ "Rousseau", "David", "" ], [ "Cho", "Tae-Hee", "" ], [ "Eker", "Omer", "" ], [ "Berthezene", "Yves", "" ], [ "Frindel", "Carole", "" ] ]
Patient outcome prediction is critical in the management of ischemic stroke. In this paper, a novel machine learning model is proposed for stroke outcome prediction using multimodal Magnetic Resonance Imaging (MRI). The proposed model consists of two serial levels of Autoencoders (AEs), where different AEs at level 1 are used for learning unimodal features from different MRI modalities and an AE at level 2 is used to combine the unimodal features into compressed multimodal features. The sequences of multimodal features of a given patient are then used by an LSTM network for predicting the outcome score. The proposed AE2-LSTM model proves to be an effective approach for better addressing the multimodality and volumetric nature of MRI data. Experimental results show that the proposed AE2-LSTM outperforms the existing state-of-the-art models by achieving the highest AUC (0.71) and the lowest MAE (0.34).
2205.02058
Rodrigo Mira
Rodrigo Mira, Alexandros Haliassos, Stavros Petridis, Bj\"orn W. Schuller and Maja Pantic
SVTS: Scalable Video-to-Speech Synthesis
accepted to INTERSPEECH 2022 (Oral Presentation)
null
null
null
cs.SD cs.CV cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video-to-speech synthesis (also known as lip-to-speech) refers to the translation of silent lip movements into the corresponding audio. This task has received an increasing amount of attention due to its self-supervised nature (i.e., can be trained without manual labelling) combined with the ever-growing collection of audio-visual data available online. Despite these strong motivations, contemporary video-to-speech works focus mainly on small- to medium-sized corpora with substantial constraints in both vocabulary and setting. In this work, we introduce a scalable video-to-speech framework consisting of two components: a video-to-spectrogram predictor and a pre-trained neural vocoder, which converts the mel-frequency spectrograms into waveform audio. We achieve state-of-the-art results for GRID and considerably outperform previous approaches on LRW. More importantly, by focusing on spectrogram prediction using a simple feedforward model, we can efficiently and effectively scale our method to very large and unconstrained datasets: To the best of our knowledge, we are the first to show intelligible results on the challenging LRS3 dataset.
[ { "created": "Wed, 4 May 2022 13:34:07 GMT", "version": "v1" }, { "created": "Mon, 15 Aug 2022 18:38:37 GMT", "version": "v2" } ]
2022-08-17
[ [ "Mira", "Rodrigo", "" ], [ "Haliassos", "Alexandros", "" ], [ "Petridis", "Stavros", "" ], [ "Schuller", "Björn W.", "" ], [ "Pantic", "Maja", "" ] ]
Video-to-speech synthesis (also known as lip-to-speech) refers to the translation of silent lip movements into the corresponding audio. This task has received an increasing amount of attention due to its self-supervised nature (i.e., can be trained without manual labelling) combined with the ever-growing collection of audio-visual data available online. Despite these strong motivations, contemporary video-to-speech works focus mainly on small- to medium-sized corpora with substantial constraints in both vocabulary and setting. In this work, we introduce a scalable video-to-speech framework consisting of two components: a video-to-spectrogram predictor and a pre-trained neural vocoder, which converts the mel-frequency spectrograms into waveform audio. We achieve state-of-the-art results for GRID and considerably outperform previous approaches on LRW. More importantly, by focusing on spectrogram prediction using a simple feedforward model, we can efficiently and effectively scale our method to very large and unconstrained datasets: To the best of our knowledge, we are the first to show intelligible results on the challenging LRS3 dataset.
1305.0423
Somayeh Danafar
Somayeh Danafar, Paola M.V. Rancoita, Tobias Glasmachers, Kevin Whittingstall, Juergen Schmidhuber
Testing Hypotheses by Regularized Maximum Mean Discrepancy
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Do two data samples come from different distributions? Recent studies of this fundamental problem focused on embedding probability distributions into sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to compare distributions by the distance between their embeddings. We show that Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based hypothesis testing, yields substantial improvements even when sample sizes are small, and excels at hypothesis tests involving multiple comparisons with power control. We derive asymptotic distributions under the null and alternative hypotheses, and assess power control. Outstanding results are obtained on challenging EEG data, MNIST, the Berkeley Covertype, and the Flare-Solar datasets.
[ { "created": "Thu, 2 May 2013 13:03:53 GMT", "version": "v1" } ]
2013-05-03
[ [ "Danafar", "Somayeh", "" ], [ "Rancoita", "Paola M. V.", "" ], [ "Glasmachers", "Tobias", "" ], [ "Whittingstall", "Kevin", "" ], [ "Schmidhuber", "Juergen", "" ] ]
Do two data samples come from different distributions? Recent studies of this fundamental problem focused on embedding probability distributions into sufficiently rich characteristic Reproducing Kernel Hilbert Spaces (RKHSs), to compare distributions by the distance between their embeddings. We show that Regularized Maximum Mean Discrepancy (RMMD), our novel measure for kernel-based hypothesis testing, yields substantial improvements even when sample sizes are small, and excels at hypothesis tests involving multiple comparisons with power control. We derive asymptotic distributions under the null and alternative hypotheses, and assess power control. Outstanding results are obtained on challenging EEG data, MNIST, the Berkeley Covertype, and the Flare-Solar datasets.
1905.12198
Jiangjie Chen
Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li and Yanghua Xiao
Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation
ACL 2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019
10.18653/v1/P19-1196
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A type description is a succinct noun compound which helps humans and machines to quickly grasp the informative and distinctive information of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, thus calling for automatic methods to supplement such information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in generated texts. To solve these problems, we propose a head-modifier template-based method to ensure the readability and data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially compared with baselines and achieves state-of-the-art performance on both datasets.
[ { "created": "Wed, 29 May 2019 03:32:38 GMT", "version": "v1" } ]
2019-10-09
[ [ "Chen", "Jiangjie", "" ], [ "Wang", "Ao", "" ], [ "Jiang", "Haiyun", "" ], [ "Feng", "Suo", "" ], [ "Li", "Chenguang", "" ], [ "Xiao", "Yanghua", "" ] ]
A type description is a succinct noun compound which helps humans and machines to quickly grasp the informative and distinctive information of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, thus calling for automatic methods to supplement such information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in generated texts. To solve these problems, we propose a head-modifier template-based method to ensure the readability and data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially compared with baselines and achieves state-of-the-art performance on both datasets.
1912.02583
Alberto Sonnino
Alberto Sonnino
FMPC: Secure Multiparty Computation from Fourier Series and Parseval's Identity
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FMPC is a novel multiparty computation protocol of arithmetic circuits based on secret-sharing, capable of computing multiplication of secrets with no online communication; it thus enjoys constant online communication latency in the size of the circuit. FMPC is based on the application of Fourier series to Parseval's identity, and introduces the first generalization of Parseval's identity for Fourier series applicable to an arbitrary number of inputs. FMPC operates in a setting where users wish to compute a function over some secret inputs by submitting the computation to a set of nodes, but is only suitable for the evaluation of low-depth arithmetic circuits. FMPC relies on an offline phase consisting of traditional preprocessing as introduced by established protocols like SPDZ, and innovates on the online phase that mainly consists of each node locally evaluating specific functions. FMPC paves the way for a new kind of multiparty computation protocols capable of computing multiplication of secrets as an alternative to circuit garbling and the traditional algebra introduced by Donald Beaver in 1991.
[ { "created": "Thu, 5 Dec 2019 14:11:35 GMT", "version": "v1" } ]
2019-12-06
[ [ "Sonnino", "Alberto", "" ] ]
FMPC is a novel multiparty computation protocol of arithmetic circuits based on secret-sharing, capable of computing multiplication of secrets with no online communication; it thus enjoys constant online communication latency in the size of the circuit. FMPC is based on the application of Fourier series to Parseval's identity, and introduces the first generalization of Parseval's identity for Fourier series applicable to an arbitrary number of inputs. FMPC operates in a setting where users wish to compute a function over some secret inputs by submitting the computation to a set of nodes, but is only suitable for the evaluation of low-depth arithmetic circuits. FMPC relies on an offline phase consisting of traditional preprocessing as introduced by established protocols like SPDZ, and innovates on the online phase that mainly consists of each node locally evaluating specific functions. FMPC paves the way for a new kind of multiparty computation protocols capable of computing multiplication of secrets as an alternative to circuit garbling and the traditional algebra introduced by Donald Beaver in 1991.
2301.13636
Ella Tamir
Ella Tamir, Martin Trapp, Arno Solin
Transport with Support: Data-Conditional Diffusion Bridges
27 pages, 11 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamic Schr\"odinger bridge problem provides an appealing setting for solving constrained time-series data generation tasks posed as optimal transport problems. It consists of learning non-linear diffusion processes using efficient iterative solvers. Recent works have demonstrated state-of-the-art results (e.g. in modelling single-cell embryo RNA sequences or sampling from complex posteriors) but are limited to learning bridges with only initial and terminal constraints. Our work extends this paradigm by proposing the Iterative Smoothing Bridge (ISB). We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling the generation of constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints. We assess the effectiveness of our method on synthetic and real-world data generation tasks and we show that the ISB generalises well to high-dimensional data, is computationally efficient, and provides accurate estimates of the marginals at intermediate and terminal times.
[ { "created": "Tue, 31 Jan 2023 13:50:16 GMT", "version": "v1" }, { "created": "Fri, 24 Nov 2023 09:43:24 GMT", "version": "v2" } ]
2023-11-27
[ [ "Tamir", "Ella", "" ], [ "Trapp", "Martin", "" ], [ "Solin", "Arno", "" ] ]
The dynamic Schr\"odinger bridge problem provides an appealing setting for solving constrained time-series data generation tasks posed as optimal transport problems. It consists of learning non-linear diffusion processes using efficient iterative solvers. Recent works have demonstrated state-of-the-art results (e.g. in modelling single-cell embryo RNA sequences or sampling from complex posteriors) but are limited to learning bridges with only initial and terminal constraints. Our work extends this paradigm by proposing the Iterative Smoothing Bridge (ISB). We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling the generation of constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints. We assess the effectiveness of our method on synthetic and real-world data generation tasks and we show that the ISB generalises well to high-dimensional data, is computationally efficient, and provides accurate estimates of the marginals at intermediate and terminal times.
2404.15084
Yuta Saito
Yuta Saito, Masahiro Nomura
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It
IJCAI'24
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a growing interest in off-policy evaluation in the literature such as recommender systems and personalized medicine. We have so far seen significant progress in developing estimators aimed at accurately estimating the effectiveness of counterfactual policies based on biased logged data. However, there are many cases where those estimators are used not only to evaluate the value of decision making policies but also to search for the best hyperparameters from a large candidate space. This work explores the latter hyperparameter optimization (HPO) task for off-policy learning. We empirically show that naively applying an unbiased estimator of the generalization performance as a surrogate objective in HPO can cause an unexpected failure, merely pursuing hyperparameters whose generalization performance is greatly overestimated. We then propose simple and computationally efficient corrections to the typical HPO procedure to deal with the aforementioned issues simultaneously. Empirical investigations demonstrate the effectiveness of our proposed HPO algorithm in situations where the typical procedure fails severely.
[ { "created": "Tue, 23 Apr 2024 14:34:16 GMT", "version": "v1" } ]
2024-04-24
[ [ "Saito", "Yuta", "" ], [ "Nomura", "Masahiro", "" ] ]
There has been a growing interest in off-policy evaluation in the literature such as recommender systems and personalized medicine. We have so far seen significant progress in developing estimators aimed at accurately estimating the effectiveness of counterfactual policies based on biased logged data. However, there are many cases where those estimators are used not only to evaluate the value of decision making policies but also to search for the best hyperparameters from a large candidate space. This work explores the latter hyperparameter optimization (HPO) task for off-policy learning. We empirically show that naively applying an unbiased estimator of the generalization performance as a surrogate objective in HPO can cause an unexpected failure, merely pursuing hyperparameters whose generalization performance is greatly overestimated. We then propose simple and computationally efficient corrections to the typical HPO procedure to deal with the aforementioned issues simultaneously. Empirical investigations demonstrate the effectiveness of our proposed HPO algorithm in situations where the typical procedure fails severely.
2107.11707
Nasib Ullah
Nasib Ullah, Partha Pratim Mohanta
Boosting Video Captioning with Dynamic Loss Network
8 pages, 4 figures, Preprint
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Video captioning is one of the challenging problems at the intersection of vision and language, having many real-life applications in video retrieval, video surveillance, assisting visually challenged people, human-machine interfaces, and many more. Recent deep learning based methods have shown promising results but still lag behind other vision tasks (such as image classification, object detection). A significant drawback with existing video captioning methods is that they are optimized over a cross-entropy loss function, which is uncorrelated to the de facto evaluation metrics (BLEU, METEOR, CIDER, ROUGE). In other words, cross-entropy is not a proper surrogate of the true loss function for video captioning. To mitigate this, methods like REINFORCE, Actor-Critic, and Minimum Risk Training (MRT) have been applied but have limitations and are not very effective. This paper proposes an alternate solution by introducing a dynamic loss network (DLN), providing an additional feedback signal that reflects the evaluation metrics directly. Our solution proves to be more efficient than other solutions and can be easily adapted to similar tasks. Our results on Microsoft Research Video Description Corpus (MSVD) and MSR-Video to Text (MSRVTT) datasets outperform previous methods.
[ { "created": "Sun, 25 Jul 2021 01:32:02 GMT", "version": "v1" }, { "created": "Mon, 2 Aug 2021 02:37:42 GMT", "version": "v2" }, { "created": "Tue, 1 Feb 2022 19:17:11 GMT", "version": "v3" } ]
2022-02-03
[ [ "Ullah", "Nasib", "" ], [ "Mohanta", "Partha Pratim", "" ] ]
Video captioning is one of the challenging problems at the intersection of vision and language, having many real-life applications in video retrieval, video surveillance, assisting visually challenged people, human-machine interfaces, and many more. Recent deep learning based methods have shown promising results but still lag behind other vision tasks (such as image classification, object detection). A significant drawback with existing video captioning methods is that they are optimized over a cross-entropy loss function, which is uncorrelated to the de facto evaluation metrics (BLEU, METEOR, CIDER, ROUGE). In other words, cross-entropy is not a proper surrogate of the true loss function for video captioning. To mitigate this, methods like REINFORCE, Actor-Critic, and Minimum Risk Training (MRT) have been applied but have limitations and are not very effective. This paper proposes an alternate solution by introducing a dynamic loss network (DLN), providing an additional feedback signal that reflects the evaluation metrics directly. Our solution proves to be more efficient than other solutions and can be easily adapted to similar tasks. Our results on Microsoft Research Video Description Corpus (MSVD) and MSR-Video to Text (MSRVTT) datasets outperform previous methods.
2407.17765
Md Al Amin
Md Al Amin, Rushabh Shah, Hemanth Tummala, and Indrajit Ray
Utilizing Blockchain and Smart Contracts for Enhanced Fraud Prevention and Minimization in Health Insurance through Multi-Signature Claim Processing
2024 IEEE 4th International Conference on Emerging Trends in Networks and Computer Communications (ETNCC 2024)
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Healthcare insurance provides financial support to access medical services for patients while ensuring timely and guaranteed payment for providers. Insurance fraud poses a significant challenge to insurance companies and policyholders, leading to increased costs and compromised healthcare treatment and service delivery. Most frauds, like phantom billing, upcoding, and unbundling, happen due to the lack of required entity participation. Also, claim activities are not transparent and accountable. Fraud can be prevented and minimized by involving every entity and making actions transparent and accountable. This paper proposes a blockchain-powered smart contract-based insurance claim processing mechanism to prevent and minimize fraud in response to this prevailing issue. All entities patients, providers, and insurance companies actively participate in the claim submission, approval, and acknowledgment process through a multi-signature technique. Also, every activity is captured and recorded in the blockchain using smart contracts to make every action transparent and accountable so that no entity can deny its actions and responsibilities. Blockchains' immutable storage property and strong integrity guarantee that recorded activities are not modified. As healthcare systems and insurance companies continue to deal with fraud challenges, this proposed approach holds the potential to significantly reduce fraudulent activities, ultimately benefiting both insurers and policyholders.
[ { "created": "Thu, 25 Jul 2024 04:42:31 GMT", "version": "v1" } ]
2024-07-26
[ [ "Amin", "Md Al", "" ], [ "Shah", "Rushabh", "" ], [ "Tummala", "Hemanth", "" ], [ "Ray", "Indrajit", "" ] ]
Healthcare insurance provides financial support to access medical services for patients while ensuring timely and guaranteed payment for providers. Insurance fraud poses a significant challenge to insurance companies and policyholders, leading to increased costs and compromised healthcare treatment and service delivery. Most frauds, like phantom billing, upcoding, and unbundling, happen due to the lack of required entity participation. Also, claim activities are not transparent and accountable. Fraud can be prevented and minimized by involving every entity and making actions transparent and accountable. This paper proposes a blockchain-powered smart contract-based insurance claim processing mechanism to prevent and minimize fraud in response to this prevailing issue. All entities patients, providers, and insurance companies actively participate in the claim submission, approval, and acknowledgment process through a multi-signature technique. Also, every activity is captured and recorded in the blockchain using smart contracts to make every action transparent and accountable so that no entity can deny its actions and responsibilities. Blockchains' immutable storage property and strong integrity guarantee that recorded activities are not modified. As healthcare systems and insurance companies continue to deal with fraud challenges, this proposed approach holds the potential to significantly reduce fraudulent activities, ultimately benefiting both insurers and policyholders.
1905.09052
Gloria Feher
Gloria Feher, Andreas Spitz, Michael Gertz
Retrieving Multi-Entity Associations: An Evaluation of Combination Modes for Word Embeddings
4 pages; Accepted at SIGIR'19
null
10.1145/3331184.3331366
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Word embeddings have gained significant attention as learnable representations of semantic relations between words, and have been shown to improve upon the results of traditional word representations. However, little effort has been devoted to using embeddings for the retrieval of entity associations beyond pairwise relations. In this paper, we use popular embedding methods to train vector representations of an entity-annotated news corpus, and evaluate their performance for the task of predicting entity participation in news events versus a traditional word cooccurrence network as a baseline. To support queries for events with multiple participating entities, we test a number of combination modes for the embedding vectors. While we find that even the best combination modes for word embeddings do not quite reach the performance of the full cooccurrence network, especially for rare entities, we observe that different embedding methods model different types of relations, thereby indicating the potential for ensemble methods.
[ { "created": "Wed, 22 May 2019 10:13:48 GMT", "version": "v1" } ]
2019-05-23
[ [ "Feher", "Gloria", "" ], [ "Spitz", "Andreas", "" ], [ "Gertz", "Michael", "" ] ]
Word embeddings have gained significant attention as learnable representations of semantic relations between words, and have been shown to improve upon the results of traditional word representations. However, little effort has been devoted to using embeddings for the retrieval of entity associations beyond pairwise relations. In this paper, we use popular embedding methods to train vector representations of an entity-annotated news corpus, and evaluate their performance for the task of predicting entity participation in news events versus a traditional word cooccurrence network as a baseline. To support queries for events with multiple participating entities, we test a number of combination modes for the embedding vectors. While we find that even the best combination modes for word embeddings do not quite reach the performance of the full cooccurrence network, especially for rare entities, we observe that different embedding methods model different types of relations, thereby indicating the potential for ensemble methods.
2306.08073
Dening Lu
Dening Lu, Jun Zhou, Kyle Yilin Gao, Dilong Li, Jing Du, Linlin Xu, Jonathan Li
Dynamic Clustering Transformer Network for Point Cloud Segmentation
8 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Point cloud segmentation is one of the most important tasks in computer vision with widespread scientific, industrial, and commercial applications. The research thereof has resulted in many breakthroughs in 3D object and scene understanding. Previous methods typically utilized hierarchical architectures for feature representation. However, the commonly used sampling and grouping methods in hierarchical networks are only based on point-wise three-dimensional coordinates, ignoring local semantic homogeneity of point clusters. Additionally, the prevalent Farthest Point Sampling (FPS) method is often a computational bottleneck. To address these issues, we propose a novel 3D point cloud representation network, called Dynamic Clustering Transformer Network (DCTNet). It has an encoder-decoder architecture, allowing for both local and global feature learning. Specifically, we propose novel semantic feature-based dynamic sampling and clustering methods in the encoder, which enables the model to be aware of local semantic homogeneity for local feature aggregation. Furthermore, in the decoder, we propose an efficient semantic feature-guided upsampling method. Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset, verifying the performance of DCTNet across a wide variety of practical engineering applications. The inference speed of DCTNet is 3.8-16.8$\times$ faster than existing State-of-the-Art (SOTA) models on the ShapeNet dataset, while achieving an instance-wise mIoU of $86.6\%$, the current top score. Our method similarly outperforms previous methods on the other datasets, verifying it as the new State-of-the-Art in point cloud segmentation.
[ { "created": "Tue, 30 May 2023 01:11:05 GMT", "version": "v1" } ]
2023-06-16
[ [ "Lu", "Dening", "" ], [ "Zhou", "Jun", "" ], [ "Gao", "Kyle Yilin", "" ], [ "Li", "Dilong", "" ], [ "Du", "Jing", "" ], [ "Xu", "Linlin", "" ], [ "Li", "Jonathan", "" ] ]
Point cloud segmentation is one of the most important tasks in computer vision with widespread scientific, industrial, and commercial applications. The research thereof has resulted in many breakthroughs in 3D object and scene understanding. Previous methods typically utilized hierarchical architectures for feature representation. However, the commonly used sampling and grouping methods in hierarchical networks are only based on point-wise three-dimensional coordinates, ignoring local semantic homogeneity of point clusters. Additionally, the prevalent Farthest Point Sampling (FPS) method is often a computational bottleneck. To address these issues, we propose a novel 3D point cloud representation network, called Dynamic Clustering Transformer Network (DCTNet). It has an encoder-decoder architecture, allowing for both local and global feature learning. Specifically, we propose novel semantic feature-based dynamic sampling and clustering methods in the encoder, which enables the model to be aware of local semantic homogeneity for local feature aggregation. Furthermore, in the decoder, we propose an efficient semantic feature-guided upsampling method. Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset, verifying the performance of DCTNet across a wide variety of practical engineering applications. The inference speed of DCTNet is 3.8-16.8$\times$ faster than existing State-of-the-Art (SOTA) models on the ShapeNet dataset, while achieving an instance-wise mIoU of $86.6\%$, the current top score. Our method similarly outperforms previous methods on the other datasets, verifying it as the new State-of-the-Art in point cloud segmentation.
1811.12599
Hongwei Lin
Chuanfeng Hu, Hongwei Lin
Gregory Solid Construction for Polyhedral Volume Parameterization by Sparse Optimization
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In isogeometric analysis, it is frequently required to handle the geometric models enclosed by four-sided or non-four-sided boundary patches, such as trimmed surfaces. In this paper, we develop a Gregory solid based method to parameterize those models. First, we extend the Gregory patch representation to the trivariate Gregory solid representation. Second, the trivariate Gregory solid representation is employed to interpolate the boundary patches of a geometric model, thus generating the polyhedral volume parametrization. To improve the regularity of the polyhedral volume parametrization, we formulate the construction of the trivariate Gregory solid as a sparse optimization problem, where the optimization objective function is a linear combination of some terms, including a sparse term aiming to reduce the negative Jacobian area of the Gregory solid. Then, the alternating direction method of multipliers (ADMM) is used to solve the sparse optimization problem. Numerous experimental examples illustrated in this paper demonstrate the effectiveness and efficiency of the developed method.
[ { "created": "Fri, 30 Nov 2018 03:27:11 GMT", "version": "v1" } ]
2018-12-03
[ [ "Hu", "Chuanfeng", "" ], [ "Lin", "Hongwei", "" ] ]
In isogeometric analysis, it is frequently required to handle the geometric models enclosed by four-sided or non-four-sided boundary patches, such as trimmed surfaces. In this paper, we develop a Gregory solid based method to parameterize those models. First, we extend the Gregory patch representation to the trivariate Gregory solid representation. Second, the trivariate Gregory solid representation is employed to interpolate the boundary patches of a geometric model, thus generating the polyhedral volume parametrization. To improve the regularity of the polyhedral volume parametrization, we formulate the construction of the trivariate Gregory solid as a sparse optimization problem, where the optimization objective function is a linear combination of some terms, including a sparse term aiming to reduce the negative Jacobian area of the Gregory solid. Then, the alternating direction method of multipliers (ADMM) is used to solve the sparse optimization problem. Numerous experimental examples illustrated in this paper demonstrate the effectiveness and efficiency of the developed method.
1806.06394
Saeid Hosseini
Leila Khalatbari, Mohammad Reza Kangavari, Saeid Hosseini, Hongzhi Yin, Ngai-Man Cheung
MCP: a Multi-Component learning machine to Predict protein secondary structure
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The gene or DNA sequence in every cell does not control genetic properties on its own; rather, this is done through translation of DNA into protein and subsequent formation of a certain 3D structure. The biological function of a protein is tightly connected to its specific 3D structure. Prediction of the protein secondary structure is a crucial intermediate step towards elucidating its 3D structure and function. Traditional experimental methods for prediction of protein structure are expensive and time-consuming. Therefore, various machine learning approaches have been proposed to predict the protein secondary structure. Nevertheless, the average accuracy of the suggested solutions has hardly reached beyond 80%. The possible underlying reasons are the ambiguous sequence-structure relation, noise in input protein data, class imbalance, and the high dimensionality of the encoding schemes that represent the protein sequence. In this paper, we propose an accurate multi-component prediction machine to overcome the challenges of protein structure prediction. We devise a multi-component designation to address the high complexity challenge in sequence-structure relation. Furthermore, we utilize a compound string dissimilarity measure to directly interpret protein sequence content and avoid information loss. In order to improve the accuracy, we employ two different classifiers including support vector machine and fuzzy nearest neighbor and collectively aggregate the classification outcomes to infer the final protein secondary structures. We conduct comprehensive experiments to compare our model with the current state-of-the-art approaches. The experimental results demonstrate that given a set of input sequences, our multi-component framework can accurately predict the protein structure. Nevertheless, the effectiveness of our unified model can be further enhanced through framework configuration.
[ { "created": "Sun, 17 Jun 2018 15:18:31 GMT", "version": "v1" }, { "created": "Sat, 30 Jun 2018 12:00:37 GMT", "version": "v2" }, { "created": "Thu, 13 Sep 2018 12:53:38 GMT", "version": "v3" }, { "created": "Wed, 29 May 2019 14:41:33 GMT", "version": "v4" } ]
2019-05-30
[ [ "Khalatbari", "Leila", "" ], [ "Kangavari", "Mohammad Reza", "" ], [ "Hosseini", "Saeid", "" ], [ "Yin", "Hongzhi", "" ], [ "Cheung", "Ngai-Man", "" ] ]
The gene or DNA sequence in every cell does not control genetic properties on its own; rather, this is done through translation of DNA into protein and subsequent formation of a certain 3D structure. The biological function of a protein is tightly connected to its specific 3D structure. Prediction of the protein secondary structure is a crucial intermediate step towards elucidating its 3D structure and function. Traditional experimental methods for prediction of protein structure are expensive and time-consuming. Therefore, various machine learning approaches have been proposed to predict the protein secondary structure. Nevertheless, the average accuracy of the suggested solutions has hardly reached beyond 80%. The possible underlying reasons are the ambiguous sequence-structure relation, noise in input protein data, class imbalance, and the high dimensionality of the encoding schemes that represent the protein sequence. In this paper, we propose an accurate multi-component prediction machine to overcome the challenges of protein structure prediction. We devise a multi-component designation to address the high complexity challenge in sequence-structure relation. Furthermore, we utilize a compound string dissimilarity measure to directly interpret protein sequence content and avoid information loss. In order to improve the accuracy, we employ two different classifiers including support vector machine and fuzzy nearest neighbor and collectively aggregate the classification outcomes to infer the final protein secondary structures. We conduct comprehensive experiments to compare our model with the current state-of-the-art approaches. The experimental results demonstrate that given a set of input sequences, our multi-component framework can accurately predict the protein structure. Nevertheless, the effectiveness of our unified model can be further enhanced through framework configuration.
1807.04355
Oliver Aalami
Varun Shenoy, Elizabeth Foster, Lauren Aalami, Bakar Majeed and Oliver Aalami
Deepwound: Automated Postoperative Wound Assessment and Surgical Site Surveillance through Convolutional Neural Networks
7 pages, 11 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Postoperative wound complications are a significant cause of expense for hospitals, doctors, and patients. Hence, an effective method to diagnose the onset of wound complications is strongly desired. Algorithmically classifying wound images is a difficult task due to the variability in the appearance of wound sites. Convolutional neural networks (CNNs), a subgroup of artificial neural networks that have shown great promise in analyzing visual imagery, can be leveraged to categorize surgical wounds. We present a multi-label CNN ensemble, Deepwound, trained to classify wound images using only image pixels and corresponding labels as inputs. Our final computational model can accurately identify the presence of nine labels: drainage, fibrinous exudate, granulation tissue, surgical site infection, open wound, staples, steri strips, and sutures. Our model achieves receiver operating characteristic (ROC) area under curve (AUC) scores, sensitivity, specificity, and F1 scores superior to prior work in this area. Smartphones provide a means to deliver accessible wound care due to their increasing ubiquity. Paired with deep neural networks, they offer the capability to provide clinical insight to assist surgeons during postoperative care. We also present a mobile application frontend to Deepwound that assists patients in tracking their wound and surgical recovery from the comfort of their home.
[ { "created": "Wed, 11 Jul 2018 21:17:49 GMT", "version": "v1" } ]
2018-07-13
[ [ "Shenoy", "Varun", "" ], [ "Foster", "Elizabeth", "" ], [ "Aalami", "Lauren", "" ], [ "Majeed", "Bakar", "" ], [ "Aalami", "Oliver", "" ] ]
Postoperative wound complications are a significant cause of expense for hospitals, doctors, and patients. Hence, an effective method to diagnose the onset of wound complications is strongly desired. Algorithmically classifying wound images is a difficult task due to the variability in the appearance of wound sites. Convolutional neural networks (CNNs), a subgroup of artificial neural networks that have shown great promise in analyzing visual imagery, can be leveraged to categorize surgical wounds. We present a multi-label CNN ensemble, Deepwound, trained to classify wound images using only image pixels and corresponding labels as inputs. Our final computational model can accurately identify the presence of nine labels: drainage, fibrinous exudate, granulation tissue, surgical site infection, open wound, staples, steri strips, and sutures. Our model achieves receiver operating characteristic (ROC) area under curve (AUC) scores, sensitivity, specificity, and F1 scores superior to prior work in this area. Smartphones provide a means to deliver accessible wound care due to their increasing ubiquity. Paired with deep neural networks, they offer the capability to provide clinical insight to assist surgeons during postoperative care. We also present a mobile application frontend to Deepwound that assists patients in tracking their wound and surgical recovery from the comfort of their home.
2104.07898
Thatchaphol Saranurak
Ruoxu Cen, Jason Li, Danupon Nanongkai, Debmalya Panigrahi, Thatchaphol Saranurak
Minimum Cuts in Directed Graphs via $\sqrt{n}$ Max-Flows
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We give an algorithm to find a mincut in an $n$-vertex, $m$-edge weighted directed graph using $\tilde O(\sqrt{n})$ calls to any maxflow subroutine. Using state of the art maxflow algorithms, this yields a directed mincut algorithm that runs in $\tilde O(m\sqrt{n} + n^2)$ time. This improves on the 30 year old bound of $\tilde O(mn)$ obtained by Hao and Orlin for this problem.
[ { "created": "Fri, 16 Apr 2021 05:47:57 GMT", "version": "v1" } ]
2021-04-19
[ [ "Cen", "Ruoxu", "" ], [ "Li", "Jason", "" ], [ "Nanongkai", "Danupon", "" ], [ "Panigrahi", "Debmalya", "" ], [ "Saranurak", "Thatchaphol", "" ] ]
We give an algorithm to find a mincut in an $n$-vertex, $m$-edge weighted directed graph using $\tilde O(\sqrt{n})$ calls to any maxflow subroutine. Using state-of-the-art maxflow algorithms, this yields a directed mincut algorithm that runs in $\tilde O(m\sqrt{n} + n^2)$ time. This improves on the 30-year-old bound of $\tilde O(mn)$ obtained by Hao and Orlin for this problem.
1909.06337
Mohammadreza Soltaninejad PhD
Mohammadreza Soltaninejad, Lei Zhang, Tryphon Lambrou, Guang Yang, Nigel Allinson, Xujiong Ye
MRI Brain Tumor Segmentation using Random Forests and Fully Convolutional Networks
Published in the pre-conference proceeding of "2017 International MICCAI BraTS Challenge"
In Proceeding of 2017 International MICCAI BraTS Challenge, pp. 279-283 (2017)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI images, which incorporates two sets of machine-learned and hand-crafted features. Fully convolutional networks (FCN) form the machine-learned features, and texton-based features are considered as hand-crafted features. A random forest (RF) is used to classify the MRI image voxels into normal brain tissues and different parts of tumors, i.e., edema, necrosis and enhancing tumor. The method was evaluated on the BRATS 2017 challenge dataset. The results show that the proposed method provides promising segmentations. The mean Dice overlap measure for automatic brain tumor segmentation against ground truth is 0.86, 0.78 and 0.66 for whole tumor, core and enhancing tumor, respectively.
[ { "created": "Fri, 13 Sep 2019 17:26:56 GMT", "version": "v1" } ]
2020-08-10
[ [ "Soltaninejad", "Mohammadreza", "" ], [ "Zhang", "Lei", "" ], [ "Lambrou", "Tryphon", "" ], [ "Yang", "Guang", "" ], [ "Allinson", "Nigel", "" ], [ "Ye", "Xujiong", "" ] ]
In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI images, which incorporates two sets of machine-learned and hand-crafted features. Fully convolutional networks (FCN) form the machine-learned features, and texton-based features are considered as hand-crafted features. A random forest (RF) is used to classify the MRI image voxels into normal brain tissues and different parts of tumors, i.e., edema, necrosis and enhancing tumor. The method was evaluated on the BRATS 2017 challenge dataset. The results show that the proposed method provides promising segmentations. The mean Dice overlap measure for automatic brain tumor segmentation against ground truth is 0.86, 0.78 and 0.66 for whole tumor, core and enhancing tumor, respectively.
2011.02692
Zhilin Lu
Zhilin Lu, Jintao Wang, Jian Song
Binary Neural Network Aided CSI Feedback in Massive MIMO System
6 pages, 5 figures, 4 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice
null
null
null
cs.IT cs.AI eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a massive multiple-input multiple-output (MIMO) system, channel state information (CSI) is essential for the base station to achieve a high performance gain. Recently, deep learning has been widely used in CSI compression to fight against the growing feedback overhead brought by massive MIMO in frequency division duplexing systems. However, applying a neural network brings extra memory and computation cost, which is non-negligible, especially for resource-limited user equipment (UE). In this paper, a novel binarization-aided feedback network named BCsiNet is introduced. Moreover, BCsiNet variants are designed to boost the performance under customized training and inference schemes. Experiments show that BCsiNet offers over 30$\times$ memory saving and around 2$\times$ inference acceleration for the encoder at the UE compared with CsiNet. Furthermore, the feedback performance of BCsiNet is comparable with the original CsiNet. The key results can be reproduced with https://github.com/Kylin9511/BCsiNet.
[ { "created": "Thu, 5 Nov 2020 07:41:09 GMT", "version": "v1" } ]
2020-11-06
[ [ "Lu", "Zhilin", "" ], [ "Wang", "Jintao", "" ], [ "Song", "Jian", "" ] ]
In a massive multiple-input multiple-output (MIMO) system, channel state information (CSI) is essential for the base station to achieve a high performance gain. Recently, deep learning has been widely used in CSI compression to fight against the growing feedback overhead brought by massive MIMO in frequency division duplexing systems. However, applying a neural network brings extra memory and computation cost, which is non-negligible, especially for resource-limited user equipment (UE). In this paper, a novel binarization-aided feedback network named BCsiNet is introduced. Moreover, BCsiNet variants are designed to boost the performance under customized training and inference schemes. Experiments show that BCsiNet offers over 30$\times$ memory saving and around 2$\times$ inference acceleration for the encoder at the UE compared with CsiNet. Furthermore, the feedback performance of BCsiNet is comparable with the original CsiNet. The key results can be reproduced with https://github.com/Kylin9511/BCsiNet.
1907.02689
Antoine Joux
Antoine Joux (IMJ-PRG, OURAGAN), Cecile Pierrot (LORIA)
Algorithmic aspects of elliptic bases in finite field discrete logarithm algorithms
null
null
null
null
cs.CR math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Elliptic bases, introduced by Couveignes and Lercier in 2009, give an elegant way of representing finite field extensions. A natural question, which seems to have been considered independently by several groups, is to use this representation as a starting point for small characteristic finite field discrete logarithm algorithms. This idea has recently been pursued by two groups in order to achieve provable quasi-polynomial time for discrete logarithms in small characteristic finite fields. In this paper, we don't try to achieve a provable algorithm but, instead, investigate the practicality of heuristic algorithms based on elliptic bases. Our key idea is to use a different model of the elliptic curve used for the elliptic basis that allows for a relatively simple adaptation of the techniques used with former Frobenius representation algorithms. We haven't performed any record computation with this new method, but our experiments with the field $\mathbb{F}_{3^{1345}}$ indicate that switching to elliptic representations might be possible with performances comparable to the current best practical methods.
[ { "created": "Fri, 5 Jul 2019 06:27:39 GMT", "version": "v1" } ]
2019-07-08
[ [ "Joux", "Antoine", "", "IMJ-PRG, OURAGAN" ], [ "Pierrot", "Cecile", "", "LORIA" ] ]
Elliptic bases, introduced by Couveignes and Lercier in 2009, give an elegant way of representing finite field extensions. A natural question, which seems to have been considered independently by several groups, is to use this representation as a starting point for small characteristic finite field discrete logarithm algorithms. This idea has recently been pursued by two groups in order to achieve provable quasi-polynomial time for discrete logarithms in small characteristic finite fields. In this paper, we don't try to achieve a provable algorithm but, instead, investigate the practicality of heuristic algorithms based on elliptic bases. Our key idea is to use a different model of the elliptic curve used for the elliptic basis that allows for a relatively simple adaptation of the techniques used with former Frobenius representation algorithms. We haven't performed any record computation with this new method, but our experiments with the field $\mathbb{F}_{3^{1345}}$ indicate that switching to elliptic representations might be possible with performances comparable to the current best practical methods.
1608.04437
Mayank Kejriwal
Mayank Kejriwal, Daniel P. Miranker
Self-contained NoSQL Resources for Cross-Domain RDF
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-domain knowledge bases such as DBpedia, Freebase and YAGO have emerged as encyclopedic hubs in the Web of Linked Data. Despite enabling several practical applications in the Semantic Web, the large-scale, schema-free nature of such graphs often precludes research groups from employing them widely as evaluation test cases for entity resolution and instance-based ontology alignment applications. Although the ground-truth linkages between the three knowledge bases above are available, they are not amenable to resource-limited applications. One reason is that the ground-truth files are not self-contained, meaning that a researcher must usually perform a series of expensive joins (typically in MapReduce) to obtain usable information sets. In this paper, we upload several publicly licensed data resources to the public cloud and use simple Hadoop clusters to compile, and make accessible, three cross-domain self-contained test cases involving linked instances from DBpedia, Freebase and YAGO. Self-containment is enabled by virtue of a simple NoSQL JSON-like serialization format. Potential applications for these resources, particularly related to testing transfer learning research hypotheses, are also briefly described.
[ { "created": "Mon, 15 Aug 2016 23:32:31 GMT", "version": "v1" } ]
2016-08-17
[ [ "Kejriwal", "Mayank", "" ], [ "Miranker", "Daniel P.", "" ] ]
Cross-domain knowledge bases such as DBpedia, Freebase and YAGO have emerged as encyclopedic hubs in the Web of Linked Data. Despite enabling several practical applications in the Semantic Web, the large-scale, schema-free nature of such graphs often precludes research groups from employing them widely as evaluation test cases for entity resolution and instance-based ontology alignment applications. Although the ground-truth linkages between the three knowledge bases above are available, they are not amenable to resource-limited applications. One reason is that the ground-truth files are not self-contained, meaning that a researcher must usually perform a series of expensive joins (typically in MapReduce) to obtain usable information sets. In this paper, we upload several publicly licensed data resources to the public cloud and use simple Hadoop clusters to compile, and make accessible, three cross-domain self-contained test cases involving linked instances from DBpedia, Freebase and YAGO. Self-containment is enabled by virtue of a simple NoSQL JSON-like serialization format. Potential applications for these resources, particularly related to testing transfer learning research hypotheses, are also briefly described.
2312.15307
Glauco A. Amigo Gal\'an
Glauco Amigo, Pablo Rivas Perea, Robert J. Marks
Mitigating Algorithmic Bias on Facial Expression Recognition
null
null
null
null
cs.CV cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biased datasets are ubiquitous and present a challenge for machine learning. When a dataset contains categories that are equally important but unevenly represented, with some sparse and others common, learning algorithms will favor the ones with more presence. The problem of biased datasets is especially sensitive when dealing with minority people groups. How can we, from biased data, generate algorithms that treat every person equally? This work explores one way to mitigate bias using a debiasing variational autoencoder, with experiments on facial expression recognition.
[ { "created": "Sat, 23 Dec 2023 17:41:30 GMT", "version": "v1" } ]
2023-12-27
[ [ "Amigo", "Glauco", "" ], [ "Perea", "Pablo Rivas", "" ], [ "Marks", "Robert J.", "" ] ]
Biased datasets are ubiquitous and present a challenge for machine learning. When a dataset contains categories that are equally important but unevenly represented, with some sparse and others common, learning algorithms will favor the ones with more presence. The problem of biased datasets is especially sensitive when dealing with minority people groups. How can we, from biased data, generate algorithms that treat every person equally? This work explores one way to mitigate bias using a debiasing variational autoencoder, with experiments on facial expression recognition.
2011.11946
Martin Humenberger
No\'e Pion, Martin Humenberger, Gabriela Csurka, Yohann Cabon, Torsten Sattler
Benchmarking Image Retrieval for Visual Localization
International Conference on 3D Vision, 2020
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Visual localization, i.e., camera pose estimation in a known scene, is a core component of technologies such as autonomous driving and augmented reality. State-of-the-art localization approaches often rely on image retrieval techniques for one of two tasks: (1) provide an approximate pose estimate or (2) determine which parts of the scene are potentially visible in a given query image. It is common practice to use state-of-the-art image retrieval algorithms for these tasks. These algorithms are often trained for the goal of retrieving the same landmark under a large range of viewpoint changes. However, robustness to viewpoint changes is not necessarily desirable in the context of visual localization. This paper focuses on understanding the role of image retrieval for multiple visual localization tasks. We introduce a benchmark setup and compare state-of-the-art retrieval representations on multiple datasets. We show that retrieval performance on classical landmark retrieval/recognition tasks correlates only for some but not all tasks to localization performance. This indicates a need for retrieval approaches specifically designed for localization tasks. Our benchmark and evaluation protocols are available at https://github.com/naver/kapture-localization.
[ { "created": "Tue, 24 Nov 2020 07:59:52 GMT", "version": "v1" }, { "created": "Tue, 1 Dec 2020 07:19:03 GMT", "version": "v2" } ]
2020-12-02
[ [ "Pion", "Noé", "" ], [ "Humenberger", "Martin", "" ], [ "Csurka", "Gabriela", "" ], [ "Cabon", "Yohann", "" ], [ "Sattler", "Torsten", "" ] ]
Visual localization, i.e., camera pose estimation in a known scene, is a core component of technologies such as autonomous driving and augmented reality. State-of-the-art localization approaches often rely on image retrieval techniques for one of two tasks: (1) provide an approximate pose estimate or (2) determine which parts of the scene are potentially visible in a given query image. It is common practice to use state-of-the-art image retrieval algorithms for these tasks. These algorithms are often trained for the goal of retrieving the same landmark under a large range of viewpoint changes. However, robustness to viewpoint changes is not necessarily desirable in the context of visual localization. This paper focuses on understanding the role of image retrieval for multiple visual localization tasks. We introduce a benchmark setup and compare state-of-the-art retrieval representations on multiple datasets. We show that retrieval performance on classical landmark retrieval/recognition tasks correlates only for some but not all tasks to localization performance. This indicates a need for retrieval approaches specifically designed for localization tasks. Our benchmark and evaluation protocols are available at https://github.com/naver/kapture-localization.
1310.8599
J. G. Wolff
J. Gerard Wolff
Information Compression, Intelligence, Computing, and Mathematics
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics, may be understood as compression of information via the matching and unification of patterns. This is the basis for the "SP theory of intelligence", outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment.
[ { "created": "Thu, 31 Oct 2013 17:18:17 GMT", "version": "v1" }, { "created": "Tue, 12 Nov 2013 11:17:17 GMT", "version": "v2" }, { "created": "Tue, 29 Apr 2014 18:16:35 GMT", "version": "v3" }, { "created": "Mon, 13 Jul 2015 08:59:41 GMT", "version": "v4" } ]
2015-07-14
[ [ "Wolff", "J. Gerard", "" ] ]
This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics, may be understood as compression of information via the matching and unification of patterns. This is the basis for the "SP theory of intelligence", outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's Canonical System and the transition function in the Universal Turing Machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment.
1602.01237
Shanshan Zhang
Shanshan Zhang, Rodrigo Benenson, Mohamed Omran, Jan Hosang, and Bernt Schiele
How Far are We from Solving Pedestrian Detection?
CVPR16 camera ready
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the "perfect single frame detector". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterize both localization and background-versus-foreground errors. To address localization errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitized training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitized set of training and test annotations.
[ { "created": "Wed, 3 Feb 2016 09:45:56 GMT", "version": "v1" }, { "created": "Tue, 21 Jun 2016 11:33:13 GMT", "version": "v2" } ]
2016-06-22
[ [ "Zhang", "Shanshan", "" ], [ "Benenson", "Rodrigo", "" ], [ "Omran", "Mohamed", "" ], [ "Hosang", "Jan", "" ], [ "Schiele", "Bernt", "" ] ]
Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the "perfect single frame detector". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterize both localization and background-versus-foreground errors. To address localization errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitized training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitized set of training and test annotations.
1803.08805
Weizhe Liu
Weizhe Liu, Krzysztof Lis, Mathieu Salzmann, Pascal Fua
Geometric and Physical Constraints for Drone-Based Head Plane Crowd Density Estimation
IROS 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density in the image plane. While useful for this purpose, this image-plane density has no immediate physical meaning because it is subject to perspective distortion. This is a concern in sequences acquired by drones because the viewpoint changes often. This distortion is usually handled implicitly by either learning scale-invariant features or estimating density in patches of different sizes, neither of which accounts for the fact that scale changes must be consistent over the whole scene. In this paper, we explicitly model the scale changes and reason in terms of people per square-meter. We show that feeding the perspective model to the network allows us to enforce global scale consistency and that this model can be obtained on the fly from the drone sensors. In addition, it also enables us to enforce physically-inspired temporal consistency constraints that do not have to be learned. This yields an algorithm that outperforms state-of-the-art methods in inferring crowd density from a moving drone camera especially when perspective effects are strong.
[ { "created": "Fri, 23 Mar 2018 14:19:13 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2019 14:00:35 GMT", "version": "v2" }, { "created": "Thu, 18 Jul 2019 09:05:50 GMT", "version": "v3" } ]
2019-07-19
[ [ "Liu", "Weizhe", "" ], [ "Lis", "Krzysztof", "" ], [ "Salzmann", "Mathieu", "" ], [ "Fua", "Pascal", "" ] ]
State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density in the image plane. While useful for this purpose, this image-plane density has no immediate physical meaning because it is subject to perspective distortion. This is a concern in sequences acquired by drones because the viewpoint changes often. This distortion is usually handled implicitly by either learning scale-invariant features or estimating density in patches of different sizes, neither of which accounts for the fact that scale changes must be consistent over the whole scene. In this paper, we explicitly model the scale changes and reason in terms of people per square-meter. We show that feeding the perspective model to the network allows us to enforce global scale consistency and that this model can be obtained on the fly from the drone sensors. In addition, it also enables us to enforce physically-inspired temporal consistency constraints that do not have to be learned. This yields an algorithm that outperforms state-of-the-art methods in inferring crowd density from a moving drone camera especially when perspective effects are strong.
2402.16486
Mohsin Bilal
Ahmad Saeed, Haasha Bin Atif, Usman Habib and Mohsin Bilal
Intelligent Known and Novel Aircraft Recognition -- A Shift from Classification to Similarity Learning for Combat Identification
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Precise aircraft recognition in low-resolution remote sensing imagery is a challenging yet crucial task in aviation, especially combat identification. This research addresses this problem with a novel, scalable, and AI-driven solution. The primary hurdle in combat identification in remote sensing imagery is the accurate recognition of Novel/Unknown types of aircraft in addition to Known types. Traditional methods, human expert-driven combat identification and image classification, fall short in identifying Novel classes. Our methodology employs similarity learning to discern features of a broad spectrum of military and civilian aircraft. It discerns both Known and Novel aircraft types, leveraging metric learning for the identification and supervised few-shot learning for aircraft type classification. To counter the challenge of limited low-resolution remote sensing data, we propose an end-to-end framework that adapts to the diverse and versatile process of military aircraft recognition by training a generalized embedder in a fully supervised manner. Comparative analysis with earlier aircraft image classification methods shows that our approach is effective for aircraft image classification (F1-score Aircraft Type of 0.861) and pioneering for quantifying the identification of Novel types (F1-score Bipartitioning of 0.936). The proposed methodology effectively addresses inherent challenges in remote sensing data, thereby setting new standards in dataset quality. The research opens new avenues for domain experts and demonstrates unique capabilities in distinguishing various aircraft types, contributing to a more robust, domain-adapted potential for real-time aircraft recognition.
[ { "created": "Mon, 26 Feb 2024 11:08:26 GMT", "version": "v1" } ]
2024-02-27
[ [ "Saeed", "Ahmad", "" ], [ "Atif", "Haasha Bin", "" ], [ "Habib", "Usman", "" ], [ "Bilal", "Mohsin", "" ] ]
Precise aircraft recognition in low-resolution remote sensing imagery is a challenging yet crucial task in aviation, especially combat identification. This research addresses this problem with a novel, scalable, and AI-driven solution. The primary hurdle in combat identification in remote sensing imagery is the accurate recognition of Novel/Unknown types of aircraft in addition to Known types. Traditional methods, human expert-driven combat identification and image classification, fall short in identifying Novel classes. Our methodology employs similarity learning to discern features of a broad spectrum of military and civilian aircraft. It discerns both Known and Novel aircraft types, leveraging metric learning for the identification and supervised few-shot learning for aircraft type classification. To counter the challenge of limited low-resolution remote sensing data, we propose an end-to-end framework that adapts to the diverse and versatile process of military aircraft recognition by training a generalized embedder in a fully supervised manner. Comparative analysis with earlier aircraft image classification methods shows that our approach is effective for aircraft image classification (F1-score Aircraft Type of 0.861) and pioneering for quantifying the identification of Novel types (F1-score Bipartitioning of 0.936). The proposed methodology effectively addresses inherent challenges in remote sensing data, thereby setting new standards in dataset quality. The research opens new avenues for domain experts and demonstrates unique capabilities in distinguishing various aircraft types, contributing to a more robust, domain-adapted potential for real-time aircraft recognition.
2403.14442
Jiaming Zhang
Yufan Chen, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ruiping Liu, Philip Torr, Rainer Stiefelhagen
RoDLA: Benchmarking the Robustness of Document Layout Analysis Models
Accepted by CVPR 2024. Project page: https://yufanchen96.github.io/projects/RoDLA
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Before developing a Document Layout Analysis (DLA) model in real-world applications, conducting comprehensive robustness testing is essential. However, the robustness of DLA models remains underexplored in the literature. To address this, we are the first to introduce a robustness benchmark for DLA models, which includes 450K document images of three datasets. To cover realistic corruptions, we propose a perturbation taxonomy with 36 common document perturbations inspired by real-world document processing. Additionally, to better understand document perturbation impacts, we propose two metrics, Mean Perturbation Effect (mPE) for perturbation assessment and Mean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we introduce a self-titled model, i.e., Robust Document Layout Analyzer (RoDLA), which improves attention mechanisms to boost extraction of robust features. Experiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and M$^6$Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of 115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA achieves notable improvements in mAP of +3.8%, +7.1% and +12.1%, respectively.
[ { "created": "Thu, 21 Mar 2024 14:47:12 GMT", "version": "v1" } ]
2024-03-22
[ [ "Chen", "Yufan", "" ], [ "Zhang", "Jiaming", "" ], [ "Peng", "Kunyu", "" ], [ "Zheng", "Junwei", "" ], [ "Liu", "Ruiping", "" ], [ "Torr", "Philip", "" ], [ "Stiefelhagen", "Rainer", "" ] ]
Before developing a Document Layout Analysis (DLA) model in real-world applications, conducting comprehensive robustness testing is essential. However, the robustness of DLA models remains underexplored in the literature. To address this, we are the first to introduce a robustness benchmark for DLA models, which includes 450K document images of three datasets. To cover realistic corruptions, we propose a perturbation taxonomy with 36 common document perturbations inspired by real-world document processing. Additionally, to better understand document perturbation impacts, we propose two metrics, Mean Perturbation Effect (mPE) for perturbation assessment and Mean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we introduce a self-titled model, i.e., Robust Document Layout Analyzer (RoDLA), which improves attention mechanisms to boost extraction of robust features. Experiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and M$^6$Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of 115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA achieves notable improvements in mAP of +3.8%, +7.1% and +12.1%, respectively.
2109.04448
Emanuele Bugliarello
Stella Frank, Emanuele Bugliarello, Desmond Elliott
Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers
EMNLP 2021
null
null
null
cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained vision-and-language BERTs aim to learn representations that combine information from both modalities. We propose a diagnostic method based on cross-modal input ablation to assess the extent to which these models actually integrate cross-modal information. This method involves ablating inputs from one modality, either entirely or selectively based on cross-modal grounding alignments, and evaluating the model prediction performance on the other modality. Model performance is measured by modality-specific tasks that mirror the model pretraining objectives (e.g. masked language modelling for text). Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality. We find that recently proposed models have much greater relative difficulty predicting text when visual information is ablated, compared to predicting visual object categories when text is ablated, indicating that these models are not symmetrically cross-modal.
[ { "created": "Thu, 9 Sep 2021 17:47:50 GMT", "version": "v1" } ]
2021-09-10
[ [ "Frank", "Stella", "" ], [ "Bugliarello", "Emanuele", "" ], [ "Elliott", "Desmond", "" ] ]
Pretrained vision-and-language BERTs aim to learn representations that combine information from both modalities. We propose a diagnostic method based on cross-modal input ablation to assess the extent to which these models actually integrate cross-modal information. This method involves ablating inputs from one modality, either entirely or selectively based on cross-modal grounding alignments, and evaluating the model prediction performance on the other modality. Model performance is measured by modality-specific tasks that mirror the model pretraining objectives (e.g. masked language modelling for text). Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality. We find that recently proposed models have much greater relative difficulty predicting text when visual information is ablated, compared to predicting visual object categories when text is ablated, indicating that these models are not symmetrically cross-modal.
1004.2773
Wei Zhang
Long Shi, Wei Zhang, Xiang-Gen Xia
High-Rate and Full-Diversity Space-Time Block Codes with Low Complexity Partial Interference Cancellation Group Decoding
25 pages, 3 figures, submitted to IEEE Trans. Commun. on 23 March 2010.
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a systematic design of space-time block codes (STBC) which can achieve high rate and full diversity when the partial interference cancellation (PIC) group decoding is used at receivers. The proposed codes can be applied to any number of transmit antennas and admit a low decoding complexity while achieving full diversity. For M transmit antennas, in each codeword real and imaginary parts of PM complex information symbols are parsed into P diagonal layers and then encoded, respectively. With PIC group decoding, it is shown that the decoding complexity can be reduced to a joint decoding of M/2 real symbols. In particular, for 4 transmit antennas, the code has real symbol pairwise (i.e., single complex symbol) decoding that achieves full diversity and the code rate is 4/3. Simulation results demonstrate that the full diversity is offered by the newly proposed STBC with the PIC group decoding.
[ { "created": "Fri, 16 Apr 2010 07:57:16 GMT", "version": "v1" } ]
2010-04-19
[ [ "Shi", "Long", "" ], [ "Zhang", "Wei", "" ], [ "Xia", "Xiang-Gen", "" ] ]
In this paper, we propose a systematic design of space-time block codes (STBC) which can achieve high rate and full diversity when the partial interference cancellation (PIC) group decoding is used at receivers. The proposed codes can be applied to any number of transmit antennas and admit a low decoding complexity while achieving full diversity. For M transmit antennas, in each codeword real and imaginary parts of PM complex information symbols are parsed into P diagonal layers and then encoded, respectively. With PIC group decoding, it is shown that the decoding complexity can be reduced to a joint decoding of M/2 real symbols. In particular, for 4 transmit antennas, the code has real symbol pairwise (i.e., single complex symbol) decoding that achieves full diversity and the code rate is 4/3. Simulation results demonstrate that the full diversity is offered by the newly proposed STBC with the PIC group decoding.
2312.02501
Zikang Xu
Zikang Xu, Fenghe Tang, Quan Quan, Jianrui Ding, Chunping Ning, S. Kevin Zhou
Inspecting Model Fairness in Ultrasound Segmentation Tasks
Submitted to ISBI 2024
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
With the rapid expansion of machine learning and deep learning (DL), researchers are increasingly employing learning-based algorithms to alleviate diagnostic challenges across diverse medical tasks and applications. While advancements in diagnostic precision are notable, some researchers have identified a concerning trend: their models exhibit biased performance across subgroups characterized by different sensitive attributes. This bias not only infringes upon the rights of patients but also has the potential to lead to life-altering consequences. In this paper, we inspect a series of DL segmentation models using two ultrasound datasets, aiming to assess the presence of model unfairness in these specific tasks. Our findings reveal that even state-of-the-art DL algorithms demonstrate unfair behavior in ultrasound segmentation tasks. These results serve as a crucial warning, underscoring the necessity for careful model evaluation before their deployment in real-world scenarios. Such assessments are imperative to ensure ethical considerations and mitigate the risk of adverse impacts on patient outcomes.
[ { "created": "Tue, 5 Dec 2023 05:08:08 GMT", "version": "v1" } ]
2023-12-06
[ [ "Xu", "Zikang", "" ], [ "Tang", "Fenghe", "" ], [ "Quan", "Quan", "" ], [ "Ding", "Jianrui", "" ], [ "Ning", "Chunping", "" ], [ "Zhou", "S. Kevin", "" ] ]
With the rapid expansion of machine learning and deep learning (DL), researchers are increasingly employing learning-based algorithms to alleviate diagnostic challenges across diverse medical tasks and applications. While advancements in diagnostic precision are notable, some researchers have identified a concerning trend: their models exhibit biased performance across subgroups characterized by different sensitive attributes. This bias not only infringes upon the rights of patients but also has the potential to lead to life-altering consequences. In this paper, we inspect a series of DL segmentation models using two ultrasound datasets, aiming to assess the presence of model unfairness in these specific tasks. Our findings reveal that even state-of-the-art DL algorithms demonstrate unfair behavior in ultrasound segmentation tasks. These results serve as a crucial warning, underscoring the necessity for careful model evaluation before their deployment in real-world scenarios. Such assessments are imperative to ensure ethical considerations and mitigate the risk of adverse impacts on patient outcomes.
1710.04101
Oren Salzman
Nika Haghtalab, Simon Mackenzie, Ariel D. Procaccia, Oren Salzman and Siddhartha S. Srinivasa
The Provable Virtue of Laziness in Motion Planning
null
null
null
null
cs.RO cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along shortest paths between the source and target. These algorithms were designed to minimize the number of edge evaluations in settings where edge evaluation dominates the running time of the algorithm; but how close to optimal are LazySP algorithms in terms of this objective? Our main result is an analytical upper bound, in a probabilistic model, on the number of edge evaluations required by LazySP algorithms; a matching lower bound shows that these algorithms are asymptotically optimal in the worst case.
[ { "created": "Wed, 11 Oct 2017 15:00:47 GMT", "version": "v1" } ]
2017-10-12
[ [ "Haghtalab", "Nika", "" ], [ "Mackenzie", "Simon", "" ], [ "Procaccia", "Ariel D.", "" ], [ "Salzman", "Oren", "" ], [ "Srinivasa", "Siddhartha S.", "" ] ]
The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along shortest paths between the source and target. These algorithms were designed to minimize the number of edge evaluations in settings where edge evaluation dominates the running time of the algorithm; but how close to optimal are LazySP algorithms in terms of this objective? Our main result is an analytical upper bound, in a probabilistic model, on the number of edge evaluations required by LazySP algorithms; a matching lower bound shows that these algorithms are asymptotically optimal in the worst case.
1812.11741
Tim French Dr
Tim French, Andrew Gozzard and Mark Reynolds
A modal aleatoric calculus for probabilistic reasoning: extended version
Long version of a paper accepted to appear at the 2019 Indian Conference on Logic and Applications
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider multi-agent systems where agents' actions and beliefs are determined aleatorically, or "by the throw of dice". Such a system consists of possible worlds that assign distributions to independent random variables, and agents who assign probabilities to these possible worlds. We present a novel syntax and semantics for such systems, and show that they generalise Modal Logic. We also give a sound and complete calculus for reasoning in the base semantics, and a sound calculus for the full modal semantics, which we conjecture to be complete. Finally, we discuss some applications to reasoning about game-playing agents.
[ { "created": "Mon, 31 Dec 2018 09:53:28 GMT", "version": "v1" } ]
2019-01-01
[ [ "French", "Tim", "" ], [ "Gozzard", "Andrew", "" ], [ "Reynolds", "Mark", "" ] ]
We consider multi-agent systems where agents' actions and beliefs are determined aleatorically, or "by the throw of dice". Such a system consists of possible worlds that assign distributions to independent random variables, and agents who assign probabilities to these possible worlds. We present a novel syntax and semantics for such systems, and show that they generalise Modal Logic. We also give a sound and complete calculus for reasoning in the base semantics, and a sound calculus for the full modal semantics, which we conjecture to be complete. Finally, we discuss some applications to reasoning about game-playing agents.
1103.2897
Dirk Lorenz
Dirk A. Lorenz
Constructing test instances for Basis Pursuit Denoising
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The number of available algorithms for the so-called Basis Pursuit Denoising problem (or the related LASSO problem) is large and keeps growing. Similarly, the number of experiments to evaluate and compare these algorithms on different instances is growing. In this note, we present a method to produce instances with exact solutions, based on a simple observation related to the so-called source condition from sparse regularization.
[ { "created": "Tue, 15 Mar 2011 13:04:10 GMT", "version": "v1" } ]
2011-03-16
[ [ "Lorenz", "Dirk A.", "" ] ]
The number of available algorithms for the so-called Basis Pursuit Denoising problem (or the related LASSO problem) is large and keeps growing. Similarly, the number of experiments to evaluate and compare these algorithms on different instances is growing. In this note, we present a method to produce instances with exact solutions, based on a simple observation related to the so-called source condition from sparse regularization.
2312.07405
Marc-Etienne Brunet
Marc-Etienne Brunet, Ashton Anderson, Richard Zemel
ICL Markup: Structuring In-Context Learning using Soft-Token Tags
R0-FoMo: Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models at NeurIPS 2023
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large pretrained language models (LLMs) can be rapidly adapted to a wide variety of tasks via a text-to-text approach, where the instruction and input are fed to the model in natural language. Combined with in-context learning (ICL), this paradigm is impressively flexible and powerful. However, it also burdens users with an overwhelming number of choices, many of them arbitrary. Inspired by markup languages like HTML, we contribute a method of using soft-token tags to compose prompt templates. This approach reduces arbitrary decisions and streamlines the application of ICL. Our method is a form of meta-learning for ICL; it learns these tags in advance during a parameter-efficient fine-tuning ``warm-up'' process. The tags can subsequently be used in templates for ICL on new, unseen tasks without any additional fine-tuning. Our experiments with this approach yield promising initial results, improving LLM performance on important enterprise applications such as few-shot and open-world intent detection, as well as text classification in news and legal domains.
[ { "created": "Tue, 12 Dec 2023 16:25:05 GMT", "version": "v1" } ]
2023-12-13
[ [ "Brunet", "Marc-Etienne", "" ], [ "Anderson", "Ashton", "" ], [ "Zemel", "Richard", "" ] ]
Large pretrained language models (LLMs) can be rapidly adapted to a wide variety of tasks via a text-to-text approach, where the instruction and input are fed to the model in natural language. Combined with in-context learning (ICL), this paradigm is impressively flexible and powerful. However, it also burdens users with an overwhelming number of choices, many of them arbitrary. Inspired by markup languages like HTML, we contribute a method of using soft-token tags to compose prompt templates. This approach reduces arbitrary decisions and streamlines the application of ICL. Our method is a form of meta-learning for ICL; it learns these tags in advance during a parameter-efficient fine-tuning ``warm-up'' process. The tags can subsequently be used in templates for ICL on new, unseen tasks without any additional fine-tuning. Our experiments with this approach yield promising initial results, improving LLM performance on important enterprise applications such as few-shot and open-world intent detection, as well as text classification in news and legal domains.
2006.00144
Xue Li
Xue Li and Yuanzhi Cheng
Understanding the Message Passing in Graph Neural Networks via Power Iteration Clustering
null
Neural Networks, 140, pp. 130- 135, 2021
10.1016/j.neunet.2021.02.025
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing can be best understood in terms of power iteration. By fully or partly removing activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that iteratively learn with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process random featured networks. Moreover, we demonstrate the redundancy of some state-of-the-art GNNs in design and define a lower limit for model evaluation by a random aggregator of message passing. Our findings push the boundaries of the theoretical understanding of neural networks.
[ { "created": "Sat, 30 May 2020 01:44:34 GMT", "version": "v1" }, { "created": "Tue, 2 Jun 2020 06:28:59 GMT", "version": "v2" }, { "created": "Mon, 11 Jan 2021 06:13:38 GMT", "version": "v3" } ]
2021-04-16
[ [ "Li", "Xue", "" ], [ "Cheng", "Yuanzhi", "" ] ]
The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing can be best understood in terms of power iteration. By fully or partly removing activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that iteratively learn with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process random featured networks. Moreover, we demonstrate the redundancy of some state-of-the-art GNNs in design and define a lower limit for model evaluation by a random aggregator of message passing. Our findings push the boundaries of the theoretical understanding of neural networks.
1805.02085
Yicun Liu
Yicun Liu, Jimmy Ren, Jianbo Liu, Jiawei Zhang, Xiaohao Chen
Learning Selfie-Friendly Abstraction from Artistic Style Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artistic style transfer can be thought of as a process to generate different versions of abstraction of the original image. However, most artistic style transfer operators are not optimized for human faces and thus mainly suffer from two undesirable features when applied to selfies. First, the edges of human faces may unpleasantly deviate from the ones in the original image. Second, the skin color is far from faithful to the original one, which is usually problematic in producing quality selfies. In this paper, we take a different approach and formulate this abstraction process as a gradient domain learning problem. We aim to learn a type of abstraction which not only achieves the specified artistic style but also circumvents the two aforementioned drawbacks, and is thus highly applicable to selfie photography. We also show that our method can be directly generalized to videos with high inter-frame consistency. Our method is also robust to non-selfie images, and its generalization to various kinds of real-life scenes is discussed. We will make our code publicly available.
[ { "created": "Sat, 5 May 2018 17:06:30 GMT", "version": "v1" }, { "created": "Mon, 21 May 2018 18:01:12 GMT", "version": "v2" } ]
2018-05-23
[ [ "Liu", "Yicun", "" ], [ "Ren", "Jimmy", "" ], [ "Liu", "Jianbo", "" ], [ "Zhang", "Jiawei", "" ], [ "Chen", "Xiaohao", "" ] ]
Artistic style transfer can be thought of as a process to generate different versions of abstraction of the original image. However, most artistic style transfer operators are not optimized for human faces and thus mainly suffer from two undesirable features when applied to selfies. First, the edges of human faces may unpleasantly deviate from the ones in the original image. Second, the skin color is far from faithful to the original one, which is usually problematic in producing quality selfies. In this paper, we take a different approach and formulate this abstraction process as a gradient domain learning problem. We aim to learn a type of abstraction which not only achieves the specified artistic style but also circumvents the two aforementioned drawbacks, and is thus highly applicable to selfie photography. We also show that our method can be directly generalized to videos with high inter-frame consistency. Our method is also robust to non-selfie images, and its generalization to various kinds of real-life scenes is discussed. We will make our code publicly available.
cs/0207043
Wen Chen
W. Chen, M. Tanaka
A meshless, integration-free, and boundary-only RBF technique
null
Computers and Mathematics with Applications, 43, 379-391, 2002
null
null
cs.CE cs.CG
null
Based on the radial basis function (RBF), non-singular general solution, and dual reciprocity method (DRM), this paper presents an inherently meshless, integration-free, boundary-only RBF collocation technique for the numerical solution of various partial differential equation systems. The basic ideas behind this methodology are mathematically very simple. In this study, the RBFs are employed to approximate the inhomogeneous terms via the DRM, while the non-singular general solution leads to a boundary-only RBF formulation for the homogeneous solution. The present scheme is named the boundary knot method (BKM) to differentiate it from other numerical techniques. In particular, due to the use of non-singular general solutions rather than singular fundamental solutions, the BKM differs from the method of fundamental solutions in that the former does not require an artificial boundary and results in symmetric system equations under certain conditions. The efficiency and utility of this new technique are validated through a number of typical numerical examples. The completeness concern of the BKM, due to the use of only the non-singular part of the complete fundamental solution, is also discussed.
[ { "created": "Thu, 11 Jul 2002 12:19:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chen", "W.", "" ], [ "Tanaka", "M.", "" ] ]
Based on the radial basis function (RBF), non-singular general solution, and dual reciprocity method (DRM), this paper presents an inherently meshless, integration-free, boundary-only RBF collocation technique for the numerical solution of various partial differential equation systems. The basic ideas behind this methodology are mathematically very simple. In this study, the RBFs are employed to approximate the inhomogeneous terms via the DRM, while the non-singular general solution leads to a boundary-only RBF formulation for the homogeneous solution. The present scheme is named the boundary knot method (BKM) to differentiate it from other numerical techniques. In particular, due to the use of non-singular general solutions rather than singular fundamental solutions, the BKM differs from the method of fundamental solutions in that the former does not require an artificial boundary and results in symmetric system equations under certain conditions. The efficiency and utility of this new technique are validated through a number of typical numerical examples. The completeness concern of the BKM, due to the use of only the non-singular part of the complete fundamental solution, is also discussed.
1410.4395
Alexander Setzer
Martina Eikel, Christian Scheideler, Alexander Setzer
Minimum Linear Arrangement of Series-Parallel Graphs
null
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log{|E|})$ (or even $O(\log{|E|}\log^*{|E|})$ on an EREW PRAM using $O(|E|)$ processors). For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures. On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.
[ { "created": "Thu, 16 Oct 2014 12:37:33 GMT", "version": "v1" } ]
2014-10-17
[ [ "Eikel", "Martina", "" ], [ "Scheideler", "Christian", "" ], [ "Setzer", "Alexander", "" ] ]
We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log{|E|})$ (or even $O(\log{|E|}\log^*{|E|})$ on an EREW PRAM using $O(|E|)$ processors). For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures. On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.
2309.00508
Leyang Zhang
Leyang Zhang, Yaoyu Zhang, Tao Luo
Geometry and Local Recovery of Global Minima of Two-layer Neural Networks at Overparameterization
null
null
null
null
cs.LG math.DS
http://creativecommons.org/licenses/by-sa/4.0/
Under mild assumptions, we investigate the geometry of the loss landscape for two-layer neural networks in the vicinity of global minima. Utilizing novel techniques, we demonstrate: (i) how global minima with zero generalization error become geometrically separated from other global minima as the sample size grows; and (ii) the local convergence properties and rate of gradient flow dynamics. Our results indicate that two-layer neural networks can be locally recovered in the regime of overparameterization.
[ { "created": "Fri, 1 Sep 2023 14:53:51 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2024 12:29:30 GMT", "version": "v2" }, { "created": "Thu, 18 Jul 2024 01:09:59 GMT", "version": "v3" } ]
2024-07-19
[ [ "Zhang", "Leyang", "" ], [ "Zhang", "Yaoyu", "" ], [ "Luo", "Tao", "" ] ]
Under mild assumptions, we investigate the geometry of the loss landscape for two-layer neural networks in the vicinity of global minima. Utilizing novel techniques, we demonstrate: (i) how global minima with zero generalization error become geometrically separated from other global minima as the sample size grows; and (ii) the local convergence properties and rate of gradient flow dynamics. Our results indicate that two-layer neural networks can be locally recovered in the regime of overparameterization.
2006.00600
Oren Dean
Yakov Babichenko and Oren Dean and Moshe Tennenholtz
Incentive-Compatible Selection Mechanisms for Forests
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a directed forest-graph, a probabilistic \emph{selection mechanism} is a probability distribution over the vertex set. A selection mechanism is \emph{incentive-compatible} (IC) if the probability assigned to a vertex does not change when we alter its outgoing edge (or even remove it). The quality of a selection mechanism is the worst-case ratio between the expected progeny under the mechanism's distribution and the maximal progeny in the forest. In this paper we prove an upper bound of 4/5 and a lower bound of $ 1/\ln16\approx0.36 $ for the quality of any IC selection mechanism. The lower bound is achieved by two novel mechanisms and is a significant improvement over the results of Babichenko et al. (WWW '18). The first, simpler mechanism has the nice feature of generating distributions which are fair (i.e., monotone and proportional). The downside of this mechanism is that it is not exact (i.e., the probabilities might sum up to less than 1). Our second, more involved mechanism is exact but not fair. We also prove an impossibility result for an IC mechanism that is both exact and fair and has positive quality.
[ { "created": "Sun, 31 May 2020 20:13:45 GMT", "version": "v1" } ]
2020-06-02
[ [ "Babichenko", "Yakov", "" ], [ "Dean", "Oren", "" ], [ "Tennenholtz", "Moshe", "" ] ]
Given a directed forest-graph, a probabilistic \emph{selection mechanism} is a probability distribution over the vertex set. A selection mechanism is \emph{incentive-compatible} (IC) if the probability assigned to a vertex does not change when we alter its outgoing edge (or even remove it). The quality of a selection mechanism is the worst-case ratio between the expected progeny under the mechanism's distribution and the maximal progeny in the forest. In this paper we prove an upper bound of 4/5 and a lower bound of $ 1/\ln16\approx0.36 $ for the quality of any IC selection mechanism. The lower bound is achieved by two novel mechanisms and is a significant improvement over the results of Babichenko et al. (WWW '18). The first, simpler mechanism has the nice feature of generating distributions which are fair (i.e., monotone and proportional). The downside of this mechanism is that it is not exact (i.e., the probabilities might sum up to less than 1). Our second, more involved mechanism is exact but not fair. We also prove an impossibility result for an IC mechanism that is both exact and fair and has positive quality.
2011.05365
Guanghao Ye
Sally Dong, Yin Tat Lee and Guanghao Ye
A Nearly-Linear Time Algorithm for Linear Programs with Small Treewidth: A Multiscale Representation of Robust Central Path
null
null
null
null
cs.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arising from structural graph theory, treewidth has become a focus of study in fixed-parameter tractable algorithms in various communities including combinatorics, integer-linear programming, and numerical analysis. Many NP-hard problems are known to be solvable in $\widetilde{O}(n \cdot 2^{O(\mathrm{tw})})$ time, where $\mathrm{tw}$ is the treewidth of the input graph. Analogously, many problems in P should be solvable in $\widetilde{O}(n \cdot \mathrm{tw}^{O(1)})$ time; however, due to the lack of appropriate tools, only a few such results are currently known. [Fom+18] conjectured this to hold as broadly as all linear programs; in our paper, we show this is true: Given a linear program of the form $\min_{Ax=b,\ell \leq x\leq u} c^{\top} x$, and a width-$\tau$ tree decomposition of a graph $G_A$ related to $A$, we show how to solve it in time $$\widetilde{O}(n \cdot \tau^2 \log (1/\varepsilon)),$$ where $n$ is the number of variables and $\varepsilon$ is the relative accuracy. Combined with recent techniques in vertex-capacitated flow [BGS21], this leads to an algorithm with $\widetilde{O}(n^{1+o(1)} \cdot \mathrm{tw}^2 \log (1/\varepsilon))$ run-time. Besides being the first of its kind, our algorithm has run-time nearly matching the fastest run-time for solving the sub-problem $Ax=b$ (under the assumption that no fast matrix multiplication is used). We obtain these results by combining recent techniques in interior-point methods (IPMs), sketching, and a novel representation of the solution under a multiscale basis similar to the wavelet basis.
[ { "created": "Tue, 10 Nov 2020 19:35:02 GMT", "version": "v1" }, { "created": "Fri, 9 Jul 2021 08:45:31 GMT", "version": "v2" }, { "created": "Wed, 13 Sep 2023 17:54:27 GMT", "version": "v3" } ]
2023-09-14
[ [ "Dong", "Sally", "" ], [ "Lee", "Yin Tat", "" ], [ "Ye", "Guanghao", "" ] ]
Arising from structural graph theory, treewidth has become a focus of study in fixed-parameter tractable algorithms in various communities including combinatorics, integer-linear programming, and numerical analysis. Many NP-hard problems are known to be solvable in $\widetilde{O}(n \cdot 2^{O(\mathrm{tw})})$ time, where $\mathrm{tw}$ is the treewidth of the input graph. Analogously, many problems in P should be solvable in $\widetilde{O}(n \cdot \mathrm{tw}^{O(1)})$ time; however, due to the lack of appropriate tools, only a few such results are currently known. [Fom+18] conjectured this to hold as broadly as all linear programs; in our paper, we show this is true: Given a linear program of the form $\min_{Ax=b,\ell \leq x\leq u} c^{\top} x$, and a width-$\tau$ tree decomposition of a graph $G_A$ related to $A$, we show how to solve it in time $$\widetilde{O}(n \cdot \tau^2 \log (1/\varepsilon)),$$ where $n$ is the number of variables and $\varepsilon$ is the relative accuracy. Combined with recent techniques in vertex-capacitated flow [BGS21], this leads to an algorithm with $\widetilde{O}(n^{1+o(1)} \cdot \mathrm{tw}^2 \log (1/\varepsilon))$ run-time. Besides being the first of its kind, our algorithm has run-time nearly matching the fastest run-time for solving the sub-problem $Ax=b$ (under the assumption that no fast matrix multiplication is used). We obtain these results by combining recent techniques in interior-point methods (IPMs), sketching, and a novel representation of the solution under a multiscale basis similar to the wavelet basis.
1108.5025
Saeideh Parsaeifard
saeedeh parsaeefard, Mihaela van der Schaar, Ahmad R. Sharafat
Robust Stackelberg game in communication systems
null
null
null
null
cs.IT cs.GT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies multi-user communication systems with two groups of users: leaders which possess system information, and followers which have no system information using the formulation of Stackelberg games. In such games, the leaders play and choose their actions based on their information about the system and the followers choose their actions myopically according to their observations of the aggregate impact of other users. However, obtaining the exact value of these parameters is not practical in communication systems. To study the effect of uncertainty and preserve the players' utilities in these conditions, we introduce a robust equilibrium for Stackelberg games. In this framework, the leaders' information and the followers' observations are uncertain parameters, and the leaders and the followers choose their actions by solving the worst-case robust optimizations. We show that the followers' uncertain parameters always increase the leaders' utilities and decrease the followers' utilities. Conversely, the leaders' uncertain information reduces the leaders' utilities and increases the followers' utilities. We illustrate our theoretical results with the numerical results obtained based on the power control games in the interference channels.
[ { "created": "Thu, 25 Aug 2011 07:16:34 GMT", "version": "v1" } ]
2011-08-26
[ [ "parsaeefard", "saeedeh", "" ], [ "van der Schaar", "Mihaela", "" ], [ "Sharafat", "Ahmad R.", "" ] ]
Using the formulation of Stackelberg games, this paper studies multi-user communication systems with two groups of users: leaders, which possess system information, and followers, which have no system information. In such games, the leaders choose their actions based on their information about the system, and the followers choose their actions myopically according to their observations of the aggregate impact of the other users. However, obtaining the exact values of these parameters is not practical in communication systems. To study the effect of uncertainty and preserve the players' utilities under these conditions, we introduce a robust equilibrium for Stackelberg games. In this framework, the leaders' information and the followers' observations are uncertain parameters, and the leaders and the followers choose their actions by solving worst-case robust optimizations. We show that the followers' uncertain parameters always increase the leaders' utilities and decrease the followers' utilities. Conversely, the leaders' uncertain information reduces the leaders' utilities and increases the followers' utilities. We illustrate our theoretical results with numerical results based on power control games in interference channels.
2010.07086
Muhammad Tahir
Abdul Wahab Muzaffar, Muhammad Tahir, Muhammad Waseem Anwar, Qaiser Chaudry, Shamaila Rasheed Mir, Yawar Rasheed
A Systematic Review of Online Exams Solutions in E-learning: Techniques, Tools and Global Adoption
41 pages, 7 figures, 13 tables
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
E-learning in higher education is exponentially increased during the past decade due to its inevitable benefits in critical situations like natural disasters, and pandemic. The reliable, fair, and seamless execution of online exams in E-learning is highly significant. Particularly, online exams are conducted on E-learning platforms without the physical presence of students and instructors at the same place. This poses several issues like integrity and security during online exams. To address such issues, researchers frequently proposed different techniques and tools. However, a study summarizing and analyzing latest developments, particularly in the area of online examination, is hard to find in the literature. In this article, an SLR for online examination is performed to select and analyze 53 studies published during the last five years. Subsequently, five leading online exams features targeted in the selected studies are identified and underlying development approaches for the implementation of online exams solutions are explored. Furthermore, 16 important techniques and 11 datasets are presented. In addition, 21 online exams tools proposed in the selected studies are identified. Additionally, 25 leading existing tools used in the selected studies are also presented. Finally, the participation of countries in online exam research is investigated. Key factors for the global adoption of online exams are identified and investigated. This facilitates the selection of right online exam system for a particular country on the basis of existing E-learning infrastructure and overall cost. To conclude, the findings of this article provide a solid platform for the researchers and practitioners of the domain to select appropriate features along with underlying development approaches, tools and techniques for the implementation of a particular online exams solution as per given requirements.
[ { "created": "Tue, 13 Oct 2020 14:45:56 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2020 19:47:26 GMT", "version": "v2" }, { "created": "Fri, 12 Feb 2021 23:18:46 GMT", "version": "v3" } ]
2021-02-16
[ [ "Muzaffar", "Abdul Wahab", "" ], [ "Tahir", "Muhammad", "" ], [ "Anwar", "Muhammad Waseem", "" ], [ "Chaudry", "Qaiser", "" ], [ "Mir", "Shamaila Rasheed", "" ], [ "Rasheed", "Yawar", "" ] ]
E-learning in higher education has increased exponentially during the past decade due to its undeniable benefits in critical situations such as natural disasters and pandemics. The reliable, fair, and seamless execution of online exams in E-learning is highly significant. In particular, online exams are conducted on E-learning platforms without the physical presence of students and instructors in the same place. This poses several issues, such as integrity and security, during online exams. To address such issues, researchers have frequently proposed different techniques and tools. However, a study summarizing and analyzing the latest developments, particularly in the area of online examination, is hard to find in the literature. In this article, a systematic literature review (SLR) of online examination is performed to select and analyze 53 studies published during the last five years. Subsequently, five leading online exam features targeted in the selected studies are identified, and the underlying development approaches for the implementation of online exam solutions are explored. Furthermore, 16 important techniques and 11 datasets are presented. In addition, 21 online exam tools proposed in the selected studies are identified. Additionally, 25 leading existing tools used in the selected studies are also presented. Finally, the participation of countries in online exam research is investigated. Key factors for the global adoption of online exams are identified and investigated. This facilitates the selection of the right online exam system for a particular country on the basis of the existing E-learning infrastructure and overall cost. To conclude, the findings of this article provide a solid platform for researchers and practitioners of the domain to select appropriate features, along with underlying development approaches, tools, and techniques, for the implementation of a particular online exam solution as per given requirements.
1006.0406
EPTCS
Yongcheng Wu (Nanjing University of Information Science and Technology)
Complete Multi-Representations of Sets in a Computable Measure Space
null
EPTCS 24, 2010, pp. 160-166
10.4204/EPTCS.24.20
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper, two multi-representations for the measurable sets in a computable measure space have been introduced, which prove to be topologically complete w.r.t. certain topological properties. In this contribution, we show them recursively complete w.r.t. computability of measure and set-theoretical operations.
[ { "created": "Wed, 2 Jun 2010 14:31:06 GMT", "version": "v1" } ]
2010-06-03
[ [ "Wu", "Yongcheng", "", "Nanjing University of Information Science and\n Technology" ] ]
In a recent paper, two multi-representations for the measurable sets in a computable measure space have been introduced, which prove to be topologically complete w.r.t. certain topological properties. In this contribution, we show them recursively complete w.r.t. computability of measure and set-theoretical operations.
2110.12490
Akash Pallath
Akash Pallath and Qiyang Zhang
Paperfetcher: A tool to automate handsearch for systematic reviews
null
null
10.1002/jrsm.1604
null
cs.IR stat.AP
http://creativecommons.org/licenses/by/4.0/
Handsearch is an important technique that contributes to thorough literature search in systematic reviews. Traditional handsearch requires reviewers to systematically browse through each issue of a curated list of field-specific journals and conference proceedings to find articles relevant to their review. This manual process is not only time-consuming, laborious, costly, and error-prone, but it also lacks replicability and cross-checking mechanisms. In an attempt to solve these problems, this paper presents a free and open-source Python package and an accompanying web-app, Paperfetcher, to automate handsearch for systematic reviews. With Paperfetcher's assistance, researchers can retrieve articles from designated journals within a specified time frame with just a few clicks. In addition to handsearch, this tool also incorporates snowballing in both directions. Paperfetcher allows researchers to download retrieved studies as a list of DOIs or as an RIS database to facilitate seamless import into citation management and systematic review screening software. To our knowledge, Paperfetcher is the first tool that automates handsearch with high usability and a multi-disciplinary focus.
[ { "created": "Sun, 24 Oct 2021 17:15:17 GMT", "version": "v1" }, { "created": "Wed, 27 Oct 2021 13:34:05 GMT", "version": "v2" }, { "created": "Fri, 7 Jan 2022 03:57:37 GMT", "version": "v3" } ]
2022-10-21
[ [ "Pallath", "Akash", "" ], [ "Zhang", "Qiyang", "" ] ]
Handsearch is an important technique that contributes to thorough literature search in systematic reviews. Traditional handsearch requires reviewers to systematically browse through each issue of a curated list of field-specific journals and conference proceedings to find articles relevant to their review. This manual process is not only time-consuming, laborious, costly, and error-prone, but it also lacks replicability and cross-checking mechanisms. In an attempt to solve these problems, this paper presents a free and open-source Python package and an accompanying web-app, Paperfetcher, to automate handsearch for systematic reviews. With Paperfetcher's assistance, researchers can retrieve articles from designated journals within a specified time frame with just a few clicks. In addition to handsearch, this tool also incorporates snowballing in both directions. Paperfetcher allows researchers to download retrieved studies as a list of DOIs or as an RIS database to facilitate seamless import into citation management and systematic review screening software. To our knowledge, Paperfetcher is the first tool that automates handsearch with high usability and a multi-disciplinary focus.
2205.01271
Yihan Wang
Yihan Wang, Muyang Li, Han Cai, Wei-Ming Chen, and Song Han
Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
11 pages
IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) 2022
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to the high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge. We reveal that HRNet's high-resolution branches are redundant for models at the low-computation region via our gradual shrinking experiments. Removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including Fusion Deconv Head and Large Kernel Convs. Fusion Deconv Head removes the redundancy in high-resolution branches, allowing scale-aware feature fusion with low overhead. Large Kernel Convs significantly improve the model's capacity and receptive field while maintaining a low computational cost. With only 25% computation increment, 7x7 kernels achieve +14.0 mAP better than 3x3 kernels on the CrowdPose dataset. On mobile platforms, LitePose reduces the latency by up to 5.0x without sacrificing performance, compared with prior state-of-the-art efficient pose estimation models, pushing the frontier of real-time multi-person pose estimation on edge. Our code and pre-trained models are released at https://github.com/mit-han-lab/litepose.
[ { "created": "Tue, 3 May 2022 02:08:04 GMT", "version": "v1" }, { "created": "Fri, 20 May 2022 09:11:46 GMT", "version": "v2" }, { "created": "Sun, 3 Jul 2022 16:24:59 GMT", "version": "v3" }, { "created": "Mon, 11 Jul 2022 16:17:22 GMT", "version": "v4" } ]
2022-07-12
[ [ "Wang", "Yihan", "" ], [ "Li", "Muyang", "" ], [ "Cai", "Han", "" ], [ "Chen", "Wei-Ming", "" ], [ "Han", "Song", "" ] ]
Pose estimation plays a critical role in human-centered vision applications. However, it is difficult to deploy state-of-the-art HRNet-based pose estimation models on resource-constrained edge devices due to the high computational cost (more than 150 GMACs per frame). In this paper, we study efficient architecture design for real-time multi-person pose estimation on edge. We reveal that HRNet's high-resolution branches are redundant for models at the low-computation region via our gradual shrinking experiments. Removing them improves both efficiency and performance. Inspired by this finding, we design LitePose, an efficient single-branch architecture for pose estimation, and introduce two simple approaches to enhance the capacity of LitePose, including Fusion Deconv Head and Large Kernel Convs. Fusion Deconv Head removes the redundancy in high-resolution branches, allowing scale-aware feature fusion with low overhead. Large Kernel Convs significantly improve the model's capacity and receptive field while maintaining a low computational cost. With only 25% computation increment, 7x7 kernels achieve +14.0 mAP better than 3x3 kernels on the CrowdPose dataset. On mobile platforms, LitePose reduces the latency by up to 5.0x without sacrificing performance, compared with prior state-of-the-art efficient pose estimation models, pushing the frontier of real-time multi-person pose estimation on edge. Our code and pre-trained models are released at https://github.com/mit-han-lab/litepose.
2209.07716
Tianhao Wang
Jiachen T. Wang, Saeed Mahloujifar, Shouda Wang, Ruoxi Jia, Prateek Mittal
Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
NeurIPS 2022
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Propose-Test-Release (PTR) is a differential privacy framework that works with local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as median or trimmed mean in a differentially private manner. While PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD where we need many adaptive robust queries is challenging. This is mainly due to the lack of Renyi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it when the target function has bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\eps, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants for byzantine robust training algorithms that use robust statistics for gradients aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that PTR-based private and robust training algorithm significantly improves the utility compared with the baseline.
[ { "created": "Fri, 16 Sep 2022 04:48:22 GMT", "version": "v1" } ]
2022-09-19
[ [ "Wang", "Jiachen T.", "" ], [ "Mahloujifar", "Saeed", "" ], [ "Wang", "Shouda", "" ], [ "Jia", "Ruoxi", "" ], [ "Mittal", "Prateek", "" ] ]
Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics such as the median or trimmed mean in a differentially private manner. While PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD, where we need many adaptive robust queries, is challenging. This is mainly due to the lack of a Renyi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it when the target function has bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\epsilon, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that the PTR-based private and robust training algorithm significantly improves utility compared with the baseline.
1509.01023
Ibrahim Adeyanju
Ibrahim Adeyanju
Generating Weather Forecast Texts with Case Based Reasoning
6 pages
International Journal of Computer Applications 45(10) (2012) 35-40
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several techniques have been used to generate weather forecast texts. In this paper, case based reasoning (CBR) is proposed for weather forecast text generation because similar weather conditions occur over time and should have similar forecast texts. CBR-METEO, a system for generating weather forecast texts was developed using a generic framework (jCOLIBRI) which provides modules for the standard components of the CBR architecture. The advantage in a CBR approach is that systems can be built in minimal time with far less human effort after initial consultation with experts. The approach depends heavily on the goodness of the retrieval and revision components of the CBR process. We evaluated CBRMETEO with NIST, an automated metric which has been shown to correlate well with human judgements for this domain. The system shows comparable performance with other NLG systems that perform the same task.
[ { "created": "Thu, 3 Sep 2015 10:21:16 GMT", "version": "v1" } ]
2015-09-04
[ [ "Adeyanju", "Ibrahim", "" ] ]
Several techniques have been used to generate weather forecast texts. In this paper, case-based reasoning (CBR) is proposed for weather forecast text generation because similar weather conditions occur over time and should have similar forecast texts. CBR-METEO, a system for generating weather forecast texts, was developed using a generic framework (jCOLIBRI) which provides modules for the standard components of the CBR architecture. The advantage of a CBR approach is that systems can be built in minimal time with far less human effort after initial consultation with experts. The approach depends heavily on the goodness of the retrieval and revision components of the CBR process. We evaluated CBR-METEO with NIST, an automated metric which has been shown to correlate well with human judgements for this domain. The system shows comparable performance with other NLG systems that perform the same task.
1403.1696
Giulio Coluccia
Giulio Coluccia, Aline Roumy and Enrico Magli
Exact Performance Analysis of the Oracle Receiver for Compressed Sensing Reconstruction
To be published in ICASSP 2014 proceedings
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A sparse or compressible signal can be recovered from a certain number of noisy random projections, smaller than what dictated by classic Shannon/Nyquist theory. In this paper, we derive the closed-form expression of the mean square error performance of the oracle receiver, knowing the sparsity pattern of the signal. With respect to existing bounds, our result is exact and does not depend on a particular realization of the sensing matrix. Moreover, our result holds irrespective of whether the noise affecting the measurements is white or correlated. Numerical results show a perfect match between equations and simulations, confirming the validity of the result.
[ { "created": "Fri, 7 Mar 2014 09:44:26 GMT", "version": "v1" } ]
2014-03-10
[ [ "Coluccia", "Giulio", "" ], [ "Roumy", "Aline", "" ], [ "Magli", "Enrico", "" ] ]
A sparse or compressible signal can be recovered from a certain number of noisy random projections, smaller than what is dictated by classic Shannon/Nyquist theory. In this paper, we derive the closed-form expression of the mean square error performance of the oracle receiver, which knows the sparsity pattern of the signal. With respect to existing bounds, our result is exact and does not depend on a particular realization of the sensing matrix. Moreover, our result holds irrespective of whether the noise affecting the measurements is white or correlated. Numerical results show a perfect match between equations and simulations, confirming the validity of the result.
1602.04294
Cristian-Ioan Vasile
Cristian-Ioan Vasile, Derya Aksaray, Calin Belta
Time Window Temporal Logic
null
null
null
null
cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces time window temporal logic (TWTL), a rich expressivity language for describing various time bounded specifications. In particular, the syntax and semantics of TWTL enable the compact representation of serial tasks, which are typically seen in robotics and control applications. This paper also discusses the relaxation of TWTL formulae with respect to deadlines of tasks. Efficient automata-based frameworks to solve synthesis, verification and learning problems are also presented. The key ingredient to the presented solution is an algorithm to translate a TWTL formula to an annotated finite state automaton that encodes all possible temporal relaxations of the specification. Case studies illustrating the expressivity of the logic and the proposed algorithms are included.
[ { "created": "Sat, 13 Feb 2016 07:10:59 GMT", "version": "v1" } ]
2016-02-16
[ [ "Vasile", "Cristian-Ioan", "" ], [ "Aksaray", "Derya", "" ], [ "Belta", "Calin", "" ] ]
This paper introduces time window temporal logic (TWTL), a rich expressivity language for describing various time bounded specifications. In particular, the syntax and semantics of TWTL enable the compact representation of serial tasks, which are typically seen in robotics and control applications. This paper also discusses the relaxation of TWTL formulae with respect to deadlines of tasks. Efficient automata-based frameworks to solve synthesis, verification and learning problems are also presented. The key ingredient to the presented solution is an algorithm to translate a TWTL formula to an annotated finite state automaton that encodes all possible temporal relaxations of the specification. Case studies illustrating the expressivity of the logic and the proposed algorithms are included.
2303.09779
Daehan Kim
Daehan Kim, Minseok Seo, Kwanyong Park, Inkyu Shin, Sanghyun Woo, In-So Kweon, Dong-Geol Choi
Bidirectional Domain Mixup for Domain Adaptive Semantic Segmentation
10 pages, 3 figures, Accepted on AAAI 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mixup provides interpolated training samples and allows the model to obtain smoother decision boundaries for better generalization. The idea can be naturally applied to the domain adaptation task, where we can mix the source and target samples to obtain domain-mixed samples for better adaptation. However, the extension of the idea from classification to segmentation (i.e., structured output) is nontrivial. This paper systematically studies the impact of mixup under the domain adaptaive semantic segmentation task and presents a simple yet effective mixup strategy called Bidirectional Domain Mixup (BDM). In specific, we achieve domain mixup in two-step: cut and paste. Given the warm-up model trained from any adaptation techniques, we forward the source and target samples and perform a simple threshold-based cut out of the unconfident regions (cut). After then, we fill-in the dropped regions with the other domain region patches (paste). In doing so, we jointly consider class distribution, spatial structure, and pseudo label confidence. Based on our analysis, we found that BDM leaves domain transferable regions by cutting, balances the dataset-level class distribution while preserving natural scene context by pasting. We coupled our proposal with various state-of-the-art adaptation models and observe significant improvement consistently. We also provide extensive ablation experiments to empirically verify our main components of the framework. Visit our project page with the code at https://sites.google.com/view/bidirectional-domain-mixup
[ { "created": "Fri, 17 Mar 2023 05:22:44 GMT", "version": "v1" } ]
2023-03-20
[ [ "Kim", "Daehan", "" ], [ "Seo", "Minseok", "" ], [ "Park", "Kwanyong", "" ], [ "Shin", "Inkyu", "" ], [ "Woo", "Sanghyun", "" ], [ "Kweon", "In-So", "" ], [ "Choi", "Dong-Geol", "" ] ]
Mixup provides interpolated training samples and allows the model to obtain smoother decision boundaries for better generalization. The idea can be naturally applied to the domain adaptation task, where we can mix the source and target samples to obtain domain-mixed samples for better adaptation. However, the extension of the idea from classification to segmentation (i.e., structured output) is nontrivial. This paper systematically studies the impact of mixup under the domain adaptive semantic segmentation task and presents a simple yet effective mixup strategy called Bidirectional Domain Mixup (BDM). Specifically, we achieve domain mixup in two steps: cut and paste. Given the warm-up model trained with any adaptation technique, we forward the source and target samples and perform a simple threshold-based cut-out of the unconfident regions (cut). Then, we fill in the dropped regions with region patches from the other domain (paste). In doing so, we jointly consider class distribution, spatial structure, and pseudo-label confidence. Based on our analysis, we found that BDM leaves domain-transferable regions by cutting, and balances the dataset-level class distribution while preserving natural scene context by pasting. We coupled our proposal with various state-of-the-art adaptation models and consistently observe significant improvements. We also provide extensive ablation experiments to empirically verify the main components of our framework. Visit our project page with the code at https://sites.google.com/view/bidirectional-domain-mixup
1707.07146
Ying Cui
Sian Jin, Ying Cui, Hui Liu, Giuseppe Caire
Structural Properties of Uncoded Placement Optimization for Coded Delivery
34 pages, submitted to IEEE Trans. Inform. Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A centralized coded caching scheme has been proposed by Maddah-Ali and Niesen to reduce the worst-case load of a network consisting of a server with access to N files and connected through a shared link to K users, each equipped with a cache of size M. However, this centralized coded caching scheme is not able to take advantage of a non-uniform, possibly very skewed, file popularity distribution. In this work, we consider the same network setting but aim to reduce the average load under an arbitrary (known) file popularity distribution. First, we consider a class of centralized coded caching schemes utilizing general uncoded placement and a specific coded delivery strategy, which are specified by a general file partition parameter. Then, we formulate the coded caching design optimization problem over the considered class of schemes with 2^K2^N variables to minimize the average load by optimizing the file partition parameter under an arbitrary file popularity. Furthermore, we show that the optimization problem is convex, and the resulting optimal solution generally improves upon known schemes. Next, we analyze structural properties of the optimization problem to obtain design insights and reduce the complexity. Specifically, we obtain an equivalent linear optimization problem with (K+1)N variables under an arbitrary file popularity and an equivalent linear optimization problem with K+1 variables under the uniform file popularity. Under the uniform file popularity, we also obtain the closed form optimal solution, which corresponds to Maddah-Ali-Niesen's centralized coded caching scheme. Finally, we present an information-theoretic converse bound on the average load under an arbitrary file popularity.
[ { "created": "Sat, 22 Jul 2017 11:59:15 GMT", "version": "v1" } ]
2017-07-25
[ [ "Jin", "Sian", "" ], [ "Cui", "Ying", "" ], [ "Liu", "Hui", "" ], [ "Caire", "Giuseppe", "" ] ]
A centralized coded caching scheme has been proposed by Maddah-Ali and Niesen to reduce the worst-case load of a network consisting of a server with access to N files and connected through a shared link to K users, each equipped with a cache of size M. However, this centralized coded caching scheme is not able to take advantage of a non-uniform, possibly very skewed, file popularity distribution. In this work, we consider the same network setting but aim to reduce the average load under an arbitrary (known) file popularity distribution. First, we consider a class of centralized coded caching schemes utilizing general uncoded placement and a specific coded delivery strategy, which are specified by a general file partition parameter. Then, we formulate the coded caching design optimization problem over the considered class of schemes with 2^K N variables to minimize the average load by optimizing the file partition parameter under an arbitrary file popularity. Furthermore, we show that the optimization problem is convex, and the resulting optimal solution generally improves upon known schemes. Next, we analyze structural properties of the optimization problem to obtain design insights and reduce the complexity. Specifically, we obtain an equivalent linear optimization problem with (K+1)N variables under an arbitrary file popularity and an equivalent linear optimization problem with K+1 variables under the uniform file popularity. Under the uniform file popularity, we also obtain the closed-form optimal solution, which corresponds to Maddah-Ali and Niesen's centralized coded caching scheme. Finally, we present an information-theoretic converse bound on the average load under an arbitrary file popularity.
1903.02516
Mauricio Matamoros
Mauricio Matamoros and Viktor Seib and Dietrich Paulus
Trends, Challenges and Adopted Strategies in RoboCup@Home
18 pages, 7 figures, 3 tables
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scientific competitions are crucial in the field of service robotics. They foster knowledge exchange and allow teams to test their research in unstandardized scenarios and compare results. Such is the case of RoboCup@Home. However, keeping track of all the technologies and solution approaches used by teams to solve the tests can be a challenge in itself. Moreover, after eleven years of competitions, it's easy to delve too much into the field, losing perspective and forgetting about the user's needs and long-term goals. In this paper, we aim to tackle these problems by presenting a summary of the trending solutions and approaches used in RoboCup@Home, and discussing the attained achievements and challenges to overcome in relation to the progress required to fulfill the long-term goal of the league. Hence, considering the current capabilities of the robots and their limitations, we propose a set of milestones to address in upcoming competitions. With this work we lay the foundations towards the creation of roadmaps that can help to direct efforts in testing and benchmarking in robotics competitions.
[ { "created": "Wed, 6 Mar 2019 17:49:34 GMT", "version": "v1" } ]
2019-03-07
[ [ "Matamoros", "Mauricio", "" ], [ "Seib", "Viktor", "" ], [ "Paulus", "Dietrich", "" ] ]
Scientific competitions are crucial in the field of service robotics. They foster knowledge exchange and allow teams to test their research in unstandardized scenarios and compare results. Such is the case of RoboCup@Home. However, keeping track of all the technologies and solution approaches used by teams to solve the tests can be a challenge in itself. Moreover, after eleven years of competitions, it's easy to delve too much into the field, losing perspective and forgetting about the user's needs and long-term goals. In this paper, we aim to tackle these problems by presenting a summary of the trending solutions and approaches used in RoboCup@Home, and discussing the attained achievements and challenges to overcome in relation to the progress required to fulfill the long-term goal of the league. Hence, considering the current capabilities of the robots and their limitations, we propose a set of milestones to address in upcoming competitions. With this work we lay the foundations towards the creation of roadmaps that can help to direct efforts in testing and benchmarking in robotics competitions.
2309.05184
Xihang Yu
Xihang Yu, Heng Yang
SIM-Sync: From Certifiably Optimal Synchronization over the 3D Similarity Group to Scene Reconstruction with Learned Depth
28 pages
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents SIM-Sync, a certifiably optimal algorithm that estimates camera trajectory and 3D scene structure directly from multiview image keypoints. SIM-Sync fills the gap between pose graph optimization and bundle adjustment; the former admits efficient global optimization but requires relative pose measurements and the latter directly consumes image keypoints but is difficult to optimize globally (due to camera projective geometry). The bridge to this gap is a pretrained depth prediction network. Given a graph with nodes representing monocular images taken at unknown camera poses and edges containing pairwise image keypoint correspondences, SIM-Sync first uses a pretrained depth prediction network to lift the 2D keypoints into 3D scaled point clouds, where the scaling of the per-image point cloud is unknown due to the scale ambiguity in monocular depth prediction. SIM-Sync then seeks to synchronize jointly the unknown camera poses and scaling factors (i.e., over the 3D similarity group). The SIM-Sync formulation, despite being nonconvex, allows designing an efficient certifiably optimal solver that is almost identical to the SE-Sync algorithm. We demonstrate the tightness, robustness, and practical usefulness of SIM-Sync in both simulated and real experiments. In simulation, we show (i) SIM-Sync compares favorably with SE-Sync in scale-free synchronization, and (ii) SIM-Sync can be used together with robust estimators to tolerate a high amount of outliers. In real experiments, we show (a) SIM-Sync achieves similar performance as Ceres on bundle adjustment datasets, and (b) SIM-Sync performs on par with ORB-SLAM3 on the TUM dataset with zero-shot depth prediction.
[ { "created": "Mon, 11 Sep 2023 01:02:20 GMT", "version": "v1" } ]
2023-09-12
[ [ "Yu", "Xihang", "" ], [ "Yang", "Heng", "" ] ]
This paper presents SIM-Sync, a certifiably optimal algorithm that estimates camera trajectory and 3D scene structure directly from multiview image keypoints. SIM-Sync fills the gap between pose graph optimization and bundle adjustment; the former admits efficient global optimization but requires relative pose measurements and the latter directly consumes image keypoints but is difficult to optimize globally (due to camera projective geometry). The bridge to this gap is a pretrained depth prediction network. Given a graph with nodes representing monocular images taken at unknown camera poses and edges containing pairwise image keypoint correspondences, SIM-Sync first uses a pretrained depth prediction network to lift the 2D keypoints into 3D scaled point clouds, where the scaling of the per-image point cloud is unknown due to the scale ambiguity in monocular depth prediction. SIM-Sync then seeks to synchronize jointly the unknown camera poses and scaling factors (i.e., over the 3D similarity group). The SIM-Sync formulation, despite being nonconvex, allows designing an efficient certifiably optimal solver that is almost identical to the SE-Sync algorithm. We demonstrate the tightness, robustness, and practical usefulness of SIM-Sync in both simulated and real experiments. In simulation, we show (i) SIM-Sync compares favorably with SE-Sync in scale-free synchronization, and (ii) SIM-Sync can be used together with robust estimators to tolerate a high amount of outliers. In real experiments, we show (a) SIM-Sync achieves similar performance as Ceres on bundle adjustment datasets, and (b) SIM-Sync performs on par with ORB-SLAM3 on the TUM dataset with zero-shot depth prediction.
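Aside (not part of the dataset record above): the lifting step the SIM-Sync abstract describes, turning a 2D keypoint plus predicted depth into a scaled 3D point, is standard pinhole back-projection. A minimal sketch; the intrinsics layout (fx, fy, cx, cy) is an illustrative assumption, and the per-image scale s is the unknown the paper estimates jointly with the poses:

```python
def backproject(u, v, depth, fx, fy, cx, cy, s=1.0):
    """Lift pixel (u, v) with predicted depth d to the 3D point s * d * K^{-1} [u, v, 1]^T.

    s models the per-image scale ambiguity of monocular depth prediction;
    SIM-Sync synchronizes these scales together with the camera poses.
    """
    x = s * depth * (u - cx) / fx
    y = s * depth * (v - cy) / fy
    z = s * depth
    return (x, y, z)
```

A keypoint at the principal point back-projects straight down the optical axis, which is a quick sanity check on the formula.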
2001.08883
Tugba Erpek
Yalin E. Sagduyu, Yi Shi, Tugba Erpek, William Headley, Bryse Flowers, George Stantchev, Zhuo Lu
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions
null
null
null
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless systems are vulnerable to various attacks such as jamming and eavesdropping due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture by hand-crafted features and models. This article discusses motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified along with a roadmap to foster research efforts in bridging ML and wireless security.
[ { "created": "Fri, 24 Jan 2020 05:07:39 GMT", "version": "v1" } ]
2020-01-27
[ [ "Sagduyu", "Yalin E.", "" ], [ "Shi", "Yi", "" ], [ "Erpek", "Tugba", "" ], [ "Headley", "William", "" ], [ "Flowers", "Bryse", "" ], [ "Stantchev", "George", "" ], [ "Lu", "Zhuo", "" ] ]
Wireless systems are vulnerable to various attacks such as jamming and eavesdropping due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture by hand-crafted features and models. This article discusses motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified along with a roadmap to foster research efforts in bridging ML and wireless security.
2105.09103
Guo-Wei Yang
Guo-Wei Yang, Wen-Yang Zhou, Hao-Yang Peng, Dun Liang, Tai-Jiang Mu, Shi-Min Hu
Recursive-NeRF: An Efficient and Dynamically Growing NeRF
11 pages, 12 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
View synthesis methods using implicit continuous shape representations learned from a set of images, such as the Neural Radiance Field (NeRF) method, have gained increasing attention due to their high quality imagery and scalability to high resolution. However, the heavy computation required by its volumetric approach prevents NeRF from being useful in practice; minutes are taken to render a single image of a few megapixels. An image of a scene can be rendered in a level-of-detail manner, so we posit that a complicated region of the scene should be represented by a large neural network while a small neural network is capable of encoding a simple region, enabling a balance between efficiency and quality. Recursive-NeRF is our embodiment of this idea, providing an efficient and adaptive rendering and training approach for NeRF. The core of Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level. Only query coordinates with high uncertainties are forwarded to the next level's bigger neural network with a more powerful representational capability. The final rendered image is a composition of results from neural networks of all levels. Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality. The code will be available at https://github.com/Gword/Recursive-NeRF.
[ { "created": "Wed, 19 May 2021 12:51:54 GMT", "version": "v1" } ]
2021-05-20
[ [ "Yang", "Guo-Wei", "" ], [ "Zhou", "Wen-Yang", "" ], [ "Peng", "Hao-Yang", "" ], [ "Liang", "Dun", "" ], [ "Mu", "Tai-Jiang", "" ], [ "Hu", "Shi-Min", "" ] ]
View synthesis methods using implicit continuous shape representations learned from a set of images, such as the Neural Radiance Field (NeRF) method, have gained increasing attention due to their high quality imagery and scalability to high resolution. However, the heavy computation required by its volumetric approach prevents NeRF from being useful in practice; minutes are taken to render a single image of a few megapixels. An image of a scene can be rendered in a level-of-detail manner, so we posit that a complicated region of the scene should be represented by a large neural network while a small neural network is capable of encoding a simple region, enabling a balance between efficiency and quality. Recursive-NeRF is our embodiment of this idea, providing an efficient and adaptive rendering and training approach for NeRF. The core of Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level. Only query coordinates with high uncertainties are forwarded to the next level's bigger neural network with a more powerful representational capability. The final rendered image is a composition of results from neural networks of all levels. Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality. The code will be available at https://github.com/Gword/Recursive-NeRF.
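Aside (not part of the dataset record above): the gating idea the Recursive-NeRF abstract describes, accepting confident predictions at the current level and forwarding only high-uncertainty queries to a larger network, can be sketched generically. The `predict` callable and the threshold below are illustrative stand-ins, not the paper's actual interfaces:

```python
def route_queries(queries, predict, threshold):
    """Split queries into accepted answers and ones forwarded to the next level.

    predict(q) -> (value, uncertainty). Low-uncertainty predictions are
    kept at this level; the rest are sent on to a more powerful network.
    """
    accepted, forwarded = {}, []
    for q in queries:
        value, uncertainty = predict(q)
        if uncertainty <= threshold:
            accepted[q] = value
        else:
            forwarded.append(q)
    return accepted, forwarded
```

The full renderer would apply this recursively, composing accepted outputs from every level into the final image.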
2406.04511
Shah Ariful Hoque Chowdhury
F. A. Mamun, S. A. H. Chowdhury, J. E. Giti, H. Sarker
Classification of Non-native Handwritten Characters Using Convolutional Neural Network
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The use of convolutional neural networks (CNNs) has accelerated the progress of handwritten character classification/recognition. Handwritten character recognition (HCR) has found applications in various domains, such as traffic signal detection, language translation, and document information extraction. However, the widespread use of existing HCR technology is yet to be seen as it does not provide reliable character recognition with outstanding accuracy. One of the reasons for unreliable HCR is that existing HCR methods do not take the handwriting styles of non-native writers into account. Hence, further improvement is needed to ensure the reliability and extensive deployment of character recognition technologies for critical tasks. In this work, the classification of English characters written by non-native users is performed by proposing a custom-tailored CNN model. We train this CNN with a new dataset called the handwritten isolated English character (HIEC) dataset. This dataset consists of 16,496 images collected from 260 persons. This paper also includes an ablation study of our CNN by adjusting hyperparameters to identify the best model for the HIEC dataset. The proposed model with five convolutional layers and one hidden layer outperforms state-of-the-art models in terms of character recognition accuracy and achieves an accuracy of $\mathbf{97.04}$%. Compared with the second-best model, the relative improvement of our model in terms of classification accuracy is $\mathbf{4.38}$%.
[ { "created": "Thu, 6 Jun 2024 21:08:07 GMT", "version": "v1" } ]
2024-06-10
[ [ "Mamun", "F. A.", "" ], [ "Chowdhury", "S. A. H.", "" ], [ "Giti", "J. E.", "" ], [ "Sarker", "H.", "" ] ]
The use of convolutional neural networks (CNNs) has accelerated the progress of handwritten character classification/recognition. Handwritten character recognition (HCR) has found applications in various domains, such as traffic signal detection, language translation, and document information extraction. However, the widespread use of existing HCR technology is yet to be seen as it does not provide reliable character recognition with outstanding accuracy. One of the reasons for unreliable HCR is that existing HCR methods do not take the handwriting styles of non-native writers into account. Hence, further improvement is needed to ensure the reliability and extensive deployment of character recognition technologies for critical tasks. In this work, the classification of English characters written by non-native users is performed by proposing a custom-tailored CNN model. We train this CNN with a new dataset called the handwritten isolated English character (HIEC) dataset. This dataset consists of 16,496 images collected from 260 persons. This paper also includes an ablation study of our CNN by adjusting hyperparameters to identify the best model for the HIEC dataset. The proposed model with five convolutional layers and one hidden layer outperforms state-of-the-art models in terms of character recognition accuracy and achieves an accuracy of $\mathbf{97.04}$%. Compared with the second-best model, the relative improvement of our model in terms of classification accuracy is $\mathbf{4.38}$%.
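Aside (not part of the dataset record above): the abstract specifies five convolutional layers and one hidden layer but not their dimensions. Sizing such a stack follows the usual output-size formula out = (in + 2*pad - kernel) // stride + 1; a sketch with hypothetical layer parameters:

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution/pooling output-size formula for one spatial dim.
    return (size + 2 * pad - kernel) // stride + 1

def stack_out(size, layers):
    # layers: list of (kernel, stride, pad) tuples applied in order.
    for kernel, stride, pad in layers:
        size = conv_out(size, kernel, stride, pad)
    return size
```

For instance, a 3x3 "same" convolution (pad 1, stride 1) preserves the spatial size, while a stride-2 pooling layer halves it.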
2008.06967
Yu Feng
Yu Feng, Boyuan Tian, Tiancheng Xu, Paul Whatmough, Yuhao Zhu
Mesorasi: Architecture Support for Point Cloud Analytics via Delayed-Aggregation
null
Proceedings of the 53nd (2020) Annual IEEE/ACM International Symposium on Microarchitecture
null
null
cs.CV cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud analytics is poised to become a key workload on battery-powered embedded and mobile platforms in a wide range of emerging application domains, such as autonomous driving, robotics, and augmented reality, where efficiency is paramount. This paper proposes Mesorasi, an algorithm-architecture co-designed system that simultaneously improves the performance and energy efficiency of point cloud analytics while retaining its accuracy. Our extensive characterizations of state-of-the-art point cloud algorithms show that, while structurally reminiscent of convolutional neural networks (CNNs), point cloud algorithms exhibit inherent compute and memory inefficiencies due to the unique characteristics of point cloud data. We propose delayed-aggregation, a new algorithmic primitive for building efficient point cloud algorithms. Delayed-aggregation hides the performance bottlenecks and reduces the compute and memory redundancies by exploiting the approximately distributive property of key operations in point cloud algorithms. Delayed-aggregation lets point cloud algorithms achieve 1.6x speedup and 51.1% energy reduction on a mobile GPU while retaining the accuracy (-0.9% loss to 1.2% gains). To maximize the algorithmic benefits, we propose minor extensions to contemporary CNN accelerators, which can be integrated into a mobile Systems-on-a-Chip (SoC) without modifying other SoC components. With additional hardware support, Mesorasi achieves up to 3.6x speedup.
[ { "created": "Sun, 16 Aug 2020 18:11:19 GMT", "version": "v1" } ]
2020-08-18
[ [ "Feng", "Yu", "" ], [ "Tian", "Boyuan", "" ], [ "Xu", "Tiancheng", "" ], [ "Whatmough", "Paul", "" ], [ "Zhu", "Yuhao", "" ] ]
Point cloud analytics is poised to become a key workload on battery-powered embedded and mobile platforms in a wide range of emerging application domains, such as autonomous driving, robotics, and augmented reality, where efficiency is paramount. This paper proposes Mesorasi, an algorithm-architecture co-designed system that simultaneously improves the performance and energy efficiency of point cloud analytics while retaining its accuracy. Our extensive characterizations of state-of-the-art point cloud algorithms show that, while structurally reminiscent of convolutional neural networks (CNNs), point cloud algorithms exhibit inherent compute and memory inefficiencies due to the unique characteristics of point cloud data. We propose delayed-aggregation, a new algorithmic primitive for building efficient point cloud algorithms. Delayed-aggregation hides the performance bottlenecks and reduces the compute and memory redundancies by exploiting the approximately distributive property of key operations in point cloud algorithms. Delayed-aggregation lets point cloud algorithms achieve 1.6x speedup and 51.1% energy reduction on a mobile GPU while retaining the accuracy (-0.9% loss to 1.2% gains). To maximize the algorithmic benefits, we propose minor extensions to contemporary CNN accelerators, which can be integrated into a mobile Systems-on-a-Chip (SoC) without modifying other SoC components. With additional hardware support, Mesorasi achieves up to 3.6x speedup.
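Aside (not part of the dataset record above): the "approximately distributive property" the Mesorasi abstract exploits is the exchange of aggregation with the per-point computation. For an elementwise monotone non-decreasing function and max-aggregation the exchange is exact, which a toy pure-Python check illustrates; the real networks use MLPs, where it holds only approximately:

```python
def aggregate_then_apply(points, f):
    # Cheap path: max-aggregate each coordinate first, then apply f once.
    pooled = [max(coord) for coord in zip(*points)]
    return [f(x) for x in pooled]

def apply_then_aggregate(points, f):
    # Expensive path: apply f to every point, then max-aggregate.
    mapped = [[f(x) for x in p] for p in points]
    return [max(coord) for coord in zip(*mapped)]
```

When the two paths agree, the per-point work can be delayed until after aggregation, which is exactly the redundancy-removal opportunity delayed-aggregation targets.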
2404.10034
Shakeeb Murtaza
Shakeeb Murtaza, Soufiane Belharbi, Marco Pedersoli, Eric Granger
A Realistic Protocol for Evaluation of Weakly Supervised Object Localization
13 pages, 5 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Weakly Supervised Object Localization (WSOL) allows training deep learning models for classification and localization (LOC) using only global class-level labels. The absence of bounding box (bbox) supervision during training raises challenges in the literature for hyper-parameter tuning, model selection, and evaluation. WSOL methods rely on a validation set with bbox annotations for model selection, and a test set with bbox annotations for threshold estimation for producing bboxes from localization maps. This approach, however, is not aligned with the WSOL setting as these annotations are typically unavailable in real-world scenarios. Our initial empirical analysis shows a significant decline in LOC performance when model selection and threshold estimation rely solely on class labels and the image itself, respectively, compared to using manual bbox annotations. This highlights the importance of incorporating bbox labels for optimal model performance. In this paper, a new WSOL evaluation protocol is proposed that provides LOC information without the need for manual bbox annotations. In particular, we generated noisy pseudo-boxes from pretrained off-the-shelf region proposal methods such as Selective Search, CLIP, and RPN for model selection. These bboxes are also employed to estimate the threshold from LOC maps, circumventing the need for test-set bbox annotations. Our experiments with several WSOL methods on ILSVRC and CUB datasets show that using the proposed pseudo-bboxes for validation facilitates the model selection and threshold estimation, with LOC performance comparable to those selected using GT bboxes on the validation set and threshold estimation on the test set. It also outperforms models selected using class-level labels, and then dynamically thresholded based solely on LOC maps.
[ { "created": "Mon, 15 Apr 2024 17:25:21 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 00:39:01 GMT", "version": "v2" } ]
2024-08-13
[ [ "Murtaza", "Shakeeb", "" ], [ "Belharbi", "Soufiane", "" ], [ "Pedersoli", "Marco", "" ], [ "Granger", "Eric", "" ] ]
Weakly Supervised Object Localization (WSOL) allows training deep learning models for classification and localization (LOC) using only global class-level labels. The absence of bounding box (bbox) supervision during training raises challenges in the literature for hyper-parameter tuning, model selection, and evaluation. WSOL methods rely on a validation set with bbox annotations for model selection, and a test set with bbox annotations for threshold estimation for producing bboxes from localization maps. This approach, however, is not aligned with the WSOL setting as these annotations are typically unavailable in real-world scenarios. Our initial empirical analysis shows a significant decline in LOC performance when model selection and threshold estimation rely solely on class labels and the image itself, respectively, compared to using manual bbox annotations. This highlights the importance of incorporating bbox labels for optimal model performance. In this paper, a new WSOL evaluation protocol is proposed that provides LOC information without the need for manual bbox annotations. In particular, we generated noisy pseudo-boxes from pretrained off-the-shelf region proposal methods such as Selective Search, CLIP, and RPN for model selection. These bboxes are also employed to estimate the threshold from LOC maps, circumventing the need for test-set bbox annotations. Our experiments with several WSOL methods on ILSVRC and CUB datasets show that using the proposed pseudo-bboxes for validation facilitates the model selection and threshold estimation, with LOC performance comparable to those selected using GT bboxes on the validation set and threshold estimation on the test set. It also outperforms models selected using class-level labels, and then dynamically thresholded based solely on LOC maps.
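Aside (not part of the dataset record above): the threshold-estimation step the WSOL abstract relies on amounts to searching candidate thresholds and scoring each by box overlap with the (pseudo-)boxes. A sketch; the (x0, y0, x1, y1) box layout, the `score_boxes` callable, and the candidate grid are illustrative assumptions, not the paper's actual API:

```python
def iou(a, b):
    # Boxes as (x0, y0, x1, y1); returns intersection over union.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def pick_threshold(score_boxes, pseudo_boxes, candidates):
    """score_boxes(t) -> one predicted box per image at threshold t.

    Returns the candidate whose predicted boxes best match the
    pseudo-boxes on average, replacing test-set GT annotations.
    """
    def mean_iou(t):
        preds = score_boxes(t)
        return sum(iou(p, g) for p, g in zip(preds, pseudo_boxes)) / len(pseudo_boxes)
    return max(candidates, key=mean_iou)
```

Because only the argmax over candidates matters, the noise in the pseudo-boxes is tolerable as long as it does not reorder the candidates, which is the intuition behind the protocol.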
1907.04041
Benjamin Kiessling
Benjamin Kiessling, Daniel St\"okl Ben Ezra, Matthew Thomas Miller
BADAM: A Public Dataset for Baseline Detection in Arabic-script Manuscripts
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The application of handwritten text recognition to historical works is highly dependent on accurate text line retrieval. A number of systems utilizing a robust baseline detection paradigm have emerged recently but the advancement of layout analysis methods for challenging scripts is held back by the lack of well-established datasets including works in non-Latin scripts. We present a dataset of 400 annotated document images from different domains and time periods. A short elaboration on the particular challenges posed by handwriting in Arabic script for layout analysis and subsequent processing steps is given. Lastly, we propose a method based on a fully convolutional encoder-decoder network to extract arbitrarily shaped text line images from manuscripts.
[ { "created": "Tue, 9 Jul 2019 08:30:56 GMT", "version": "v1" } ]
2019-07-10
[ [ "Kiessling", "Benjamin", "" ], [ "Ezra", "Daniel Stökl Ben", "" ], [ "Miller", "Matthew Thomas", "" ] ]
The application of handwritten text recognition to historical works is highly dependent on accurate text line retrieval. A number of systems utilizing a robust baseline detection paradigm have emerged recently but the advancement of layout analysis methods for challenging scripts is held back by the lack of well-established datasets including works in non-Latin scripts. We present a dataset of 400 annotated document images from different domains and time periods. A short elaboration on the particular challenges posed by handwriting in Arabic script for layout analysis and subsequent processing steps is given. Lastly, we propose a method based on a fully convolutional encoder-decoder network to extract arbitrarily shaped text line images from manuscripts.
2205.10635
Shreshth Tuli
Shreshth Tuli and Giuliano Casale and Nicholas R. Jennings
SplitPlace: AI Augmented Splitting and Placement of Large-Scale Neural Networks in Mobile Edge Environments
Accepted in IEEE Transactions on Mobile Computing
null
null
null
cs.DC cs.AI cs.PF
http://creativecommons.org/licenses/by/4.0/
In recent years, deep learning models have become ubiquitous in industry and academia alike. Deep neural networks can solve some of the most complex pattern-recognition problems today, but come with the price of massive compute and memory requirements. This makes the problem of deploying such large-scale neural networks challenging in resource-constrained mobile edge computing platforms, specifically in mission-critical domains like surveillance and healthcare. To solve this, a promising solution is to split resource-hungry neural networks into lightweight disjoint smaller components for pipelined distributed processing. At present, there are two main approaches to do this: semantic and layer-wise splitting. The former partitions a neural network into parallel disjoint models that produce a part of the result, whereas the latter partitions into sequential models that produce intermediate results. However, there is no intelligent algorithm that decides which splitting strategy to use and places such modular splits to edge nodes for optimal performance. To combat this, this work proposes a novel AI-driven online policy, SplitPlace, that uses Multi-Armed-Bandits to intelligently decide between layer and semantic splitting strategies based on the input task's service deadline demands. SplitPlace places such neural network split fragments on mobile edge devices using decision-aware reinforcement learning for efficient and scalable computing. Moreover, SplitPlace fine-tunes its placement engine to adapt to volatile environments. Our experiments on physical mobile-edge environments with real-world workloads show that SplitPlace can significantly improve the state-of-the-art in terms of average response time, deadline violation rate, inference accuracy, and total reward by up to 46, 69, 3 and 12 percent respectively.
[ { "created": "Sat, 21 May 2022 16:24:47 GMT", "version": "v1" } ]
2022-05-24
[ [ "Tuli", "Shreshth", "" ], [ "Casale", "Giuliano", "" ], [ "Jennings", "Nicholas R.", "" ] ]
In recent years, deep learning models have become ubiquitous in industry and academia alike. Deep neural networks can solve some of the most complex pattern-recognition problems today, but come with the price of massive compute and memory requirements. This makes the problem of deploying such large-scale neural networks challenging in resource-constrained mobile edge computing platforms, specifically in mission-critical domains like surveillance and healthcare. To solve this, a promising solution is to split resource-hungry neural networks into lightweight disjoint smaller components for pipelined distributed processing. At present, there are two main approaches to do this: semantic and layer-wise splitting. The former partitions a neural network into parallel disjoint models that produce a part of the result, whereas the latter partitions into sequential models that produce intermediate results. However, there is no intelligent algorithm that decides which splitting strategy to use and places such modular splits to edge nodes for optimal performance. To combat this, this work proposes a novel AI-driven online policy, SplitPlace, that uses Multi-Armed-Bandits to intelligently decide between layer and semantic splitting strategies based on the input task's service deadline demands. SplitPlace places such neural network split fragments on mobile edge devices using decision-aware reinforcement learning for efficient and scalable computing. Moreover, SplitPlace fine-tunes its placement engine to adapt to volatile environments. Our experiments on physical mobile-edge environments with real-world workloads show that SplitPlace can significantly improve the state-of-the-art in terms of average response time, deadline violation rate, inference accuracy, and total reward by up to 46, 69, 3 and 12 percent respectively.
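Aside (not part of the dataset record above): the Multi-Armed-Bandit choice between layer and semantic splitting that the SplitPlace abstract describes can be sketched with a simple epsilon-greedy learner. The arm names match the abstract; the reward signal, epsilon, and class shape are illustrative stand-ins for the paper's actual policy:

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy selection between neural-network splitting strategies."""

    def __init__(self, arms=("layer", "semantic"), epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)       # explore
        return max(self.arms, key=self.values.get)  # exploit best estimate

    def update(self, arm, reward):
        # Incremental mean of observed rewards (e.g. deadline slack of the task).
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In a deadline-aware setting, the reward after each task would reflect whether the chosen split met the task's service deadline, steering future choices toward the better strategy.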
2302.12562
Sungho Suh
Jwalin Bhatt, Yaroslav Zharov, Sungho Suh, Tilo Baumbach, Vincent Heuveline, Paul Lukowicz
A Knowledge Distillation framework for Multi-Organ Segmentation of Medaka Fish in Tomographic Image
Accepted at IEEE International Symposium on Biomedical Imaging 2023 (ISBI 2023)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Morphological atlases are an important tool in organismal studies, and modern high-throughput Computed Tomography (CT) facilities can produce hundreds of full-body high-resolution volumetric images of organisms. However, creating an atlas from these volumes requires accurate organ segmentation. In the last decade, machine learning approaches have achieved incredible results in image segmentation tasks, but they require large amounts of annotated data for training. In this paper, we propose a self-training framework for multi-organ segmentation in tomographic images of Medaka fish. We utilize the pseudo-labeled data from a pretrained Teacher model and adopt a Quality Classifier to refine the pseudo-labeled data. Then, we introduce a pixel-wise knowledge distillation method to prevent overfitting to the pseudo-labeled data and improve the segmentation performance. The experimental results demonstrate that our method improves mean Intersection over Union (IoU) by 5.9% on the full dataset and maintains quality while using three times less markup.
[ { "created": "Fri, 24 Feb 2023 10:31:29 GMT", "version": "v1" } ]
2023-02-27
[ [ "Bhatt", "Jwalin", "" ], [ "Zharov", "Yaroslav", "" ], [ "Suh", "Sungho", "" ], [ "Baumbach", "Tilo", "" ], [ "Heuveline", "Vincent", "" ], [ "Lukowicz", "Paul", "" ] ]
Morphological atlases are an important tool in organismal studies, and modern high-throughput Computed Tomography (CT) facilities can produce hundreds of full-body high-resolution volumetric images of organisms. However, creating an atlas from these volumes requires accurate organ segmentation. In the last decade, machine learning approaches have achieved incredible results in image segmentation tasks, but they require large amounts of annotated data for training. In this paper, we propose a self-training framework for multi-organ segmentation in tomographic images of Medaka fish. We utilize the pseudo-labeled data from a pretrained Teacher model and adopt a Quality Classifier to refine the pseudo-labeled data. Then, we introduce a pixel-wise knowledge distillation method to prevent overfitting to the pseudo-labeled data and improve the segmentation performance. The experimental results demonstrate that our method improves mean Intersection over Union (IoU) by 5.9% on the full dataset and maintains quality while using three times less markup.
1710.03701
Boris Galkin Mr
Boris Galkin, Jacek Kibi{\l}da, Luiz A. DaSilva
A Stochastic Geometry Model of Backhaul and User Coverage in Urban UAV Networks
Under submission. arXiv admin note: text overlap with arXiv:1704.06214
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless access points on unmanned aerial vehicles (UAVs) are being considered for mobile service provisioning in commercial networks. To be able to efficiently use these devices in cellular networks it is necessary to first have a qualitative and quantitative understanding of how their design parameters reflect on the service quality experienced by the end user. In this paper we model a network of UAVs operating at a certain height above ground to provide wireless service within coverage areas shaped by their directional antennas, with the UAVs using the existing terrestrial base station network for wireless backhaul. We provide an analytical expression for the coverage probability experienced by a typical user as a function of the UAV parameters. Using our derivations we demonstrate the existence of an optimum UAV height which maximises the end user coverage probability. We then explore a scenario where the UAVs adjust their individual heights to meet their backhaul requirements while at the same time attempting to maximise the coverage probability of the end user on the ground.
[ { "created": "Mon, 9 Oct 2017 09:29:28 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2017 16:26:34 GMT", "version": "v2" } ]
2017-10-13
[ [ "Galkin", "Boris", "" ], [ "Kibiłda", "Jacek", "" ], [ "DaSilva", "Luiz A.", "" ] ]
Wireless access points on unmanned aerial vehicles (UAVs) are being considered for mobile service provisioning in commercial networks. To be able to efficiently use these devices in cellular networks it is necessary to first have a qualitative and quantitative understanding of how their design parameters reflect on the service quality experienced by the end user. In this paper we model a network of UAVs operating at a certain height above ground to provide wireless service within coverage areas shaped by their directional antennas, with the UAVs using the existing terrestrial base station network for wireless backhaul. We provide an analytical expression for the coverage probability experienced by a typical user as a function of the UAV parameters. Using our derivations we demonstrate the existence of an optimum UAV height which maximises the end user coverage probability. We then explore a scenario where the UAVs adjust their individual heights to meet their backhaul requirements while at the same time attempting to maximise the coverage probability of the end user on the ground.
2310.14166
Zhen Hao Wong
Zhen Hao Wong, Ling Yue, Quanming Yao
Ensemble Learning for Graph Neural Networks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data. This paper investigates the application of ensemble learning techniques to improve the performance and robustness of Graph Neural Networks (GNNs). By training multiple GNN models with diverse initializations or architectures, we create an ensemble model named ELGNN that captures various aspects of the data and uses the Tree-Structured Parzen Estimator algorithm to determine the ensemble weights. Combining the predictions of these models enhances overall accuracy, reduces bias and variance, and mitigates the impact of noisy data. Our findings demonstrate the efficacy of ensemble learning in enhancing GNN capabilities for analyzing complex graph-structured data. The code is public at https://github.com/wongzhenhao/ELGNN.
[ { "created": "Sun, 22 Oct 2023 03:55:13 GMT", "version": "v1" } ]
2023-10-24
[ [ "Wong", "Zhen Hao", "" ], [ "Yue", "Ling", "" ], [ "Yao", "Quanming", "" ] ]
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data. This paper investigates the application of ensemble learning techniques to improve the performance and robustness of Graph Neural Networks (GNNs). By training multiple GNN models with diverse initializations or architectures, we create an ensemble model named ELGNN that captures various aspects of the data and uses the Tree-Structured Parzen Estimator algorithm to determine the ensemble weights. Combining the predictions of these models enhances overall accuracy, reduces bias and variance, and mitigates the impact of noisy data. Our findings demonstrate the efficacy of ensemble learning in enhancing GNN capabilities for analyzing complex graph-structured data. The code is public at https://github.com/wongzhenhao/ELGNN.
2104.05982
Bertrand Jouve
Mehdi Djellabi and Bertrand Jouve
Clustering of temporal nodes profiles in dynamic networks of contacts
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Stream graphs are a very useful mode of representation for temporal network data, whose richness offers a wide range of possible approaches. The various methods aimed at generalising the classical approaches applied to static networks are constantly being improved. In this paper, we describe a framework that extends to stream graphs the iterative weighted-rich-clubs characterisation for static networks proposed in [1]. The general principle is that we no longer consider the membership of a node to one of the weighted-rich-clubs for the whole time period; instead, each node is associated with a temporal profile which is the concatenation of the successive memberships of the node to the weighted-rich-clubs that appear, disappear and change all along the period. A clustering of these profiles makes it possible to establish a reduced list of typical temporal profiles and thus gain a more in-depth understanding of the temporal structure of the network. This approach is tested on real-world data produced by recording the interactions between different students within their respective schools. [1] M. Djellabi, B. Jouve, and F. Amblard. Dense and sparse vertex connectivity in networks. Journal of Complex Networks, 8(3), 2020.
[ { "created": "Tue, 13 Apr 2021 07:30:11 GMT", "version": "v1" } ]
2021-04-14
[ [ "Djellabi", "Mehdi", "" ], [ "Jouve", "Bertrand", "" ] ]
Stream graphs are a very useful mode of representation for temporal network data, whose richness offers a wide range of possible approaches. The various methods aimed at generalising the classical approaches applied to static networks are constantly being improved. In this paper, we describe a framework that extends to stream graphs the iterative weighted-rich-clubs characterisation for static networks proposed in [1]. The general principle is that we no longer consider the membership of a node to one of the weighted-rich-clubs for the whole time period; instead, each node is associated with a temporal profile which is the concatenation of the successive memberships of the node to the weighted-rich-clubs that appear, disappear and change all along the period. A clustering of these profiles makes it possible to establish a reduced list of typical temporal profiles and thus gain a more in-depth understanding of the temporal structure of the network. This approach is tested on real-world data produced by recording the interactions between different students within their respective schools. [1] M. Djellabi, B. Jouve, and F. Amblard. Dense and sparse vertex connectivity in networks. Journal of Complex Networks, 8(3), 2020.
2308.05937
Siddharth Agarwal
Siddharth Agarwal, Maria A. Rodriguez and Rajkumar Buyya
A Deep Recurrent-Reinforcement Learning Method for Intelligent AutoScaling of Serverless Functions
12 pages, 13 figures, 4 tables
null
null
null
cs.DC cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Function-as-a-Service (FaaS) introduces a lightweight, function-based cloud execution model that finds its relevance in applications like IoT-edge data processing and anomaly detection. While CSPs offer a near-infinite function elasticity, these applications often experience fluctuating workloads and stricter performance constraints. A typical CSP strategy is to empirically determine and adjust desired function instances, "autoscaling", based on monitoring-based thresholds such as CPU or memory, to cope with demand and performance. However, threshold configuration either requires expert knowledge, historical data or a complete view of the environment, making autoscaling a performance bottleneck lacking an adaptable solution. RL algorithms are proven to be beneficial in analysing complex cloud environments and result in an adaptable policy that maximizes the expected objectives. Most realistic cloud environments usually involve operational interference and have limited visibility, making them partially observable. A general solution to tackle observability in highly dynamic settings is to integrate Recurrent units with model-free RL algorithms and model a decision process as a POMDP. Therefore, in this paper, we investigate a model-free Recurrent RL agent for function autoscaling and compare it against the model-free Proximal Policy Optimisation (PPO) algorithm. We explore the integration of an LSTM network with the state-of-the-art PPO algorithm and find that, under our experimental and evaluation settings, recurrent policies were able to capture the environment parameters and show promising results for function autoscaling. We further compare a PPO-based autoscaling agent with commercially used threshold-based function autoscaling and posit that an LSTM-based autoscaling agent is able to improve throughput by 18%, function execution by 13% and account for 8.4% more function instances.
[ { "created": "Fri, 11 Aug 2023 04:41:19 GMT", "version": "v1" } ]
2023-08-14
[ [ "Agarwal", "Siddharth", "" ], [ "Rodriguez", "Maria A.", "" ], [ "Buyya", "Rajkumar", "" ] ]
Function-as-a-Service (FaaS) introduces a lightweight, function-based cloud execution model that finds its relevance in applications like IoT-edge data processing and anomaly detection. While CSPs offer a near-infinite function elasticity, these applications often experience fluctuating workloads and stricter performance constraints. A typical CSP strategy is to empirically determine and adjust desired function instances, "autoscaling", based on monitoring-based thresholds such as CPU or memory, to cope with demand and performance. However, threshold configuration either requires expert knowledge, historical data or a complete view of the environment, making autoscaling a performance bottleneck lacking an adaptable solution. RL algorithms are proven to be beneficial in analysing complex cloud environments and result in an adaptable policy that maximizes the expected objectives. Most realistic cloud environments usually involve operational interference and have limited visibility, making them partially observable. A general solution to tackle observability in highly dynamic settings is to integrate Recurrent units with model-free RL algorithms and model a decision process as a POMDP. Therefore, in this paper, we investigate a model-free Recurrent RL agent for function autoscaling and compare it against the model-free Proximal Policy Optimisation (PPO) algorithm. We explore the integration of an LSTM network with the state-of-the-art PPO algorithm and find that, under our experimental and evaluation settings, recurrent policies were able to capture the environment parameters and show promising results for function autoscaling. We further compare a PPO-based autoscaling agent with commercially used threshold-based function autoscaling and posit that an LSTM-based autoscaling agent is able to improve throughput by 18%, function execution by 13% and account for 8.4% more function instances.
2203.04566
Brijen Thananjeyan
Brijen Thananjeyan, Justin Kerr, Huang Huang, Joseph E. Gonzalez, Ken Goldberg
All You Need is LUV: Unsupervised Collection of Labeled Images using Invisible UV Fluorescent Indicators
null
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale semantic image annotation is a significant challenge for learning-based perception systems in robotics. Current approaches often rely on human labelers, which can be expensive, or simulation data, which can visually or physically differ from real data. This paper proposes Labels from UltraViolet (LUV), a novel framework that enables rapid, labeled data collection in real manipulation environments without human labeling. LUV uses transparent, ultraviolet-fluorescent paint with programmable ultraviolet LEDs to collect paired images of a scene in standard lighting and UV lighting to autonomously extract segmentation masks and keypoints via color segmentation. We apply LUV to a suite of diverse robot perception tasks to evaluate its labeling quality, flexibility, and data collection rate. Results suggest that LUV is 180-2500 times faster than a human labeler across the tasks. We show that LUV provides labels consistent with human annotations on unpainted test images. The networks trained on these labels are used to smooth and fold crumpled towels with 83% success rate and achieve 1.7mm position error with respect to human labels on a surgical needle pose estimation task. The low cost of LUV makes it ideal as a lightweight replacement for human labeling systems, with one-time setup costs of $300, equivalent to the cost of collecting around 200 semantic segmentation labels on Amazon Mechanical Turk. Code, datasets, visualizations, and supplementary material can be found at https://sites.google.com/berkeley.edu/luv
[ { "created": "Wed, 9 Mar 2022 08:03:07 GMT", "version": "v1" }, { "created": "Sun, 13 Mar 2022 07:51:46 GMT", "version": "v2" } ]
2022-03-15
[ [ "Thananjeyan", "Brijen", "" ], [ "Kerr", "Justin", "" ], [ "Huang", "Huang", "" ], [ "Gonzalez", "Joseph E.", "" ], [ "Goldberg", "Ken", "" ] ]
Large-scale semantic image annotation is a significant challenge for learning-based perception systems in robotics. Current approaches often rely on human labelers, which can be expensive, or simulation data, which can visually or physically differ from real data. This paper proposes Labels from UltraViolet (LUV), a novel framework that enables rapid, labeled data collection in real manipulation environments without human labeling. LUV uses transparent, ultraviolet-fluorescent paint with programmable ultraviolet LEDs to collect paired images of a scene in standard lighting and UV lighting to autonomously extract segmentation masks and keypoints via color segmentation. We apply LUV to a suite of diverse robot perception tasks to evaluate its labeling quality, flexibility, and data collection rate. Results suggest that LUV is 180-2500 times faster than a human labeler across the tasks. We show that LUV provides labels consistent with human annotations on unpainted test images. The networks trained on these labels are used to smooth and fold crumpled towels with 83% success rate and achieve 1.7mm position error with respect to human labels on a surgical needle pose estimation task. The low cost of LUV makes it ideal as a lightweight replacement for human labeling systems, with one-time setup costs of $300, equivalent to the cost of collecting around 200 semantic segmentation labels on Amazon Mechanical Turk. Code, datasets, visualizations, and supplementary material can be found at https://sites.google.com/berkeley.edu/luv
2106.09575
C. Lawrence Zitnick
Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, C. Lawrence Zitnick
Rotation Invariant Graph Neural Networks using Spin Convolutions
13 pages
null
null
null
cs.LG cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Progress towards the energy breakthroughs needed to combat climate change can be significantly accelerated through the efficient simulation of atomic systems. Simulation techniques based on first principles, such as Density Functional Theory (DFT), are limited in their practical use due to their high computational expense. Machine learning approaches have the potential to approximate DFT in a computationally efficient manner, which could dramatically increase the impact of computational simulations on real-world problems. Approximating DFT poses several challenges. These include accurately modeling the subtle changes in the relative positions and angles between atoms, and enforcing constraints such as rotation invariance or energy conservation. We introduce a novel approach to modeling angular information between sets of neighboring atoms in a graph neural network. Rotation invariance is achieved for the network's edge messages through the use of a per-edge local coordinate frame and a novel spin convolution over the remaining degree of freedom. Two model variants are proposed for the applications of structure relaxation and molecular dynamics. State-of-the-art results are demonstrated on the large-scale Open Catalyst 2020 dataset. Comparisons are also performed on the MD17 and QM9 datasets.
[ { "created": "Thu, 17 Jun 2021 14:59:34 GMT", "version": "v1" } ]
2021-06-18
[ [ "Shuaibi", "Muhammed", "" ], [ "Kolluru", "Adeesh", "" ], [ "Das", "Abhishek", "" ], [ "Grover", "Aditya", "" ], [ "Sriram", "Anuroop", "" ], [ "Ulissi", "Zachary", "" ], [ "Zitnick", "C. Lawrence", "" ] ]
Progress towards the energy breakthroughs needed to combat climate change can be significantly accelerated through the efficient simulation of atomic systems. Simulation techniques based on first principles, such as Density Functional Theory (DFT), are limited in their practical use due to their high computational expense. Machine learning approaches have the potential to approximate DFT in a computationally efficient manner, which could dramatically increase the impact of computational simulations on real-world problems. Approximating DFT poses several challenges. These include accurately modeling the subtle changes in the relative positions and angles between atoms, and enforcing constraints such as rotation invariance or energy conservation. We introduce a novel approach to modeling angular information between sets of neighboring atoms in a graph neural network. Rotation invariance is achieved for the network's edge messages through the use of a per-edge local coordinate frame and a novel spin convolution over the remaining degree of freedom. Two model variants are proposed for the applications of structure relaxation and molecular dynamics. State-of-the-art results are demonstrated on the large-scale Open Catalyst 2020 dataset. Comparisons are also performed on the MD17 and QM9 datasets.
2302.12695
Charlotte Pouw
Charlotte Pouw, Nora Hollenstein, Lisa Beinborn
Cross-Lingual Transfer of Cognitive Processing Complexity
Accepted at Findings of EACL 2023
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
When humans read a text, their eye movements are influenced by the structural complexity of the input sentences. This cognitive phenomenon holds across languages and recent studies indicate that multilingual language models utilize structural similarities between languages to facilitate cross-lingual transfer. We use sentence-level eye-tracking patterns as a cognitive indicator for structural complexity and show that the multilingual model XLM-RoBERTa can successfully predict varied patterns for 13 typologically diverse languages, despite being fine-tuned only on English data. We quantify the sensitivity of the model to structural complexity and distinguish a range of complexity characteristics. Our results indicate that the model develops a meaningful bias towards sentence length but also integrates cross-lingual differences. We conduct a control experiment with randomized word order and find that the model seems to additionally capture more complex structural information.
[ { "created": "Fri, 24 Feb 2023 15:48:23 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2023 10:58:12 GMT", "version": "v2" } ]
2023-02-28
[ [ "Pouw", "Charlotte", "" ], [ "Hollenstein", "Nora", "" ], [ "Beinborn", "Lisa", "" ] ]
When humans read a text, their eye movements are influenced by the structural complexity of the input sentences. This cognitive phenomenon holds across languages and recent studies indicate that multilingual language models utilize structural similarities between languages to facilitate cross-lingual transfer. We use sentence-level eye-tracking patterns as a cognitive indicator for structural complexity and show that the multilingual model XLM-RoBERTa can successfully predict varied patterns for 13 typologically diverse languages, despite being fine-tuned only on English data. We quantify the sensitivity of the model to structural complexity and distinguish a range of complexity characteristics. Our results indicate that the model develops a meaningful bias towards sentence length but also integrates cross-lingual differences. We conduct a control experiment with randomized word order and find that the model seems to additionally capture more complex structural information.
2302.05575
Benjamin Merlin Bumpus
Ernst Althaus, Benjamin Merlin Bumpus, James Fairbanks, Daniel Rosiak
Compositional Algorithms on Compositional Data: Deciding Sheaves on Presheaves
Revised and simplified notation and improved exposition. The companion code can be found here: https://github.com/AlgebraicJulia/StructuredDecompositions.jl
null
null
null
cs.CC math.CO math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmicists are well-aware that fast dynamic programming algorithms are very often the correct choice when computing on compositional (or even recursive) graphs. Here we initiate the study of how to generalize this folklore intuition to mathematical structures writ large. We achieve this horizontal generality by adopting a categorial perspective which allows us to show that: (1) structured decompositions (a recent, abstract generalization of many graph decompositions) define Grothendieck topologies on categories of data (adhesive categories) and that (2) any computational problem which can be represented as a sheaf with respect to these topologies can be decided in linear time on classes of inputs which admit decompositions of bounded width and whose decomposition shapes have bounded feedback vertex number. This immediately leads to algorithms on objects of any C-set category; these include -- to name but a few examples -- structures such as: symmetric graphs, directed graphs, directed multigraphs, hypergraphs, directed hypergraphs, databases, simplicial complexes, circular port graphs and half-edge graphs. Thus we initiate the bridging of tools from sheaf theory, structural graph theory and parameterized complexity theory; we believe this to be a very fruitful approach for a general, algebraic theory of dynamic programming algorithms. Finally we pair our theoretical results with concrete implementations of our main algorithmic contribution in the AlgebraicJulia ecosystem.
[ { "created": "Sat, 11 Feb 2023 02:28:20 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 17:21:30 GMT", "version": "v2" }, { "created": "Tue, 3 Oct 2023 19:28:07 GMT", "version": "v3" } ]
2023-10-05
[ [ "Althaus", "Ernst", "" ], [ "Bumpus", "Benjamin Merlin", "" ], [ "Fairbanks", "James", "" ], [ "Rosiak", "Daniel", "" ] ]
Algorithmicists are well-aware that fast dynamic programming algorithms are very often the correct choice when computing on compositional (or even recursive) graphs. Here we initiate the study of how to generalize this folklore intuition to mathematical structures writ large. We achieve this horizontal generality by adopting a categorial perspective which allows us to show that: (1) structured decompositions (a recent, abstract generalization of many graph decompositions) define Grothendieck topologies on categories of data (adhesive categories) and that (2) any computational problem which can be represented as a sheaf with respect to these topologies can be decided in linear time on classes of inputs which admit decompositions of bounded width and whose decomposition shapes have bounded feedback vertex number. This immediately leads to algorithms on objects of any C-set category; these include -- to name but a few examples -- structures such as: symmetric graphs, directed graphs, directed multigraphs, hypergraphs, directed hypergraphs, databases, simplicial complexes, circular port graphs and half-edge graphs. Thus we initiate the bridging of tools from sheaf theory, structural graph theory and parameterized complexity theory; we believe this to be a very fruitful approach for a general, algebraic theory of dynamic programming algorithms. Finally we pair our theoretical results with concrete implementations of our main algorithmic contribution in the AlgebraicJulia ecosystem.
2208.01695
Kartik Lakhotia
Kartik Lakhotia, Maciej Besta, Laura Monroe, Kelly Isham, Patrick Iff, Torsten Hoefler, Fabrizio Petrini
PolarFly: A Cost-Effective and Flexible Low-Diameter Topology
In Proceedings of International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) 2022
null
10.1109/SC41404.2022.00017
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present PolarFly, a diameter-2 network topology based on the Erdos-Renyi family of polarity graphs from finite geometry. This is a highly scalable low-diameter topology that asymptotically reaches the Moore bound on the number of nodes for a given network degree and diameter. PolarFly achieves high Moore bound efficiency even for the moderate radixes commonly seen in current and near-future routers, reaching more than 96% of the theoretical peak. It also offers more feasible router degrees than the state-of-the-art solutions, greatly adding to the selection of scalable diameter-2 networks. PolarFly enjoys many other topological properties highly relevant in practice, such as a modular design and expandability that allow incremental growth in network size without rewiring the whole network. Our evaluation shows that PolarFly outperforms competitive networks in terms of scalability, cost and performance for various traffic patterns.
[ { "created": "Tue, 2 Aug 2022 18:55:37 GMT", "version": "v1" }, { "created": "Thu, 29 Sep 2022 05:49:34 GMT", "version": "v2" }, { "created": "Fri, 14 Oct 2022 03:34:42 GMT", "version": "v3" }, { "created": "Tue, 2 May 2023 19:48:14 GMT", "version": "v4" } ]
2023-05-04
[ [ "Lakhotia", "Kartik", "" ], [ "Besta", "Maciej", "" ], [ "Monroe", "Laura", "" ], [ "Isham", "Kelly", "" ], [ "Iff", "Patrick", "" ], [ "Hoefler", "Torsten", "" ], [ "Petrini", "Fabrizio", "" ] ]
In this paper we present PolarFly, a diameter-2 network topology based on the Erdos-Renyi family of polarity graphs from finite geometry. This is a highly scalable low-diameter topology that asymptotically reaches the Moore bound on the number of nodes for a given network degree and diameter. PolarFly achieves high Moore bound efficiency even for the moderate radixes commonly seen in current and near-future routers, reaching more than 96% of the theoretical peak. It also offers more feasible router degrees than the state-of-the-art solutions, greatly adding to the selection of scalable diameter-2 networks. PolarFly enjoys many other topological properties highly relevant in practice, such as a modular design and expandability that allow incremental growth in network size without rewiring the whole network. Our evaluation shows that PolarFly outperforms competitive networks in terms of scalability, cost and performance for various traffic patterns.
2403.13808
Fabio Pizzati
Hasan Abed Al Kader Hammoud, Tuhin Das, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
On Pretraining Data Diversity for Self-Supervised Learning
ECCV 2024
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with an exceptionally large pretraining data diversity achieved through methods like web crawling or diffusion-generated data, among other ways, the distribution shift remains a challenge. Our experiments are comprehensive with seven SSL methods using large-scale datasets such as ImageNet and YFCC100M amounting to over 200 GPU days. Code and trained models are available at https://github.com/hammoudhasan/DiversitySSL
[ { "created": "Wed, 20 Mar 2024 17:59:58 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2024 18:22:02 GMT", "version": "v2" }, { "created": "Thu, 18 Jul 2024 09:15:00 GMT", "version": "v3" } ]
2024-07-19
[ [ "Hammoud", "Hasan Abed Al Kader", "" ], [ "Das", "Tuhin", "" ], [ "Pizzati", "Fabio", "" ], [ "Torr", "Philip", "" ], [ "Bibi", "Adel", "" ], [ "Ghanem", "Bernard", "" ] ]
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with an exceptionally large pretraining data diversity achieved through methods like web crawling or diffusion-generated data, among other ways, the distribution shift remains a challenge. Our experiments are comprehensive with seven SSL methods using large-scale datasets such as ImageNet and YFCC100M amounting to over 200 GPU days. Code and trained models are available at https://github.com/hammoudhasan/DiversitySSL
1606.03784
Guido Zarrella
Guido Zarrella and Amy Marsh
MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection
International Workshop on Semantic Evaluation 2016
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data.
[ { "created": "Mon, 13 Jun 2016 00:12:49 GMT", "version": "v1" } ]
2016-06-14
[ [ "Zarrella", "Guido", "" ], [ "Marsh", "Amy", "" ] ]
We describe MITRE's submission to the SemEval-2016 Task 6, Detecting Stance in Tweets. This effort achieved the top score in Task A on supervised stance detection, producing an average F1 score of 67.8 when assessing whether a tweet author was in favor or against a topic. We employed a recurrent neural network initialized with features learned via distant supervision on two large unlabeled datasets. We trained embeddings of words and phrases with the word2vec skip-gram method, then used those features to learn sentence representations via a hashtag prediction auxiliary task. These sentence vectors were then fine-tuned for stance detection on several hundred labeled examples. The result was a high performing system that used transfer learning to maximize the value of the available training data.
2101.03696
Somaiyeh MahmoudZadeh
Amin Abbasi, Somaiyeh MahmoudZadeh, Amirmehdi Yazdani
A Cooperative Dynamic Task Assignment Framework for COTSBot AUVs
null
IEEE Transactions on Automation Science and Engineering 2020
10.1109/TASE.2020.3044155
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a cooperative dynamic task assignment framework for a certain class of Autonomous Underwater Vehicles (AUVs) employed to control the outbreak of Crown-Of-Thorns Starfish (COTS) in Australia's Great Barrier Reef. The problem of monitoring and controlling the COTS is transcribed into a constrained task assignment problem in which eradicating clusters of COTS, by the injection system of COTSbot AUVs, is considered as a task. A probabilistic map of the operating environment including seabed terrain, clusters of COTS, and coastlines is constructed. Then, a novel heuristic algorithm called Heuristic Fleet Cooperation (HFC) is developed to provide cooperative injection by the COTSbot AUVs of the maximum possible number of COTS within an assigned mission time. Extensive simulation studies together with quantitative performance analysis are conducted to demonstrate the effectiveness and robustness of the proposed cooperative task assignment algorithm in eradicating the COTS in the Great Barrier Reef.
[ { "created": "Mon, 11 Jan 2021 04:28:49 GMT", "version": "v1" } ]
2021-01-12
[ [ "Abbasi", "Amin", "" ], [ "MahmoudZadeh", "Somaiyeh", "" ], [ "Yazdani", "Amirmehdi", "" ] ]
This paper presents a cooperative dynamic task assignment framework for a certain class of Autonomous Underwater Vehicles (AUVs) employed to control the outbreak of Crown-Of-Thorns Starfish (COTS) in Australia's Great Barrier Reef. The problem of monitoring and controlling the COTS is transcribed into a constrained task assignment problem in which eradicating clusters of COTS, by the injection system of COTSbot AUVs, is considered as a task. A probabilistic map of the operating environment including seabed terrain, clusters of COTS, and coastlines is constructed. Then, a novel heuristic algorithm called Heuristic Fleet Cooperation (HFC) is developed to provide cooperative injection by the COTSbot AUVs of the maximum possible number of COTS within an assigned mission time. Extensive simulation studies together with quantitative performance analysis are conducted to demonstrate the effectiveness and robustness of the proposed cooperative task assignment algorithm in eradicating the COTS in the Great Barrier Reef.
1903.03831
Ioanna Mitsioni
Ioanna Mitsioni, Yiannis Karayiannidis, Johannes A. Stork and Danica Kragic
Data-Driven Model Predictive Control for Food-Cutting
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food-cutting, purely learning-based methods such as Reinforcement Learning require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity-controlled ones. We incorporate force/torque sensor measurements, reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.
[ { "created": "Sat, 9 Mar 2019 17:34:16 GMT", "version": "v1" }, { "created": "Thu, 26 Sep 2019 12:55:35 GMT", "version": "v2" } ]
2019-09-27
[ [ "Mitsioni", "Ioanna", "" ], [ "Karayiannidis", "Yiannis", "" ], [ "Stork", "Johannes A.", "" ], [ "Kragic", "Danica", "" ] ]
Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food-cutting, purely learning-based methods such as Reinforcement Learning require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity-controlled ones. We incorporate force/torque sensor measurements, reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.
2108.06742
Archana Patel
Archana Patel and Narayan C Debnath
Development of the InBan_CIDO Ontology by Reusing the Concepts along with Detecting Overlapping Information
3rd International Conference on Inventive Computation and Information Technologies (ICICIT 2021)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The covid19 pandemic is a global emergency that badly impacted the economies of various countries. Covid19 hit India when the growth rate of the country was at the lowest in the last 10 years. To semantically analyze the impact of this pandemic on the economy, it is crucial to have an ontology. CIDO ontology is a well-standardized ontology that is specially designed to assess the impact of coronavirus disease and utilize its results for future decision forecasting for the government, industry experts, and professionals in the field of various domains like research, medical advancement, technical innovative adoptions, and so on. However, this ontology does not analyze the impact of the Covid19 pandemic on the Indian banking sector. On the other hand, Covid19IBO ontology has been developed to analyze the impact of the Covid19 pandemic on the Indian banking sector but this ontology does not reflect complete information of Covid19 data. Consequently, users cannot get all the relevant information about Covid19 and its impact on the Indian economy. This article aims to extend the CIDO ontology to show the impact of Covid19 on the Indian economy sector by reusing the concepts from other data sources. We also provide a simplified schema matching approach that detects the overlapping information among the ontologies. The experimental analysis shows that the proposed approach has reasonable results.
[ { "created": "Sun, 15 Aug 2021 13:37:29 GMT", "version": "v1" } ]
2021-08-17
[ [ "Patel", "Archana", "" ], [ "Debnath", "Narayan C", "" ] ]
The covid19 pandemic is a global emergency that badly impacted the economies of various countries. Covid19 hit India when the growth rate of the country was at the lowest in the last 10 years. To semantically analyze the impact of this pandemic on the economy, it is crucial to have an ontology. CIDO ontology is a well-standardized ontology that is specially designed to assess the impact of coronavirus disease and utilize its results for future decision forecasting for the government, industry experts, and professionals in the field of various domains like research, medical advancement, technical innovative adoptions, and so on. However, this ontology does not analyze the impact of the Covid19 pandemic on the Indian banking sector. On the other hand, Covid19IBO ontology has been developed to analyze the impact of the Covid19 pandemic on the Indian banking sector but this ontology does not reflect complete information of Covid19 data. Consequently, users cannot get all the relevant information about Covid19 and its impact on the Indian economy. This article aims to extend the CIDO ontology to show the impact of Covid19 on the Indian economy sector by reusing the concepts from other data sources. We also provide a simplified schema matching approach that detects the overlapping information among the ontologies. The experimental analysis shows that the proposed approach has reasonable results.
2301.10616
Novanto Yudistira
Akhmad Dimitri Baihaqi, Novanto Yudistira, Edy Santoso
Prediction of COVID-19 by Its Variants using Multivariate Data-driven Deep Learning Models
null
null
null
null
cs.CE cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The Coronavirus Disease 2019 or the COVID-19 pandemic has swept almost all parts of the world since the first case was found in Wuhan, China, in December 2019. With the increasing number of COVID-19 cases in the world, SARS-CoV-2 has mutated into various variants. Given the increasingly dangerous conditions of the pandemic, it is crucial to know when the pandemic will stop by predicting confirmed cases of COVID-19. Therefore, many studies have raised COVID-19 as a case study to overcome the ongoing pandemic using the Deep Learning method, namely LSTM, with reasonably accurate results and small error values. LSTM training is used to predict confirmed cases of COVID-19 based on variants that have been identified using ECDC's COVID-19 dataset containing confirmed cases of COVID-19 that have been identified from 30 countries in Europe. Tests were conducted using the LSTM and BiLSTM models with the addition of RNN as comparisons on hidden size and layer size. The obtained results showed that in testing hidden sizes 25, 50, 75 to 100, the RNN model provided better results, with a minimum MSE value of 0.01 and an RMSE value of 0.012 for the B.1.427/B.1.429 variant with hidden size 100. In further testing of layer sizes 2, 3, 4, and 5, the results show that the BiLSTM model provided better results, with a minimum MSE value of 0.01 and an RMSE of 0.01 for the B.1.427/B.1.429 variant with hidden size 100 and layer size 2.
[ { "created": "Wed, 25 Jan 2023 14:52:34 GMT", "version": "v1" }, { "created": "Sat, 28 Jan 2023 02:06:21 GMT", "version": "v2" } ]
2023-01-31
[ [ "Baihaqi", "Akhmad Dimitri", "" ], [ "Yudistira", "Novanto", "" ], [ "Santoso", "Edy", "" ] ]
The Coronavirus Disease 2019 or the COVID-19 pandemic has swept almost all parts of the world since the first case was found in Wuhan, China, in December 2019. With the increasing number of COVID-19 cases in the world, SARS-CoV-2 has mutated into various variants. Given the increasingly dangerous conditions of the pandemic, it is crucial to know when the pandemic will stop by predicting confirmed cases of COVID-19. Therefore, many studies have raised COVID-19 as a case study to overcome the ongoing pandemic using the Deep Learning method, namely LSTM, with reasonably accurate results and small error values. LSTM training is used to predict confirmed cases of COVID-19 based on variants that have been identified using ECDC's COVID-19 dataset containing confirmed cases of COVID-19 that have been identified from 30 countries in Europe. Tests were conducted using the LSTM and BiLSTM models with the addition of RNN as comparisons on hidden size and layer size. The obtained results showed that in testing hidden sizes 25, 50, 75 to 100, the RNN model provided better results, with a minimum MSE value of 0.01 and an RMSE value of 0.012 for the B.1.427/B.1.429 variant with hidden size 100. In further testing of layer sizes 2, 3, 4, and 5, the results show that the BiLSTM model provided better results, with a minimum MSE value of 0.01 and an RMSE of 0.01 for the B.1.427/B.1.429 variant with hidden size 100 and layer size 2.
1806.10293
Alexander Irpan
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine
QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation
CoRL 2018 camera ready. 23 pages, 14 figures
null
null
null
cs.LG cs.AI cs.CV cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.
[ { "created": "Wed, 27 Jun 2018 04:34:30 GMT", "version": "v1" }, { "created": "Mon, 2 Jul 2018 19:08:00 GMT", "version": "v2" }, { "created": "Wed, 28 Nov 2018 02:40:54 GMT", "version": "v3" } ]
2018-11-29
[ [ "Kalashnikov", "Dmitry", "" ], [ "Irpan", "Alex", "" ], [ "Pastor", "Peter", "" ], [ "Ibarz", "Julian", "" ], [ "Herzog", "Alexander", "" ], [ "Jang", "Eric", "" ], [ "Quillen", "Deirdre", "" ], [ "Holly", "Ethan", "" ], [ "Kalakrishnan", "Mrinal", "" ], [ "Vanhoucke", "Vincent", "" ], [ "Levine", "Sergey", "" ] ]
In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96% grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.
1505.02740
Rasmus Dalgas Kongskov
Rasmus Dalgas Kongskov, Jakob Sauer J{\o}rgensen, Henning Friis Poulsen, Per Christian Hansen
Noise Robustness of a Combined Phase Retrieval and Reconstruction Method for Phase-Contrast Tomography
null
null
null
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. (Opt. Express, 21, 12185, 2013) and preliminary results demonstrating improved reconstruction compared to a two-stage method were given. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust towards noise; our simulations point to a possible reduction in counting times by an order of magnitude.
[ { "created": "Mon, 11 May 2015 19:17:33 GMT", "version": "v1" }, { "created": "Fri, 4 Sep 2015 13:30:49 GMT", "version": "v2" }, { "created": "Mon, 7 Sep 2015 14:42:20 GMT", "version": "v3" } ]
2015-09-08
[ [ "Kongskov", "Rasmus Dalgas", "" ], [ "Jørgensen", "Jakob Sauer", "" ], [ "Poulsen", "Henning Friis", "" ], [ "Hansen", "Per Christian", "" ] ]
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. (Opt. Express, 21, 12185, 2013) and preliminary results demonstrating improved reconstruction compared to a two-stage method were given. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust towards noise; our simulations point to a possible reduction in counting times by an order of magnitude.
2211.10000
Onuralp Soylemez
Onuralp Soylemez and Pablo Cordero
Protein language model rescue mutations highlight variant effects and structure in clinically relevant genes
NeurIPS 2022, Workshop on Learning Meaningful Representations of Life
null
null
null
cs.LG q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Despite being self-supervised, protein language models have shown remarkable performance in fundamental biological tasks such as predicting the impact of genetic variation on protein structure and function. The effectiveness of these models on a diverse set of tasks suggests that they learn meaningful representations of the fitness landscape that can be useful for downstream clinical applications. Here, we interrogate the use of these language models in characterizing known pathogenic mutations in curated, medically actionable genes through an exhaustive search of putative compensatory mutations on each variant's genetic background. Systematic analysis of the predicted effects of these compensatory mutations reveals unappreciated structural features of proteins that are missed by other structure predictors like AlphaFold. While deep mutational scan experiments provide an unbiased estimate of the mutational landscape, we encourage the community to generate and curate rescue mutation experiments to inform the design of more sophisticated co-masking strategies and leverage large language models more effectively for downstream clinical prediction tasks.
[ { "created": "Fri, 18 Nov 2022 03:00:52 GMT", "version": "v1" } ]
2022-11-21
[ [ "Soylemez", "Onuralp", "" ], [ "Cordero", "Pablo", "" ] ]
Despite being self-supervised, protein language models have shown remarkable performance in fundamental biological tasks such as predicting the impact of genetic variation on protein structure and function. The effectiveness of these models on a diverse set of tasks suggests that they learn meaningful representations of the fitness landscape that can be useful for downstream clinical applications. Here, we interrogate the use of these language models in characterizing known pathogenic mutations in curated, medically actionable genes through an exhaustive search of putative compensatory mutations on each variant's genetic background. Systematic analysis of the predicted effects of these compensatory mutations reveals unappreciated structural features of proteins that are missed by other structure predictors like AlphaFold. While deep mutational scan experiments provide an unbiased estimate of the mutational landscape, we encourage the community to generate and curate rescue mutation experiments to inform the design of more sophisticated co-masking strategies and leverage large language models more effectively for downstream clinical prediction tasks.