Dataset schema (column, dtype, observed min-max):

  id              string   length 9 to 10
  submitter       string   length 1 to 64
  authors         string   length 4 to 20.7k
  title           string   length 4 to 246
  comments        string   length 1 to 523
  journal-ref     string   length 4 to 404
  doi             string   length 11 to 153
  report-no       string   length 2 to 254
  categories      string   length 5 to 98
  license         string   9 classes
  orig_abstract   string   length 14 to 3.35k
  versions        list     length 1 to 60
  update_date     string   length 10 to 10
  authors_parsed  list     length 1 to 1.35k
  abstract        string   length 11 to 3.34k
1405.3137
Jean-Marc Kelif
Jean-Marc Kelif and Olivier Simon
Impact of Directional Receiving Antennas on Wireless Networks
6 pages, 7 figures, submitted to VTC 2014
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are interested in high-data-rate Internet access by means of LTE-based wireless networks. To improve the performance of wireless networks, we propose an approach based on user equipment (UE) fitted with directional receiving antennas. These antennas mitigate interference and improve the link budget, which improves the Signal to Interference plus Noise Ratio (SINR) and, consequently, performance and quality of service (QoS). We establish an analytical expression for the SINR reached by a user with a directional antenna, whatever the user's location. This expression shows that directional antennas improve the SINR, and it allows the improvement to be quantified. We develop different scenarios comparing directional antennas with omnidirectional ones, which quantify the impact of directional antennas in terms of performance and QoS.
[ { "created": "Tue, 13 May 2014 13:00:44 GMT", "version": "v1" } ]
2014-05-14
[ [ "Kelif", "Jean-Marc", "" ], [ "Simon", "Olivier", "" ] ]
We are interested in high-data-rate Internet access by means of LTE-based wireless networks. To improve the performance of wireless networks, we propose an approach based on user equipment (UE) fitted with directional receiving antennas. These antennas mitigate interference and improve the link budget, which improves the Signal to Interference plus Noise Ratio (SINR) and, consequently, performance and quality of service (QoS). We establish an analytical expression for the SINR reached by a user with a directional antenna, whatever the user's location. This expression shows that directional antennas improve the SINR, and it allows the improvement to be quantified. We develop different scenarios comparing directional antennas with omnidirectional ones, which quantify the impact of directional antennas in terms of performance and QoS.
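The SINR gain claimed in the abstract above can be illustrated with a toy calculation. This is a minimal sketch, not from the paper: the two-level gain pattern, the base-station geometry, the path-loss exponent, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sinr(rx_gains, dists, serving=0, tx_power=1.0, alpha=3.5, noise=1e-9):
    """Downlink SINR when the receive-antenna gain toward base station i
    is rx_gains[i]; simple power-law path loss with exponent alpha."""
    rx = tx_power * rx_gains * dists ** (-alpha)
    signal = rx[serving]
    interference = rx.sum() - signal
    return signal / (interference + noise)

angles = np.deg2rad([0, 60, 150, 220, 300])   # directions of 5 base stations
dists  = np.array([0.3, 0.9, 1.0, 1.1, 0.8])  # km; the serving BS is closest

omni = np.ones(5)                              # equal gain in every direction
# Crude directional pattern: high gain in a 60-degree main lobe aimed at the
# serving BS, strong attenuation everywhere else.
directional = np.where(np.abs(angles - angles[0]) < np.deg2rad(30), 10.0, 0.1)

print("omni SINR (dB):        %.1f" % (10 * np.log10(sinr(omni, dists))))
print("directional SINR (dB): %.1f" % (10 * np.log10(sinr(directional, dists))))
```

Under these assumptions the directional pattern both boosts the useful signal and suppresses most interferers, which is exactly the mechanism the abstract describes.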
1006.5381
Catalin Anghel Mr
Catalin Anghel
Increasing the security of information and communication systems through quantum cryptography
24 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The catch-22 of cryptography: "Before two parties can communicate in secret, they must first communicate in secret." The weakness of classical cryptographic communication systems is that secret communication can take place only after a key has been communicated in secret over a totally secure channel. Quantum key distribution addresses this by exploiting phenomena that occur at the subatomic level, so that any attempt by an adversary to obtain the bits of a key not only fails but is also detected.
[ { "created": "Mon, 28 Jun 2010 15:45:25 GMT", "version": "v1" } ]
2010-06-29
[ [ "Anghel", "Catalin", "" ] ]
The catch-22 of cryptography: "Before two parties can communicate in secret, they must first communicate in secret." The weakness of classical cryptographic communication systems is that secret communication can take place only after a key has been communicated in secret over a totally secure channel. Quantum key distribution addresses this by exploiting phenomena that occur at the subatomic level, so that any attempt by an adversary to obtain the bits of a key not only fails but is also detected.
2403.08368
Lorenzo Papa
L. Papa, P. Russo, and I. Amerini
METER: a mobile vision transformer architecture for monocular depth estimation
null
IEEE Transactions on Circuits and Systems for Video Technology, 2023
10.1109/TCSVT.2023.3260310
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Depth estimation is a fundamental capability for autonomous systems that need to assess their own state and perceive the surrounding environment. Deep learning algorithms for depth estimation have gained significant interest in recent years, owing to the potential of this methodology to overcome the limitations of active depth sensing systems. Moreover, due to the low cost and small size of monocular cameras, researchers have focused their attention on monocular depth estimation (MDE), which consists of estimating a dense depth map from a single RGB video frame. State-of-the-art MDE models typically rely on vision transformer (ViT) architectures that are highly deep and complex, making them unsuitable for fast inference on devices with hardware constraints. In this paper, we therefore address the problem of exploiting ViTs for MDE on embedded devices, which are usually characterized by limited memory and low-power CPUs/GPUs. We propose METER, a novel lightweight vision transformer architecture that achieves state-of-the-art estimation accuracy and low-latency inference on the considered embedded hardware: the NVIDIA Jetson TX1 and NVIDIA Jetson Nano. We provide a solution consisting of three alternative configurations of METER, a novel loss function to balance pixel estimation and reconstruction of image details, and a new data augmentation strategy to improve the overall final predictions. The proposed method outperforms previous lightweight works on the two benchmark datasets: the indoor NYU Depth v2 and the outdoor KITTI.
[ { "created": "Wed, 13 Mar 2024 09:30:08 GMT", "version": "v1" } ]
2024-03-14
[ [ "Papa", "L.", "" ], [ "Russo", "P.", "" ], [ "Amerini", "I.", "" ] ]
Depth estimation is a fundamental capability for autonomous systems that need to assess their own state and perceive the surrounding environment. Deep learning algorithms for depth estimation have gained significant interest in recent years, owing to the potential of this methodology to overcome the limitations of active depth sensing systems. Moreover, due to the low cost and small size of monocular cameras, researchers have focused their attention on monocular depth estimation (MDE), which consists of estimating a dense depth map from a single RGB video frame. State-of-the-art MDE models typically rely on vision transformer (ViT) architectures that are highly deep and complex, making them unsuitable for fast inference on devices with hardware constraints. In this paper, we therefore address the problem of exploiting ViTs for MDE on embedded devices, which are usually characterized by limited memory and low-power CPUs/GPUs. We propose METER, a novel lightweight vision transformer architecture that achieves state-of-the-art estimation accuracy and low-latency inference on the considered embedded hardware: the NVIDIA Jetson TX1 and NVIDIA Jetson Nano. We provide a solution consisting of three alternative configurations of METER, a novel loss function to balance pixel estimation and reconstruction of image details, and a new data augmentation strategy to improve the overall final predictions. The proposed method outperforms previous lightweight works on the two benchmark datasets: the indoor NYU Depth v2 and the outdoor KITTI.
1204.6453
Oktay Arslan Oktay Arslan
Oktay Arslan and Panagiotis Tsiotras
The Role of Vertex Consistency in Sampling-based Algorithms for Optimal Motion Planning
26 pages
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion planning problems have been studied by both the robotics and the controls research communities for a long time, and many algorithms have been developed for their solution. Among them, incremental sampling-based motion planning algorithms, such as Rapidly-exploring Random Trees (RRTs) and Probabilistic Road Maps (PRMs), have recently become very popular, owing to their implementation simplicity and their advantages in handling high-dimensional problems. Although these algorithms work very well in practice, the quality of the computed solution is often poor, i.e., the solution can be far from the optimal one. A recent variation of RRT, namely the RRT* algorithm, bypasses this drawback of the traditional RRT algorithm by ensuring asymptotic optimality as the number of samples tends to infinity. Nonetheless, the convergence rate to the optimal solution may still be slow. This paper presents a new incremental sampling-based motion planning algorithm based on Rapidly-exploring Random Graphs (RRG), denoted RRT# (RRT "sharp"), which also guarantees asymptotic optimality and, in addition, ensures that the constructed spanning tree of the geometric graph is consistent after each iteration. In consistent trees, the vertices that have the potential to be part of the optimal solution have the minimum cost-to-come value. This implies that the best possible solution is readily computed if there are some vertices in the current graph that are already in the goal region. Numerical results comparing the proposed algorithm with RRT* are presented.
[ { "created": "Sun, 29 Apr 2012 04:24:44 GMT", "version": "v1" } ]
2012-05-01
[ [ "Arslan", "Oktay", "" ], [ "Tsiotras", "Panagiotis", "" ] ]
Motion planning problems have been studied by both the robotics and the controls research communities for a long time, and many algorithms have been developed for their solution. Among them, incremental sampling-based motion planning algorithms, such as Rapidly-exploring Random Trees (RRTs) and Probabilistic Road Maps (PRMs), have recently become very popular, owing to their implementation simplicity and their advantages in handling high-dimensional problems. Although these algorithms work very well in practice, the quality of the computed solution is often poor, i.e., the solution can be far from the optimal one. A recent variation of RRT, namely the RRT* algorithm, bypasses this drawback of the traditional RRT algorithm by ensuring asymptotic optimality as the number of samples tends to infinity. Nonetheless, the convergence rate to the optimal solution may still be slow. This paper presents a new incremental sampling-based motion planning algorithm based on Rapidly-exploring Random Graphs (RRG), denoted RRT# (RRT "sharp"), which also guarantees asymptotic optimality and, in addition, ensures that the constructed spanning tree of the geometric graph is consistent after each iteration. In consistent trees, the vertices that have the potential to be part of the optimal solution have the minimum cost-to-come value. This implies that the best possible solution is readily computed if there are some vertices in the current graph that are already in the goal region. Numerical results comparing the proposed algorithm with RRT* are presented.
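The vertex-consistency property in the abstract above can be made concrete: a vertex is consistent when its recorded cost-to-come equals the best cost achievable through any incoming edge. The following is a minimal, illustrative sketch of restoring consistency on a small weighted graph by edge relaxation (a Bellman-Ford-style sweep, not the paper's RRT# implementation):

```python
import math

def make_consistent(g, cost):
    """Relax edges until every vertex's cost-to-come equals the best value
    achievable through an incoming edge, i.e. until all vertices are
    consistent. g: dict vertex -> list of (neighbor, edge_weight);
    cost: dict vertex -> current cost-to-come (source initialized to 0)."""
    changed = True
    while changed:
        changed = False
        for u, edges in g.items():
            for v, w in edges:
                if cost[u] + w < cost[v] - 1e-12:  # v is inconsistent
                    cost[v] = cost[u] + w
                    changed = True
    return cost

graph = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.5)], "b": []}
cost = {"s": 0.0, "a": math.inf, "b": math.inf}
print(make_consistent(graph, cost))  # {'s': 0.0, 'a': 1.0, 'b': 2.5}
```

Once all vertices are consistent, any goal vertex already in the graph carries its best achievable cost, which is what lets RRT# read off the best current solution after every iteration.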
1509.03853
Sergey Slavnov A
Sergey Slavnov
On Banach spaces of sequences and free linear logic exponential modality
null
Math. Struct. Comp. Sci. 29 (2019) 215-242
10.1017/S0960129517000251
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a category of vector spaces modelling full propositional linear logic, similar to probabilistic coherence spaces and to Koethe sequence spaces. Its objects are {\it rigged sequence spaces}: Banach spaces of sequences, with norms defined from pairing with finite sequences; morphisms are bounded linear maps, continuous in a suitable topology. The main interest of the work is that our model gives a realization of the free construction of the linear logic exponential.
[ { "created": "Sun, 13 Sep 2015 14:44:00 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2016 14:46:22 GMT", "version": "v2" } ]
2019-02-20
[ [ "Slavnov", "Sergey", "" ] ]
We introduce a category of vector spaces modelling full propositional linear logic, similar to probabilistic coherence spaces and to Koethe sequence spaces. Its objects are {\it rigged sequence spaces}: Banach spaces of sequences, with norms defined from pairing with finite sequences; morphisms are bounded linear maps, continuous in a suitable topology. The main interest of the work is that our model gives a realization of the free construction of the linear logic exponential.
2301.12634
Diego Misseroni
Samantha Mora, Nicola M. Pugno, Diego Misseroni
3D printed architected lattice structures by material jetting
33 pages, 12 figures
Materials Today, 59, 107-132, 2022
10.1016/j.mattod.2022.05.008
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-precision 3D printing technology opens up almost endless opportunities for designing the complex shapes found in tailored architected materials. The scope of this work is to review the latest studies on 3D printed lattice structures that involve the use of photopolymers fabricated by Material Jetting (MJ), with a focus on the widely used PolyJet and MultiJet techniques. The main aspects governing this printing process are introduced to determine their influence on the fabrication of 3D printed lattices. Experimental studies, their assumptions, and the constitutive models used in the corresponding numerical simulations are analyzed. Furthermore, an overview of the most extensively studied 3D printed architected lattice materials is presented, emphasizing the mechanical performance achieved through the use of Ashby plots. We then highlight the advantages, limitations, and challenges of material jetting technology for manufacturing tunable architected materials for innovative devices, oriented toward several engineering applications. Finally, possible approaches for future work and gaps to be covered by further research are indicated, including cost and environment-related issues.
[ { "created": "Mon, 30 Jan 2023 03:33:16 GMT", "version": "v1" } ]
2023-01-31
[ [ "Mora", "Samantha", "" ], [ "Pugno", "Nicola M.", "" ], [ "Misseroni", "Diego", "" ] ]
High-precision 3D printing technology opens up almost endless opportunities for designing the complex shapes found in tailored architected materials. The scope of this work is to review the latest studies on 3D printed lattice structures that involve the use of photopolymers fabricated by Material Jetting (MJ), with a focus on the widely used PolyJet and MultiJet techniques. The main aspects governing this printing process are introduced to determine their influence on the fabrication of 3D printed lattices. Experimental studies, their assumptions, and the constitutive models used in the corresponding numerical simulations are analyzed. Furthermore, an overview of the most extensively studied 3D printed architected lattice materials is presented, emphasizing the mechanical performance achieved through the use of Ashby plots. We then highlight the advantages, limitations, and challenges of material jetting technology for manufacturing tunable architected materials for innovative devices, oriented toward several engineering applications. Finally, possible approaches for future work and gaps to be covered by further research are indicated, including cost and environment-related issues.
2202.13903
Tristan Deleu
Tristan Deleu, Ant\'onio G\'ois, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, Yoshua Bengio
Bayesian Structure Learning with Generative Flow Networks
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) structure of Bayesian networks from data. Defining such a distribution is very challenging, due to the combinatorially large sample space, and approximations based on MCMC are often required. Recently, a novel class of probabilistic models, called Generative Flow Networks (GFlowNets), has been introduced as a general framework for the generative modeling of discrete and composite objects, such as graphs. In this work, we propose to use a GFlowNet as an alternative to MCMC for approximating the posterior distribution over the structure of Bayesian networks, given a dataset of observations. Generating a sample DAG from this approximate distribution is viewed as a sequential decision problem, where the graph is constructed one edge at a time, based on learned transition probabilities. Through evaluation on both simulated and real data, we show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs, and it compares favorably against other methods based on MCMC or variational inference.
[ { "created": "Mon, 28 Feb 2022 15:53:10 GMT", "version": "v1" }, { "created": "Tue, 28 Jun 2022 18:08:32 GMT", "version": "v2" } ]
2022-06-30
[ [ "Deleu", "Tristan", "" ], [ "Góis", "António", "" ], [ "Emezue", "Chris", "" ], [ "Rankawat", "Mansi", "" ], [ "Lacoste-Julien", "Simon", "" ], [ "Bauer", "Stefan", "" ], [ "Bengio", "Yoshua", "" ] ]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) structure of Bayesian networks from data. Defining such a distribution is very challenging, due to the combinatorially large sample space, and approximations based on MCMC are often required. Recently, a novel class of probabilistic models, called Generative Flow Networks (GFlowNets), has been introduced as a general framework for the generative modeling of discrete and composite objects, such as graphs. In this work, we propose to use a GFlowNet as an alternative to MCMC for approximating the posterior distribution over the structure of Bayesian networks, given a dataset of observations. Generating a sample DAG from this approximate distribution is viewed as a sequential decision problem, where the graph is constructed one edge at a time, based on learned transition probabilities. Through evaluation on both simulated and real data, we show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs, and it compares favorably against other methods based on MCMC or variational inference.
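The "one edge at a time" view in the abstract above can be sketched in a few lines. This is illustrative only: the uniform choice over valid edges and the fixed stopping probability stand in for the learned transition probabilities of the actual GFlowNet.

```python
import itertools
import random

def creates_cycle(edges, new_edge):
    """True if adding new_edge = (u, v) would close a directed cycle,
    i.e. if u is already reachable from v."""
    u, v = new_edge
    stack, seen = [v], set()
    while stack:
        x = stack.pop()
        if x == u:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(w for s, w in edges if s == x)
    return False

def sample_dag(n_nodes, p_stop=0.3, rng=random.Random(0)):
    """Grow a DAG one edge at a time, masking actions that would create a
    cycle; a trained policy would replace the uniform edge choice here."""
    edges = set()
    while rng.random() > p_stop:
        candidates = [(u, v)
                      for u, v in itertools.permutations(range(n_nodes), 2)
                      if (u, v) not in edges and not creates_cycle(edges, (u, v))]
        if not candidates:
            break
        edges.add(rng.choice(candidates))
    return sorted(edges)

print(sample_dag(4))   # e.g. [(0, 2), (3, 1)] -- one sampled DAG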
2007.16011
Nagender Aneja
Sandhya Aneja and Siti Nur Afikah Bte Abdul Mazid and Nagender Aneja
Neural Machine Translation model for University Email Application
International Conference on Natural Language Processing (ICNLP 2020), July 11-13, 2020
International Conference on Natural Language Processing (ICNLP 2020), July 11-13, 2020
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine translation has many applications, such as news translation, email translation, and official letter translation. Commercial translators, e.g., Google Translate, lag in regional vocabulary and are unable to learn the bilingual text in the source and target languages within the input. In this paper, a regional-vocabulary-based, application-oriented Neural Machine Translation (NMT) model is proposed over a dataset of emails used at the University for communication over a period of three years. A state-of-the-art sequence-to-sequence neural network for ML -> EN and EN -> ML translations is compared with Google Translate, using a Gated Recurrent Unit recurrent neural network machine translation model with an attention decoder. The low BLEU score of Google Translate in comparison to our model indicates that application-based regional models are better. The low BLEU scores for EN -> ML of both our model and Google Translate indicate that the Malay language has complex language features relative to English.
[ { "created": "Mon, 20 Jul 2020 15:05:16 GMT", "version": "v1" } ]
2020-08-04
[ [ "Aneja", "Sandhya", "" ], [ "Mazid", "Siti Nur Afikah Bte Abdul", "" ], [ "Aneja", "Nagender", "" ] ]
Machine translation has many applications, such as news translation, email translation, and official letter translation. Commercial translators, e.g., Google Translate, lag in regional vocabulary and are unable to learn the bilingual text in the source and target languages within the input. In this paper, a regional-vocabulary-based, application-oriented Neural Machine Translation (NMT) model is proposed over a dataset of emails used at the University for communication over a period of three years. A state-of-the-art sequence-to-sequence neural network for ML -> EN and EN -> ML translations is compared with Google Translate, using a Gated Recurrent Unit recurrent neural network machine translation model with an attention decoder. The low BLEU score of Google Translate in comparison to our model indicates that application-based regional models are better. The low BLEU scores for EN -> ML of both our model and Google Translate indicate that the Malay language has complex language features relative to English.
2312.01104
Yumeng Li
Yumeng Li, Yaoxiang Ding, Zhong Ren, Kun Zhou
QPoser: Quantized Explicit Pose Prior Modeling for Controllable Pose Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explicit pose prior models compress human poses into latent representations for use in pose-related downstream tasks. A desirable explicit pose prior model should satisfy three properties: 1) correctness, i.e., ensuring that only physically possible poses are generated; 2) expressiveness, i.e., ensuring that details are preserved in generation; and 3) controllability, meaning that generation from reference poses and explicit instructions should be convenient. Existing explicit pose prior models fail to achieve all three properties, especially controllability. To address this, we propose QPoser, a highly controllable explicit pose prior model that guarantees correctness and expressiveness. In QPoser, a multi-head vector quantized autoencoder (MS-VQVAE) is proposed for obtaining expressive and distributed pose representations. Furthermore, a global-local feature integration mechanism (GLIF-AE) is utilized to disentangle the latent representation and integrate full-body information into local-joint features. Experimental results show that QPoser significantly outperforms state-of-the-art approaches in representing expressive and correct poses, while being easy to use for detailed conditional generation from reference poses and prompting instructions.
[ { "created": "Sat, 2 Dec 2023 10:44:34 GMT", "version": "v1" } ]
2023-12-05
[ [ "Li", "Yumeng", "" ], [ "Ding", "Yaoxiang", "" ], [ "Ren", "Zhong", "" ], [ "Zhou", "Kun", "" ] ]
Explicit pose prior models compress human poses into latent representations for use in pose-related downstream tasks. A desirable explicit pose prior model should satisfy three properties: 1) correctness, i.e., ensuring that only physically possible poses are generated; 2) expressiveness, i.e., ensuring that details are preserved in generation; and 3) controllability, meaning that generation from reference poses and explicit instructions should be convenient. Existing explicit pose prior models fail to achieve all three properties, especially controllability. To address this, we propose QPoser, a highly controllable explicit pose prior model that guarantees correctness and expressiveness. In QPoser, a multi-head vector quantized autoencoder (MS-VQVAE) is proposed for obtaining expressive and distributed pose representations. Furthermore, a global-local feature integration mechanism (GLIF-AE) is utilized to disentangle the latent representation and integrate full-body information into local-joint features. Experimental results show that QPoser significantly outperforms state-of-the-art approaches in representing expressive and correct poses, while being easy to use for detailed conditional generation from reference poses and prompting instructions.
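The vector-quantization step at the heart of a VQ-VAE-style pose prior like the one above can be shown in a few lines. This sketch is generic VQ, not QPoser's multi-head variant; the codebook size and latent dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 learned codes, 64-dim latents

def quantize(z):
    """Replace each row of z with its nearest codebook entry (L2 distance).
    Returns the quantized vectors and the discrete code indices."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (B, 512)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

z = rng.normal(size=(8, 64))            # a batch of 8 encoder outputs
zq, codes = quantize(z)
print(codes)                            # discrete "pose tokens" per sample
```

The discrete indices are what make the representation distributed and easy to manipulate for conditional generation; a multi-head variant would run several such codebooks in parallel.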
2401.04956
Hongyu Zhu
Huafeng Qin, Hongyu Zhu, Xin Jin, Qun Song, Mounim A. El-Yacoubi, and Xinbo Gao
EmMixformer: Mix transformer for eye movement recognition
null
null
null
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eye movement (EM) is a new, highly secure biometric behavioral modality that has received increasing attention in recent years. Although deep neural networks, such as convolutional neural networks (CNNs), have recently achieved promising performance, current solutions fail to capture both local and global temporal dependencies within eye movement data. To overcome this problem, we propose in this paper a mixed transformer, termed EmMixformer, to extract time- and frequency-domain information for eye movement recognition. To this end, we propose a mixed block consisting of three modules: a transformer, an attention long short-term memory (attention LSTM), and a Fourier transformer. We are the first to attempt to leverage transformers to learn long temporal dependencies within eye movement. Second, we incorporate the attention mechanism into the LSTM to propose an attention LSTM, with the aim of learning short temporal dependencies. Third, we perform self-attention in the frequency domain to learn global features. As the three modules provide complementary feature representations in terms of local and global dependencies, the proposed EmMixformer is capable of improving recognition accuracy. Experimental results on our eye movement dataset and two public eye movement datasets show that the proposed EmMixformer outperforms the state of the art by achieving the lowest verification error.
[ { "created": "Wed, 10 Jan 2024 06:45:37 GMT", "version": "v1" }, { "created": "Thu, 9 May 2024 05:33:03 GMT", "version": "v2" } ]
2024-05-10
[ [ "Qin", "Huafeng", "" ], [ "Zhu", "Hongyu", "" ], [ "Jin", "Xin", "" ], [ "Song", "Qun", "" ], [ "El-Yacoubi", "Mounim A.", "" ], [ "Gao", "Xinbo", "" ] ]
Eye movement (EM) is a new, highly secure biometric behavioral modality that has received increasing attention in recent years. Although deep neural networks, such as convolutional neural networks (CNNs), have recently achieved promising performance, current solutions fail to capture both local and global temporal dependencies within eye movement data. To overcome this problem, we propose in this paper a mixed transformer, termed EmMixformer, to extract time- and frequency-domain information for eye movement recognition. To this end, we propose a mixed block consisting of three modules: a transformer, an attention long short-term memory (attention LSTM), and a Fourier transformer. We are the first to attempt to leverage transformers to learn long temporal dependencies within eye movement. Second, we incorporate the attention mechanism into the LSTM to propose an attention LSTM, with the aim of learning short temporal dependencies. Third, we perform self-attention in the frequency domain to learn global features. As the three modules provide complementary feature representations in terms of local and global dependencies, the proposed EmMixformer is capable of improving recognition accuracy. Experimental results on our eye movement dataset and two public eye movement datasets show that the proposed EmMixformer outperforms the state of the art by achieving the lowest verification error.
1807.00975
Jie Liu
Jie Liu, Cheng Sun, Xiang Xu, Baomin Xu, Shuangyuan Yu
A Spatial and Temporal Features Mixture Model with Body Parts for Video-based Person Re-Identification
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video-based person re-identification aims to recognize a person across different cameras, a crucial task in visual surveillance systems. Most previous methods focused mainly on features of the full body in each frame. In this paper, we propose a novel Spatial and Temporal Features Mixture Model (STFMM) based on a convolutional neural network (CNN) and a recurrent neural network (RNN), in which the human body is split into $N$ parts in the horizontal direction so that more specific features can be obtained. The proposed method skillfully integrates the features of each part to achieve a more expressive representation of each person. We first split the video sequence into $N$ part sequences, which include the information of the head, waist, legs, and so on. The features are then extracted by STFMM, whose $2N$ inputs are obtained from the developed Siamese network, and these features are combined into a discriminative representation for one person. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets. The results demonstrate that our approach outperforms existing methods for video-based person re-identification. It achieves a rank-1 CMC accuracy of 74\% on the iLIDS-VID dataset, exceeding the most recently developed method, ASTPN, by 12\%. For cross-dataset testing, our method achieves a rank-1 CMC accuracy of 48\%, exceeding the ASTPN method by 18\%, which shows that our model has significant stability.
[ { "created": "Tue, 3 Jul 2018 04:33:22 GMT", "version": "v1" } ]
2018-07-04
[ [ "Liu", "Jie", "" ], [ "Sun", "Cheng", "" ], [ "Xu", "Xiang", "" ], [ "Xu", "Baomin", "" ], [ "Yu", "Shuangyuan", "" ] ]
Video-based person re-identification aims to recognize a person across different cameras, a crucial task in visual surveillance systems. Most previous methods focused mainly on features of the full body in each frame. In this paper, we propose a novel Spatial and Temporal Features Mixture Model (STFMM) based on a convolutional neural network (CNN) and a recurrent neural network (RNN), in which the human body is split into $N$ parts in the horizontal direction so that more specific features can be obtained. The proposed method skillfully integrates the features of each part to achieve a more expressive representation of each person. We first split the video sequence into $N$ part sequences, which include the information of the head, waist, legs, and so on. The features are then extracted by STFMM, whose $2N$ inputs are obtained from the developed Siamese network, and these features are combined into a discriminative representation for one person. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets. The results demonstrate that our approach outperforms existing methods for video-based person re-identification. It achieves a rank-1 CMC accuracy of 74\% on the iLIDS-VID dataset, exceeding the most recently developed method, ASTPN, by 12\%. For cross-dataset testing, our method achieves a rank-1 CMC accuracy of 48\%, exceeding the ASTPN method by 18\%, which shows that our model has significant stability.
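The body-part split described in the abstract above amounts to slicing each frame into $N$ horizontal bands before feature extraction. A minimal sketch follows; the frame size and $N$ are illustrative, and the CNN/RNN stages that consume the bands are omitted.

```python
import numpy as np

def split_horizontal(frames, n_parts):
    """Split a video clip of shape (T, H, W, C) into n_parts sequences,
    each covering one horizontal band of the body (head, torso, legs, ...)."""
    t, h, w, c = frames.shape
    band = h // n_parts
    return [frames[:, i * band:(i + 1) * band] for i in range(n_parts)]

clip = np.zeros((16, 128, 64, 3))           # 16 frames, 128x64 RGB
parts = split_horizontal(clip, n_parts=4)
print([p.shape for p in parts])             # 4 sequences of (16, 32, 64, 3)
```

Each band sequence would then feed its own spatial/temporal feature extractor, and the per-part features would be concatenated into the final person representation.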
2010.11871
Hideyuki Tachibana
Hideyuki Tachibana
Towards Listening to 10 People Simultaneously: An Efficient Permutation Invariant Training of Audio Source Separation Using Sinkhorn's Algorithm
5 pages, 8 figures, IEEE ICASSP 2021
Proc. ICASSP (2021)
10.1109/ICASSP39728.2021.9414508
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In neural network-based monaural speech separation techniques, it has recently become common to evaluate the loss using the permutation invariant training (PIT) loss. However, ordinary PIT requires trying all $N!$ permutations between the $N$ ground truths and the $N$ estimates. Since the factorial complexity explodes very rapidly as $N$ increases, PIT-based training is practical only when the number of source signals is small, such as $N = 2$ or $3$. To overcome this limitation, this paper proposes SinkPIT, a novel variant of the PIT loss, which is much more efficient than the ordinary PIT loss when $N$ is large. SinkPIT is based on Sinkhorn's matrix balancing algorithm, which efficiently finds a doubly stochastic matrix that approximates the best permutation in a differentiable manner. The author conducted an experiment to train a neural network model to decompose a single-channel mixture into 10 sources using SinkPIT, and obtained promising results.
[ { "created": "Thu, 22 Oct 2020 17:08:17 GMT", "version": "v1" }, { "created": "Sun, 16 May 2021 13:40:26 GMT", "version": "v2" } ]
2021-05-18
[ [ "Tachibana", "Hideyuki", "" ] ]
In neural network-based monaural speech separation techniques, it has recently become common to evaluate the loss using the permutation invariant training (PIT) loss. However, ordinary PIT requires trying all $N!$ permutations between the $N$ ground truths and the $N$ estimates. Since the factorial complexity explodes very rapidly as $N$ increases, PIT-based training is practical only when the number of source signals is small, such as $N = 2$ or $3$. To overcome this limitation, this paper proposes SinkPIT, a novel variant of the PIT loss, which is much more efficient than the ordinary PIT loss when $N$ is large. SinkPIT is based on Sinkhorn's matrix balancing algorithm, which efficiently finds a doubly stochastic matrix that approximates the best permutation in a differentiable manner. The author conducted an experiment to train a neural network model to decompose a single-channel mixture into 10 sources using SinkPIT, and obtained promising results.
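Sinkhorn's matrix balancing, the core of the SinkPIT loss above, alternately normalizes the rows and columns of a positive matrix until it is approximately doubly stochastic. A minimal sketch, assuming the matrix is built from pairwise source/estimate similarities (in the actual loss this would be kept differentiable end to end):

```python
import numpy as np

def sinkhorn(sim, n_iter=50, tau=0.1):
    """Turn a similarity matrix into an approximately doubly stochastic one
    by alternating row and column normalization of exp(sim / tau); lower tau
    pushes the result closer to a hard permutation matrix."""
    p = np.exp(sim / tau)
    for _ in range(n_iter):
        p /= p.sum(axis=1, keepdims=True)   # rows sum to 1
        p /= p.sum(axis=0, keepdims=True)   # columns sum to 1
    return p

rng = np.random.default_rng(0)
p = sinkhorn(rng.normal(size=(10, 10)))     # N = 10 sources vs. 10 estimates
print(p.sum(0).round(3), p.sum(1).round(3)) # all near 1: a soft permutation
```

This costs a few matrix normalizations instead of enumerating all $10! \approx 3.6$ million permutations, which is what makes $N = 10$ tractable.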
2306.04859
Dake Chen
Dake Chen, Christine Goins, Maxwell Waugaman, Georgios D. Dimou, Peter A. Beerel
Island-based Random Dynamic Voltage Scaling vs ML-Enhanced Power Side-Channel Attacks
null
Proceedings of the Great Lakes Symposium on VLSI 2023
10.1145/3583781.3590266
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks. We first analyze the impact of the number of independent voltage islands on the resulting signal-to-noise ratio and trace misalignment. As part of our analysis of misalignment, we propose a novel unsupervised machine learning (ML) based attack that is effective on systems with three or fewer independent voltages. Our results show that iRDVS with four voltage islands, however, cannot be broken with 200k encryption traces, suggesting that iRDVS can be effective. We conclude by describing an iRDVS test chip in a 12nm FinFET process that incorporates three variants of an AES-256 accelerator, all originating from the same RTL: a synchronous core, an asynchronous core with no protection, and a core employing the iRDVS technique using asynchronous logic. Lab measurements from the chips indicated that both unprotected variants failed the test vector leakage assessment (TVLA) security metric test, while the iRDVS core proved secure in a variety of configurations.
[ { "created": "Thu, 8 Jun 2023 01:12:19 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 18:16:03 GMT", "version": "v2" } ]
2023-06-16
[ [ "Chen", "Dake", "" ], [ "Goins", "Christine", "" ], [ "Waugaman", "Maxwell", "" ], [ "Dimou", "Georgios D.", "" ], [ "Beerel", "Peter A.", "" ] ]
In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks. We first analyze the impact of the number of independent voltage islands on the resulting signal-to-noise ratio and trace misalignment. As part of our analysis of misalignment, we propose a novel unsupervised machine learning (ML) based attack that is effective on systems with three or fewer independent voltages. Our results show that iRDVS with four voltage islands, however, cannot be broken with 200k encryption traces, suggesting that iRDVS can be effective. We conclude by describing an iRDVS test chip in a 12nm FinFET process that incorporates three variants of an AES-256 accelerator, all originating from the same RTL: a synchronous core, an asynchronous core with no protection, and a core employing the iRDVS technique using asynchronous logic. Lab measurements from the chips indicated that both unprotected variants failed the test vector leakage assessment (TVLA) security metric test, while the iRDVS core proved secure in a variety of configurations.
2203.04051
Snehal Jauhri
Snehal Jauhri, Jan Peters, Georgia Chalvatzaki
Robot Learning of Mobile Manipulation with Reachability Behavior Priors
Accepted: RA-L & IROS 2022
null
10.1109/LRA.2022.3188109
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile Manipulation (MM) systems are ideal candidates for taking up the role of a personal assistant in unstructured real-world environments. Among other challenges, MM requires effective coordination of the robot's embodiments for executing tasks that require both mobility and manipulation. Reinforcement Learning (RL) holds the promise of endowing robots with adaptive behaviors, but most methods require prohibitively large amounts of data for learning a useful control policy. In this work, we study the integration of robotic reachability priors in actor-critic RL methods for accelerating the learning of MM for reaching and fetching tasks. Namely, we consider the problem of optimal base placement and the subsequent decision of whether to activate the arm for reaching a 6D target. For this, we devise a novel Hybrid RL method that handles discrete and continuous actions jointly, resorting to the Gumbel-Softmax reparameterization. Next, we train a reachability prior using data from the operational robot workspace, inspired by classical methods. Subsequently, we derive Boosted Hybrid RL (BHyRL), a novel algorithm for learning Q-functions by modeling them as a sum of residual approximators. Every time a new task needs to be learned, we can transfer our learned residuals and learn the component of the Q-function that is task-specific, hence, maintaining the task structure from prior behaviors. Moreover, we find that regularizing the target policy with a prior policy yields more expressive behaviors. We evaluate our method in simulation in reaching and fetching tasks of increasing difficulty, and we show the superior performance of BHyRL against baseline methods. Finally, we zero-transfer our learned 6D fetching policy with BHyRL to our MM robot TIAGo++. For more details and code release, please refer to our project site: irosalab.com/rlmmbp
[ { "created": "Tue, 8 Mar 2022 12:44:42 GMT", "version": "v1" }, { "created": "Fri, 20 May 2022 11:52:33 GMT", "version": "v2" }, { "created": "Wed, 6 Jul 2022 19:26:50 GMT", "version": "v3" } ]
2022-10-20
[ [ "Jauhri", "Snehal", "" ], [ "Peters", "Jan", "" ], [ "Chalvatzaki", "Georgia", "" ] ]
Mobile Manipulation (MM) systems are ideal candidates for taking up the role of a personal assistant in unstructured real-world environments. Among other challenges, MM requires effective coordination of the robot's embodiments for executing tasks that require both mobility and manipulation. Reinforcement Learning (RL) holds the promise of endowing robots with adaptive behaviors, but most methods require prohibitively large amounts of data for learning a useful control policy. In this work, we study the integration of robotic reachability priors in actor-critic RL methods for accelerating the learning of MM for reaching and fetching tasks. Namely, we consider the problem of optimal base placement and the subsequent decision of whether to activate the arm for reaching a 6D target. For this, we devise a novel Hybrid RL method that handles discrete and continuous actions jointly, resorting to the Gumbel-Softmax reparameterization. Next, we train a reachability prior using data from the operational robot workspace, inspired by classical methods. Subsequently, we derive Boosted Hybrid RL (BHyRL), a novel algorithm for learning Q-functions by modeling them as a sum of residual approximators. Every time a new task needs to be learned, we can transfer our learned residuals and learn the component of the Q-function that is task-specific, hence, maintaining the task structure from prior behaviors. Moreover, we find that regularizing the target policy with a prior policy yields more expressive behaviors. We evaluate our method in simulation in reaching and fetching tasks of increasing difficulty, and we show the superior performance of BHyRL against baseline methods. Finally, we zero-transfer our learned 6D fetching policy with BHyRL to our MM robot TIAGo++. For more details and code release, please refer to our project site: irosalab.com/rlmmbp
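The Gumbel-Softmax reparameterization mentioned in the abstract above gives a differentiable relaxation of the discrete part of a hybrid action (e.g., whether to move the base or activate the arm). A minimal numpy sketch, with illustrative logits and temperature, not the paper's implementation:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Relaxed sample from a categorical distribution: add Gumbel noise to
    the logits, then take a temperature-controlled softmax. As tau -> 0 the
    output approaches a hard one-hot sample while staying differentiable."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                 # standard Gumbel noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                 # stable softmax
    return y / y.sum()

logits = np.array([1.2, -0.3])              # scores for [move base, use arm]
print(gumbel_softmax(logits))                # soft one-hot over the two modes
```

In a hybrid policy, this relaxed discrete sample would be combined with a continuous action head (e.g., a Gaussian over base placements), letting gradients flow through both parts jointly.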
2204.05186
Pratyusha Sharma
Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, Dieter Fox
Correcting Robot Plans with Natural Language Feedback
10 pages, 13 figures
null
null
null
cs.RO cs.AI cs.CL cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
When humans design cost or goal specifications for robots, they often produce specifications that are ambiguous, underspecified, or beyond planners' ability to solve. In these cases, corrections provide a valuable tool for human-in-the-loop robot control. Corrections might take the form of new goal specifications, new constraints (e.g. to avoid specific objects), or hints for planning algorithms (e.g. to visit specific waypoints). Existing correction methods (e.g. using a joystick or direct manipulation of an end effector) require full teleoperation or real-time interaction. In this paper, we explore natural language as an expressive and flexible tool for robot correction. We describe how to map from natural language sentences to transformations of cost functions. We show that these transformations enable users to correct goals, update robot motions to accommodate additional user preferences, and recover from planning errors. These corrections can be leveraged to get 81% and 93% success rates on tasks where the original planner failed, with either one or two language corrections. Our method makes it possible to compose multiple constraints and generalizes to unseen scenes, objects, and sentences in simulated environments and real-world environments.
[ { "created": "Mon, 11 Apr 2022 15:22:43 GMT", "version": "v1" } ]
2022-04-12
[ [ "Sharma", "Pratyusha", "" ], [ "Sundaralingam", "Balakumar", "" ], [ "Blukis", "Valts", "" ], [ "Paxton", "Chris", "" ], [ "Hermans", "Tucker", "" ], [ "Torralba", "Antonio", "" ], [ "Andreas", "Jacob", "" ], [ "Fox", "Dieter", "" ] ]
When humans design cost or goal specifications for robots, they often produce specifications that are ambiguous, underspecified, or beyond planners' ability to solve. In these cases, corrections provide a valuable tool for human-in-the-loop robot control. Corrections might take the form of new goal specifications, new constraints (e.g. to avoid specific objects), or hints for planning algorithms (e.g. to visit specific waypoints). Existing correction methods (e.g. using a joystick or direct manipulation of an end effector) require full teleoperation or real-time interaction. In this paper, we explore natural language as an expressive and flexible tool for robot correction. We describe how to map from natural language sentences to transformations of cost functions. We show that these transformations enable users to correct goals, update robot motions to accommodate additional user preferences, and recover from planning errors. These corrections can be leveraged to get 81% and 93% success rates on tasks where the original planner failed, with either one or two language corrections. Our method makes it possible to compose multiple constraints and generalizes to unseen scenes, objects, and sentences in simulated environments and real-world environments.
1701.07807
Hua Sun
Hua Sun and Syed A. Jafar
Private Information Retrieval from MDS Coded Data with Colluding Servers: Settling a Conjecture by Freij-Hollanti et al.
null
null
null
null
cs.IT cs.CR cs.IR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A $(K, N, T, K_c)$ instance of the MDS-TPIR problem is comprised of $K$ messages and $N$ distributed servers. Each message is separately encoded through a $(K_c, N)$ MDS storage code. A user wishes to retrieve one message, as efficiently as possible, while revealing no information about the desired message index to any colluding set of up to $T$ servers. The fundamental limit on the efficiency of retrieval, i.e., the capacity of MDS-TPIR is known only at the extremes where either $T$ or $K_c$ belongs to $\{1,N\}$. The focus of this work is a recent conjecture by Freij-Hollanti, Gnilke, Hollanti and Karpuk which offers a general capacity expression for MDS-TPIR. We prove that the conjecture is false by presenting as a counterexample a PIR scheme for the setting $(K, N, T, K_c) = (2,4,2,2)$, which achieves the rate $3/5$, exceeding the conjectured capacity, $4/7$. Insights from the counterexample lead us to capacity characterizations for various instances of MDS-TPIR including all cases with $(K, N, T, K_c) = (2,N,T,N-1)$, where $N$ and $T$ can be arbitrary.
[ { "created": "Thu, 26 Jan 2017 18:35:53 GMT", "version": "v1" }, { "created": "Mon, 30 Jan 2017 17:13:14 GMT", "version": "v2" } ]
2017-01-31
[ [ "Sun", "Hua", "" ], [ "Jafar", "Syed A.", "" ] ]
A $(K, N, T, K_c)$ instance of the MDS-TPIR problem is comprised of $K$ messages and $N$ distributed servers. Each message is separately encoded through a $(K_c, N)$ MDS storage code. A user wishes to retrieve one message, as efficiently as possible, while revealing no information about the desired message index to any colluding set of up to $T$ servers. The fundamental limit on the efficiency of retrieval, i.e., the capacity of MDS-TPIR is known only at the extremes where either $T$ or $K_c$ belongs to $\{1,N\}$. The focus of this work is a recent conjecture by Freij-Hollanti, Gnilke, Hollanti and Karpuk which offers a general capacity expression for MDS-TPIR. We prove that the conjecture is false by presenting as a counterexample a PIR scheme for the setting $(K, N, T, K_c) = (2,4,2,2)$, which achieves the rate $3/5$, exceeding the conjectured capacity, $4/7$. Insights from the counterexample lead us to capacity characterizations for various instances of MDS-TPIR including all cases with $(K, N, T, K_c) = (2,N,T,N-1)$, where $N$ and $T$ can be arbitrary.
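Stated compactly, the refutation in the abstract above rests on a single rate comparison (all values taken directly from the text):

```latex
\[
  R_{\text{achieved}} \;=\; \tfrac{3}{5} \;=\; 0.600
  \;>\; 0.571 \;\approx\; \tfrac{4}{7} \;=\; C_{\text{conjectured}},
  \qquad (K, N, T, K_c) = (2, 4, 2, 2),
\]
% so the scheme strictly beats the conjectured capacity, disproving the
% conjecture of Freij-Hollanti et al.
```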
2405.05175
Eugene Bagdasaryan
Eugene Bagdasaryan, Ren Yi, Sahra Ghalebikesabi, Peter Kairouz, Marco Gruteser, Sewoong Oh, Borja Balle, Daniel Ramage
Air Gap: Protecting Privacy-Conscious Conversational Agents
null
null
null
null
cs.CR cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns. While these agents excel at understanding and acting on context, this capability can be exploited by malicious actors. We introduce a novel threat model where adversarial third-party apps manipulate the context of interaction to trick LLM-based agents into revealing private information not relevant to the task at hand. Grounded in the framework of contextual integrity, we introduce AirGapAgent, a privacy-conscious agent designed to prevent unintended data leakage by restricting the agent's access to only the data necessary for a specific task. Extensive experiments using Gemini, GPT, and Mistral models as agents validate our approach's effectiveness in mitigating this form of context hijacking while maintaining core agent functionality. For example, we show that a single-query context hijacking attack on a Gemini Ultra agent reduces its ability to protect user data from 94% to 45%, while an AirGapAgent achieves 97% protection, rendering the same attack ineffective.
[ { "created": "Wed, 8 May 2024 16:12:45 GMT", "version": "v1" } ]
2024-05-09
[ [ "Bagdasaryan", "Eugene", "" ], [ "Yi", "Ren", "" ], [ "Ghalebikesabi", "Sahra", "" ], [ "Kairouz", "Peter", "" ], [ "Gruteser", "Marco", "" ], [ "Oh", "Sewoong", "" ], [ "Balle", "Borja", "" ], [ "Ramage", "Daniel", "" ] ]
The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns. While these agents excel at understanding and acting on context, this capability can be exploited by malicious actors. We introduce a novel threat model where adversarial third-party apps manipulate the context of interaction to trick LLM-based agents into revealing private information not relevant to the task at hand. Grounded in the framework of contextual integrity, we introduce AirGapAgent, a privacy-conscious agent designed to prevent unintended data leakage by restricting the agent's access to only the data necessary for a specific task. Extensive experiments using Gemini, GPT, and Mistral models as agents validate our approach's effectiveness in mitigating this form of context hijacking while maintaining core agent functionality. For example, we show that a single-query context hijacking attack on a Gemini Ultra agent reduces its ability to protect user data from 94% to 45%, while an AirGapAgent achieves 97% protection, rendering the same attack ineffective.
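The data-minimization idea behind the agent described above can be sketched as a gate that releases only the user fields a trusted step deems necessary for the stated task, so a hijacked conversational context cannot widen access. This is an illustrative pseudocode-level sketch; the field names and the necessity table are assumptions, not the paper's implementation.

```python
# Minimal sketch of context-based data minimization: the conversational
# agent only ever sees the subset of user data that a separate component,
# which never sees the third-party conversation, marked as necessary.
USER_DATA = {"name": "A. User", "phone": "555-0100", "allergies": "peanuts"}

# Necessity decisions per task; in practice this would be a trusted model
# call, here it is a hand-written table.
NECESSARY = {"book_restaurant": {"name", "allergies"}}

def minimized_view(task):
    allowed = NECESSARY.get(task, set())
    return {k: v for k, v in USER_DATA.items() if k in allowed}

def agent_reply(task, conversation):
    data = minimized_view(task)   # a hijacked conversation cannot widen this
    return f"Acting on {task} with fields {sorted(data)}"

print(agent_reply("book_restaurant", "...possibly adversarial context..."))
```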
2401.11496
Sibel Kurt Toplu
Sibel Kurt Toplu, Talha Arikan, Pinar Aydo\u{g}du and O\u{g}uz Yayla
On a Group Under Which Symmetric Reed-Muller Codes are Invariant
17 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reed-Muller codes are a family of error-correcting codes that have been widely studied in coding theory. In 2020, Wei Yan and Sian-Jheng Lin introduced a variant of Reed-Muller codes, the so-called symmetric Reed-Muller codes. We investigate linear maps in the automorphism group of symmetric Reed-Muller codes and show that the set of these linear maps forms a subgroup of the general linear group, which is the automorphism group of punctured Reed-Muller codes. We provide a method to determine all the automorphisms in this subgroup explicitly for some special cases.
[ { "created": "Sun, 21 Jan 2024 14:00:23 GMT", "version": "v1" } ]
2024-01-23
[ [ "Toplu", "Sibel Kurt", "" ], [ "Arikan", "Talha", "" ], [ "AydoğDu", "Pinar", "" ], [ "Yayla", "OğUz", "" ] ]
Reed-Muller codes are a family of error-correcting codes that have been widely studied in coding theory. In 2020, Wei Yan and Sian-Jheng Lin introduced a variant of Reed-Muller codes, the so-called symmetric Reed-Muller codes. We investigate linear maps in the automorphism group of symmetric Reed-Muller codes and show that the set of these linear maps forms a subgroup of the general linear group, which is the automorphism group of punctured Reed-Muller codes. We provide a method to determine all the automorphisms in this subgroup explicitly for some special cases.
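For context, the classical Reed-Muller code whose punctured automorphism group is mentioned above can be recalled in one line (this is the standard definition, not the symmetric variant introduced by Yan and Lin):

```latex
\[
  \mathrm{RM}(r,m) \;=\; \bigl\{\, \bigl(f(v)\bigr)_{v \in \mathbb{F}_2^m} \;:\;
      f \in \mathbb{F}_2[x_1,\dots,x_m],\ \deg f \le r \,\bigr\},
\]
% a binary code of length 2^m obtained by evaluating all Boolean polynomials
% of degree at most r. Puncturing deletes the coordinate at v = 0, and for
% 1 <= r <= m-2 the automorphism group of the punctured code is the general
% linear group GL(m, 2) referred to in the abstract.
```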
1703.07348
Dajiang Zhou
Xushen Han, Dajiang Zhou, Shihao Wang, and Shinji Kimura
CNN-MERP: An FPGA-Based Memory-Efficient Reconfigurable Processor for Forward and Backward Propagation of Convolutional Neural Networks
null
ICCD 2016
null
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale deep convolutional neural networks (CNNs) are widely used in machine learning applications. While CNNs involve huge complexity, VLSI (ASIC and FPGA) chips that deliver high-density integration of computational resources are regarded as a promising platform for CNN implementation. At massive parallelism of computational units, however, the external memory bandwidth, which is constrained by the pin count of the VLSI chip, becomes the system bottleneck. Moreover, VLSI solutions are usually regarded as lacking the flexibility to be reconfigured for the various parameters of CNNs. This paper presents CNN-MERP to address these issues. CNN-MERP incorporates an efficient memory hierarchy that significantly reduces the bandwidth requirements through multiple optimizations, including on/off-chip data allocation, data-flow optimization, and data reuse. The proposed two-level reconfigurability is utilized to enable fast and efficient reconfiguration, based on the control logic and the multiboot feature of the FPGA. As a result, an external memory bandwidth requirement of 1.94 MB/GFlop is achieved, which is 55% lower than prior art. Under limited DRAM bandwidth, a system throughput of 1244 GFlop/s is achieved on the Virtex UltraScale platform, which is 5.48 times higher than state-of-the-art FPGA implementations.
[ { "created": "Wed, 22 Mar 2017 01:31:23 GMT", "version": "v1" } ]
2017-03-23
[ [ "Han", "Xushen", "" ], [ "Zhou", "Dajiang", "" ], [ "Wang", "Shihao", "" ], [ "Kimura", "Shinji", "" ] ]
Large-scale deep convolutional neural networks (CNNs) are widely used in machine learning applications. While CNNs involve huge complexity, VLSI (ASIC and FPGA) chips that deliver high-density integration of computational resources are regarded as a promising platform for CNN implementation. At massive parallelism of computational units, however, the external memory bandwidth, which is constrained by the pin count of the VLSI chip, becomes the system bottleneck. Moreover, VLSI solutions are usually regarded as lacking the flexibility to be reconfigured for the various parameters of CNNs. This paper presents CNN-MERP to address these issues. CNN-MERP incorporates an efficient memory hierarchy that significantly reduces the bandwidth requirements through multiple optimizations, including on/off-chip data allocation, data-flow optimization, and data reuse. The proposed two-level reconfigurability is utilized to enable fast and efficient reconfiguration, based on the control logic and the multiboot feature of the FPGA. As a result, an external memory bandwidth requirement of 1.94 MB/GFlop is achieved, which is 55% lower than prior art. Under limited DRAM bandwidth, a system throughput of 1244 GFlop/s is achieved on the Virtex UltraScale platform, which is 5.48 times higher than state-of-the-art FPGA implementations.
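The two headline numbers in the abstract above imply a concrete DRAM-bandwidth figure, worth making explicit (simple arithmetic on the quoted values):

```python
# Back-of-envelope check using the figures quoted in the abstract.
bandwidth_per_gflop = 1.94e6        # bytes per GFlop (1.94 MB/GFlop)
throughput = 1244.0                 # GFlop/s
required_bw = bandwidth_per_gflop * throughput
print(f"{required_bw / 1e9:.2f} GB/s of DRAM bandwidth")   # ~2.41 GB/s
```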
2310.07217
Matteo Risso
Alessio Burrello, Matteo Risso, Beatrice Alessandra Motetti, Enrico Macii, Luca Benini, Daniele Jahier Pagliari
Enhancing Neural Architecture Search with Multiple Hardware Constraints for Deep Learning Model Deployment on Tiny IoT Devices
Accepted for publication at the IEEE Transactions on Emerging Topics in Computing
null
10.1109/TETC.2023.3322033
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The rapid proliferation of computing domains relying on Internet of Things (IoT) devices has created a pressing need for efficient and accurate deep-learning (DL) models that can run on low-power devices. However, traditional DL models tend to be too complex and computationally intensive for typical IoT end-nodes. To address this challenge, Neural Architecture Search (NAS) has emerged as a popular design automation technique for co-optimizing the accuracy and complexity of deep neural networks. Nevertheless, existing NAS techniques require many iterations to produce a network that adheres to specific hardware constraints, such as the maximum memory available on the hardware or the maximum latency allowed by the target application. In this work, we propose a novel approach to incorporate multiple constraints into so-called Differentiable NAS optimization methods, which allows the generation, in a single shot, of a model that respects user-defined constraints on both memory and latency in a time comparable to a single standard training. The proposed approach is evaluated on five IoT-relevant benchmarks, including the MLPerf Tiny suite and Tiny ImageNet, demonstrating that, with a single search, it is possible to reduce memory and latency by 87.4% and 54.2%, respectively (as defined by our targets), while ensuring non-inferior accuracy on state-of-the-art hand-tuned deep neural networks for TinyML.
[ { "created": "Wed, 11 Oct 2023 06:09:14 GMT", "version": "v1" } ]
2023-10-12
[ [ "Burrello", "Alessio", "" ], [ "Risso", "Matteo", "" ], [ "Motetti", "Beatrice Alessandra", "" ], [ "Macii", "Enrico", "" ], [ "Benini", "Luca", "" ], [ "Pagliari", "Daniele Jahier", "" ] ]
The rapid proliferation of computing domains relying on Internet of Things (IoT) devices has created a pressing need for efficient and accurate deep-learning (DL) models that can run on low-power devices. However, traditional DL models tend to be too complex and computationally intensive for typical IoT end-nodes. To address this challenge, Neural Architecture Search (NAS) has emerged as a popular design automation technique for co-optimizing the accuracy and complexity of deep neural networks. Nevertheless, existing NAS techniques require many iterations to produce a network that adheres to specific hardware constraints, such as the maximum memory available on the hardware or the maximum latency allowed by the target application. In this work, we propose a novel approach to incorporate multiple constraints into so-called Differentiable NAS optimization methods, which allows the generation, in a single shot, of a model that respects user-defined constraints on both memory and latency in a time comparable to a single standard training. The proposed approach is evaluated on five IoT-relevant benchmarks, including the MLPerf Tiny suite and Tiny ImageNet, demonstrating that, with a single search, it is possible to reduce memory and latency by 87.4% and 54.2%, respectively (as defined by our targets), while ensuring non-inferior accuracy on state-of-the-art hand-tuned deep neural networks for TinyML.
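The multiple-constraint idea in the abstract above can be made concrete as a differentiable training objective in which each hardware constraint contributes a penalty only while it is violated. A minimal sketch, with an illustrative hinge-penalty form and weights, not the paper's exact formulation:

```python
def constrained_nas_loss(task_loss, mem, lat, mem_budget, lat_budget,
                         w_mem=1.0, w_lat=1.0):
    """Single-shot multi-constraint objective: the task loss plus one hinge
    penalty per constraint, normalized by its budget and active only when
    violated. In a Differentiable NAS setting, mem and lat would be
    differentiable estimates computed from the architecture parameters."""
    return (task_loss
            + w_mem * max(0.0, mem - mem_budget) / mem_budget
            + w_lat * max(0.0, lat - lat_budget) / lat_budget)

# Example: 600 kB model vs. a 512 kB budget, 12 ms latency vs. a 20 ms budget;
# only the memory term is active.
print(constrained_nas_loss(task_loss=0.9, mem=600e3, lat=12e-3,
                           mem_budget=512e3, lat_budget=20e-3))
```

Because every penalty vanishes once its constraint is met, a single gradient-based search can drive the architecture inside all budgets at once instead of iterating one constraint at a time.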
2009.06823
Zhenglun Kong
Wei Niu, Zhenglun Kong, Geng Yuan, Weiwen Jiang, Jiexiong Guan, Caiwen Ding, Pu Zhao, Sijia Liu, Bin Ren, Yanzhi Wang
Real-Time Execution of Large-scale Language Models on Mobile
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained large-scale language models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks. However, the limited weight storage and computational speed on hardware platforms have impeded the popularity of pre-trained models, especially in the era of edge computing. In this paper, we seek to find the best model structure of BERT for a given computation size to match specific devices. We propose the first compiler-aware neural architecture optimization framework. Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants. We evaluate our model on several NLP tasks, achieving competitive results on well-known benchmarks with lower latency on mobile devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU with 0.5-2% accuracy loss compared with BERT-base. Our overall framework achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor accuracy loss.
[ { "created": "Tue, 15 Sep 2020 01:59:17 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2020 17:53:07 GMT", "version": "v2" } ]
2020-10-23
[ [ "Niu", "Wei", "" ], [ "Kong", "Zhenglun", "" ], [ "Yuan", "Geng", "" ], [ "Jiang", "Weiwen", "" ], [ "Guan", "Jiexiong", "" ], [ "Ding", "Caiwen", "" ], [ "Zhao", "Pu", "" ], [ "Liu", "Sijia", "" ], [ "Ren", "Bin", "" ], [ "Wang", "Yanzhi", "" ] ]
Pre-trained large-scale language models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks. However, the limited weight storage and computational speed on hardware platforms have impeded the popularity of pre-trained models, especially in the era of edge computing. In this paper, we seek to find the best model structure of BERT for a given computation size to match specific devices. We propose the first compiler-aware neural architecture optimization framework. Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants. We evaluate our model on several NLP tasks, achieving competitive results on well-known benchmarks with lower latency on mobile devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU with 0.5-2% accuracy loss compared with BERT-base. Our overall framework achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor accuracy loss.
2112.04178
Qiaoyong Zhong
Kailin Xu, Fanfan Ye, Qiaoyong Zhong, Di Xie
Topology-aware Convolutional Neural Network for Efficient Skeleton-based Action Recognition
Accepted by AAAI 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of skeleton-based action recognition, graph convolutional networks (GCNs) have been rapidly developed, whereas convolutional neural networks (CNNs) have received less attention. One reason is that CNNs are considered poor at modeling the irregular skeleton topology. To alleviate this limitation, we propose a pure CNN architecture named Topology-aware CNN (Ta-CNN) in this paper. In particular, we develop a novel cross-channel feature augmentation module, which is a combination of map-attend-group-map operations. By applying the module to the coordinate level and the joint level subsequently, the topology feature is effectively enhanced. Notably, we theoretically prove that graph convolution is a special case of normal convolution when the joint dimension is treated as channels. This confirms that the topology modeling power of GCNs can also be implemented by using a CNN. Moreover, we creatively design a SkeletonMix strategy which mixes two persons in a unique manner and further boosts the performance. Extensive experiments are conducted on four widely used datasets, i.e., N-UCLA, SBU, NTU RGB+D and NTU RGB+D 120, to verify the effectiveness of Ta-CNN. We surpass existing CNN-based methods significantly. Compared with leading GCN-based methods, we achieve comparable performance with much less complexity in terms of the required GFLOPs and parameters.
[ { "created": "Wed, 8 Dec 2021 09:02:50 GMT", "version": "v1" }, { "created": "Thu, 9 Dec 2021 02:42:44 GMT", "version": "v2" } ]
2021-12-10
[ [ "Xu", "Kailin", "" ], [ "Ye", "Fanfan", "" ], [ "Zhong", "Qiaoyong", "" ], [ "Xie", "Di", "" ] ]
In the context of skeleton-based action recognition, graph convolutional networks (GCNs) have been rapidly developed, whereas convolutional neural networks (CNNs) have received less attention. One reason is that CNNs are considered poor at modeling the irregular skeleton topology. To alleviate this limitation, we propose a pure CNN architecture named Topology-aware CNN (Ta-CNN) in this paper. In particular, we develop a novel cross-channel feature augmentation module, which is a combination of map-attend-group-map operations. By applying the module to the coordinate level and the joint level subsequently, the topology feature is effectively enhanced. Notably, we theoretically prove that graph convolution is a special case of normal convolution when the joint dimension is treated as channels. This confirms that the topology modeling power of GCNs can also be implemented by using a CNN. Moreover, we creatively design a SkeletonMix strategy which mixes two persons in a unique manner and further boosts the performance. Extensive experiments are conducted on four widely used datasets, i.e., N-UCLA, SBU, NTU RGB+D and NTU RGB+D 120, to verify the effectiveness of Ta-CNN. We surpass existing CNN-based methods significantly. Compared with leading GCN-based methods, we achieve comparable performance with much less complexity in terms of the required GFLOPs and parameters.
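One way to see the "joints as channels" claim in the abstract: mixing joints with a graph operator is exactly what a 1x1 convolution does to channels. A minimal NumPy check (shapes and the random operator are illustrative):

```python
# Graph mixing A @ X over joints equals a 1x1 convolution when the joint
# dimension is treated as the channel dimension.
import numpy as np

rng = np.random.default_rng(0)
J, F = 5, 3
A = rng.normal(size=(J, J))        # graph operator over joints
X = rng.normal(size=(J, F))        # joints x features, joints as "channels"

def conv1x1(W, X):
    # out[c, t] = sum_k W[c, k] * X[k, t]: channel mixing, kernel size 1
    return np.einsum('ck,kt->ct', W, X)

print(np.allclose(A @ X, conv1x1(A, X)))   # True: graph mixing == 1x1 conv
```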
1303.5720
David Heckerman
David Heckerman, Eric J. Horvitz, Blackford Middleton
An Approximate Nonmyopic Computation for Value of Information
Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI1991)
null
null
UAI-P-1991-PG-135-141
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Value-of-information analyses provide a straightforward means for selecting the best next observation to make, and for determining whether it is better to gather additional information or to act immediately. Determining the next best test to perform, given a state of uncertainty about the world, requires a consideration of the value of making all possible sequences of observations. In practice, decision analysts and expert-system designers have avoided the intractability of exact computation of the value of information by relying on a myopic approximation. Myopic analyses are based on the assumption that only one additional test will be performed, even when there is an opportunity to make a large number of observations. We present a nonmyopic approximation for value of information that bypasses the traditional myopic analyses by exploiting the statistical properties of large samples.
[ { "created": "Wed, 20 Mar 2013 15:30:51 GMT", "version": "v1" }, { "created": "Sat, 16 May 2015 23:55:05 GMT", "version": "v2" } ]
2015-05-19
[ [ "Heckerman", "David", "" ], [ "Horvitz", "Eric J.", "" ], [ "Middleton", "Blackford", "" ] ]
Value-of-information analyses provide a straightforward means for selecting the best next observation to make, and for determining whether it is better to gather additional information or to act immediately. Determining the next best test to perform, given a state of uncertainty about the world, requires a consideration of the value of making all possible sequences of observations. In practice, decision analysts and expert-system designers have avoided the intractability of exact computation of the value of information by relying on a myopic approximation. Myopic analyses are based on the assumption that only one additional test will be performed, even when there is an opportunity to make a large number of observations. We present a nonmyopic approximation for value of information that bypasses the traditional myopic analyses by exploiting the statistical properties of large samples.
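The myopic approximation described above can be made concrete with a toy decision problem: compare the expected utility of acting now versus after observing a single test. All numbers (utilities, prior, test sensitivity/specificity, cost) are made up for illustration:

```python
# Myopic value of information for one binary test before a treat/wait choice.
p_disease = 0.3
U = {("treat", True): 80, ("treat", False): 60,
     ("wait",  True): 10, ("wait",  False): 100}
sens, spec = 0.9, 0.8          # assumed test characteristics
test_cost = 2.0

def eu(p):                      # expected utility of the best action at belief p
    return max(p * U[(a, True)] + (1 - p) * U[(a, False)] for a in ("treat", "wait"))

p_pos = sens * p_disease + (1 - spec) * (1 - p_disease)    # P(test = +)
post_pos = sens * p_disease / p_pos                        # Bayes update on +
post_neg = (1 - sens) * p_disease / (1 - p_pos)            # Bayes update on -

voi = p_pos * eu(post_pos) + (1 - p_pos) * eu(post_neg) - eu(p_disease)
print("myopic VOI:", voi, "-> test" if voi > test_cost else "-> act now")
```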
2402.15570
Vinu Sankar Sadasivan
Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, Soheil Feizi
Fast Adversarial Attacks on Language Models In One GPU Minute
null
null
null
null
cs.CR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce a novel class of fast, beam-search-based adversarial attacks (BEAST) for Language Models (LMs). BEAST employs interpretable parameters, enabling attackers to balance between attack speed, success rate, and the readability of adversarial prompts. The computational efficiency of BEAST allows us to investigate its applications on LMs for jailbreaking, eliciting hallucinations, and privacy attacks. Our gradient-free targeted attack can jailbreak aligned LMs with high attack success rates within one minute. For instance, BEAST can jailbreak Vicuna-7B-v1.5 within one minute with a success rate of 89%, compared to a gradient-based baseline that takes over an hour to achieve a 70% success rate using a single Nvidia RTX A6000 48GB GPU. Additionally, we discover a unique outcome wherein our untargeted attack induces hallucinations in LM chatbots. Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack. We also find that 22% of the time, BEAST causes Vicuna to generate outputs that are not relevant to the original prompt. Further, we use BEAST to generate adversarial prompts in a few seconds that can boost the performance of existing membership inference attacks for LMs. We believe that our fast attack, BEAST, has the potential to accelerate research in LM security and privacy. Our codebase is publicly available at https://github.com/vinusankars/BEAST.
[ { "created": "Fri, 23 Feb 2024 19:12:53 GMT", "version": "v1" } ]
2024-02-27
[ [ "Sadasivan", "Vinu Sankar", "" ], [ "Saha", "Shoumik", "" ], [ "Sriramanan", "Gaurang", "" ], [ "Kattakinda", "Priyatham", "" ], [ "Chegini", "Atoosa", "" ], [ "Feizi", "Soheil", "" ] ]
In this paper, we introduce a novel class of fast, beam-search-based adversarial attacks (BEAST) for Language Models (LMs). BEAST employs interpretable parameters, enabling attackers to balance between attack speed, success rate, and the readability of adversarial prompts. The computational efficiency of BEAST allows us to investigate its applications on LMs for jailbreaking, eliciting hallucinations, and privacy attacks. Our gradient-free targeted attack can jailbreak aligned LMs with high attack success rates within one minute. For instance, BEAST can jailbreak Vicuna-7B-v1.5 within one minute with a success rate of 89%, compared to a gradient-based baseline that takes over an hour to achieve a 70% success rate using a single Nvidia RTX A6000 48GB GPU. Additionally, we discover a unique outcome wherein our untargeted attack induces hallucinations in LM chatbots. Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack. We also find that 22% of the time, BEAST causes Vicuna to generate outputs that are not relevant to the original prompt. Further, we use BEAST to generate adversarial prompts in a few seconds that can boost the performance of existing membership inference attacks for LMs. We believe that our fast attack, BEAST, has the potential to accelerate research in LM security and privacy. Our codebase is publicly available at https://github.com/vinusankars/BEAST.
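A heavily simplified, generic sketch of the search pattern named in the abstract: gradient-free beam search over candidate suffix tokens. The vocabulary and the scoring stub are placeholders; a real attack would instead score candidates with the target LM, and none of the BEAST-specific details are reproduced here:

```python
# Generic beam search over token suffixes maximizing a black-box score.
import heapq
import random

rng = random.Random(0)
VOCAB = ["please", "sure", "ignore", "story", "hypothetically", "!!"]

def score(suffix):
    # Stand-in objective; a real attack queries the target LM here.
    return random.Random(hash(suffix)).random()

def beam_search(steps=4, beam_width=3, branch=2):
    beams = [((), 0.0)]
    for _ in range(steps):
        cands = [(s + (t,), score(s + (t,)))
                 for s, _ in beams for t in rng.sample(VOCAB, branch)]
        beams = heapq.nlargest(beam_width, cands, key=lambda c: c[1])
    return beams[0]

print(beam_search())   # best suffix found and its (stub) score
```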
2110.07141
Peihao Wang
Peihao Wang, Yuehao Wang, Hua Lin, Jianbo Shi
SoGCN: Second-Order Graph Convolutional Networks
15 pages, 7 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Convolutional Networks (GCNs) with multi-hop aggregation are more expressive than one-hop GCNs but suffer from higher model complexity. Finding the shortest aggregation range that achieves comparable expressiveness and minimizes this side effect remains an open question. We answer this question by showing that multi-layer second-order graph convolution (SoGC) is sufficient to express polynomial spectral filters with arbitrary coefficients. Compared to models with one-hop aggregation, multi-hop propagation, and jump connections, SoGC possesses filter representational completeness while being lightweight, efficient, and easy to implement. We therefore suggest that SoGC is a simple design capable of forming the basic building block of GCNs, playing the same role as $3 \times 3$ kernels in CNNs. To validate these points, we build our Second-Order Graph Convolutional Network (SoGCN) with SoGC and design a synthetic dataset to verify its filter-fitting capability. For real-world tasks, we present the state-of-the-art performance of SoGCN on benchmark node classification, graph classification, and graph regression datasets.
[ { "created": "Thu, 14 Oct 2021 03:56:34 GMT", "version": "v1" } ]
2021-10-15
[ [ "Wang", "Peihao", "" ], [ "Wang", "Yuehao", "" ], [ "Lin", "Hua", "" ], [ "Shi", "Jianbo", "" ] ]
Graph Convolutional Networks (GCNs) with multi-hop aggregation are more expressive than one-hop GCNs but suffer from higher model complexity. Finding the shortest aggregation range that achieves comparable expressiveness and minimizes this side effect remains an open question. We answer this question by showing that multi-layer second-order graph convolution (SoGC) is sufficient to express polynomial spectral filters with arbitrary coefficients. Compared to models with one-hop aggregation, multi-hop propagation, and jump connections, SoGC possesses filter representational completeness while being lightweight, efficient, and easy to implement. We therefore suggest that SoGC is a simple design capable of forming the basic building block of GCNs, playing the same role as $3 \times 3$ kernels in CNNs. To validate these points, we build our Second-Order Graph Convolutional Network (SoGCN) with SoGC and design a synthetic dataset to verify its filter-fitting capability. For real-world tasks, we present the state-of-the-art performance of SoGCN on benchmark node classification, graph classification, and graph regression datasets.
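One plausible reading of a single SoGC layer is a learnable degree-2 polynomial in the normalized adjacency; a minimal NumPy sketch under that assumption (shapes, normalization, and initialization are illustrative):

```python
# One second-order graph convolution layer: X' = X W0 + A X W1 + A^2 X W2.
import numpy as np

def sogc(X, A, W0, W1, W2):
    AX = A @ X
    return X @ W0 + AX @ W1 + (A @ AX) @ W2   # no nonlinearity for brevity

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
A = rng.integers(0, 2, (n, n))
A = ((A + A.T) > 0).astype(float)             # random undirected graph
deg = A.sum(axis=1)
A = A / np.sqrt(np.outer(deg, deg) + 1e-9)    # symmetric normalization
X = rng.normal(size=(n, d_in))
Ws = [rng.normal(scale=0.1, size=(d_in, d_out)) for _ in range(3)]
print(sogc(X, A, *Ws).shape)                  # (5, 3)
```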
1901.01815
Varun Mathur
Varun Mathur
Literature Review: Smart Contract Semantics
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This review presents and evaluates various formalisms for modelling the semantics of financial derivatives contracts. The formalism proposed by Lee is selected as the best candidate among those initially reviewed, and is then examined and evaluated in further detail.
[ { "created": "Sat, 22 Dec 2018 07:36:49 GMT", "version": "v1" } ]
2019-01-08
[ [ "Mathur", "Varun", "" ] ]
This review presents and evaluates various formalisms for modelling the semantics of financial derivatives contracts. The formalism proposed by Lee is selected as the best candidate among those initially reviewed, and is then examined and evaluated in further detail.
1105.2800
Afzal Godil
Afzal Godil, Sandy Ressler
Retrieval and Clustering from a 3D Human Database based on Body and Head Shape
Published in Proceedings of the 2006 Digital Human Modeling for Design and Engineering Conference, July 2006, Lyon, FRANCE, Session: Advanced Size/Shape Analysis Paper Number: 2006-01-2355 http://papers.sae.org/2006-01-2355
null
10.4271/2006-01-2355
null
cs.CV cs.CG
http://creativecommons.org/licenses/publicdomain/
In this paper, we describe a framework for similarity-based retrieval and clustering from a 3D human database. Our technique represents both body and head shape, and retrieval is based on the similarity of both. The 3D human database used in our study is the CAESAR anthropometric database, which contains approximately 5000 bodies. We have developed a web-based interface for specifying queries and interacting with the retrieval system. Our approach performs similarity-based retrieval in a reasonable amount of time and is practical.
[ { "created": "Fri, 13 May 2011 18:38:36 GMT", "version": "v1" } ]
2011-05-16
[ [ "Godil", "Afzal", "" ], [ "Ressler", "Sandy", "" ] ]
In this paper, we describe a framework for similarity-based retrieval and clustering from a 3D human database. Our technique represents both body and head shape, and retrieval is based on the similarity of both. The 3D human database used in our study is the CAESAR anthropometric database, which contains approximately 5000 bodies. We have developed a web-based interface for specifying queries and interacting with the retrieval system. Our approach performs similarity-based retrieval in a reasonable amount of time and is practical.
2301.08325
Heeyoul Choi
Namjin Seo, DongNyeong Heo, Heeyoul Choi
Advanced Scaling Methods for VNF deployment with Reinforcement Learning
27 pages
null
null
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network function virtualization (NFV) and software-defined networking (SDN) have become emerging network paradigms, allowing virtualized network function (VNF) deployment at a low cost. Even though VNF deployment can be flexible, it is still challenging to optimize VNF deployment due to its high complexity. Several studies have approached the task as dynamic programming, e.g., integer linear programming (ILP). However, optimizing VNF deployment for highly complex networks remains a challenge. Alternatively, reinforcement learning (RL) based approaches have been proposed to optimize this task, in particular scaling-action-based methods, which can deploy VNFs in less computational time. However, the model architecture can be improved further to generalize to different networking settings. In this paper, we propose an enhanced model that can be adapted to more general network settings. We adopt an improved GNN architecture and a few techniques to obtain a better node representation for the VNF deployment task. Furthermore, we apply a recently proposed RL method, phasic policy gradient (PPG), to leverage the shared representation of the service function chain (SFC) generation model from the value function. We evaluate the proposed method in various scenarios, achieving better QoS with minimum resource utilization compared to previous methods. Finally, as a qualitative evaluation, we analyze our proposed encoder's representation of the nodes, which shows a more disentangled representation.
[ { "created": "Thu, 19 Jan 2023 21:31:23 GMT", "version": "v1" } ]
2023-01-23
[ [ "Seo", "Namjin", "" ], [ "Heo", "DongNyeong", "" ], [ "Choi", "Heeyoul", "" ] ]
Network function virtualization (NFV) and software-defined networking (SDN) have become emerging network paradigms, allowing virtualized network function (VNF) deployment at a low cost. Even though VNF deployment can be flexible, it is still challenging to optimize VNF deployment due to its high complexity. Several studies have approached the task as dynamic programming, e.g., integer linear programming (ILP). However, optimizing VNF deployment for highly complex networks remains a challenge. Alternatively, reinforcement learning (RL) based approaches have been proposed to optimize this task, in particular scaling-action-based methods, which can deploy VNFs in less computational time. However, the model architecture can be improved further to generalize to different networking settings. In this paper, we propose an enhanced model that can be adapted to more general network settings. We adopt an improved GNN architecture and a few techniques to obtain a better node representation for the VNF deployment task. Furthermore, we apply a recently proposed RL method, phasic policy gradient (PPG), to leverage the shared representation of the service function chain (SFC) generation model from the value function. We evaluate the proposed method in various scenarios, achieving better QoS with minimum resource utilization compared to previous methods. Finally, as a qualitative evaluation, we analyze our proposed encoder's representation of the nodes, which shows a more disentangled representation.
2304.01517
Xu Chen
Xu Chen, Zhiyong Feng, Zhiqing Wei, Ping Zhang, and Xin Yuan
Code-Division OFDM Joint Communication and Sensing System for 6G Machine-type Communication
13 pages,16 figures
IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12093-12105, Feb. 2021
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The joint communication and sensing (JCS) system can provide higher spectrum efficiency and load-saving for 6G machine-type communication (MTC) applications by merging necessary communication and sensing abilities with unified spectrum and transceivers. In order to suppress the mutual interference between the communication and radar sensing signals to improve the communication reliability and radar sensing accuracy, we propose a novel code-division orthogonal frequency division multiplex (CD-OFDM) JCS MTC system, where MTC users can simultaneously and continuously conduct communication and sensing with each other. We propose a novel CD-OFDM JCS signal and corresponding successive-interference-cancellation (SIC) based signal processing technique that obtains code-division multiplex (CDM) gain, which is compatible with the prevalent orthogonal frequency division multiplex (OFDM) communication system. To model the unified JCS signal transmission and reception process, we propose a novel unified JCS channel model. Finally, the simulation and numerical results are shown to verify the feasibility of the CD-OFDM JCS MTC system and the error propagation performance. We show that the CD-OFDM JCS MTC system can achieve not only more reliable communication but also comparably robust radar sensing compared with the precedent OFDM JCS system, especially in the low signal-to-interference-and-noise ratio (SINR) regime.
[ { "created": "Tue, 4 Apr 2023 04:00:37 GMT", "version": "v1" } ]
2023-04-05
[ [ "Chen", "Xu", "" ], [ "Feng", "Zhiyong", "" ], [ "Wei", "Zhiqing", "" ], [ "Zhang", "Ping", "" ], [ "Yuan", "Xin", "" ] ]
The joint communication and sensing (JCS) system can provide higher spectrum efficiency and load-saving for 6G machine-type communication (MTC) applications by merging necessary communication and sensing abilities with unified spectrum and transceivers. In order to suppress the mutual interference between the communication and radar sensing signals to improve the communication reliability and radar sensing accuracy, we propose a novel code-division orthogonal frequency division multiplex (CD-OFDM) JCS MTC system, where MTC users can simultaneously and continuously conduct communication and sensing with each other. We propose a novel CD-OFDM JCS signal and corresponding successive-interference-cancellation (SIC) based signal processing technique that obtains code-division multiplex (CDM) gain, which is compatible with the prevalent orthogonal frequency division multiplex (OFDM) communication system. To model the unified JCS signal transmission and reception process, we propose a novel unified JCS channel model. Finally, the simulation and numerical results are shown to verify the feasibility of the CD-OFDM JCS MTC system and the error propagation performance. We show that the CD-OFDM JCS MTC system can achieve not only more reliable communication but also comparably robust radar sensing compared with the precedent OFDM JCS system, especially in the low signal-to-interference-and-noise ratio (SINR) regime.
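A generic illustration of the successive-interference-cancellation idea mentioned above, on a toy two-user BPSK superposition rather than the paper's CD-OFDM receiver: decode the strong signal, re-encode and subtract it, then decode the weak one. All amplitudes and the noise level are arbitrary:

```python
# SIC on two superposed BPSK streams with unequal power.
import numpy as np

rng = np.random.default_rng(1)
bits1 = rng.integers(0, 2, 8)
bits2 = rng.integers(0, 2, 8)
s1 = 2.0 * (2 * bits1 - 1)            # strong user, higher power
s2 = 0.5 * (2 * bits2 - 1)            # weak user
y = s1 + s2 + 0.1 * rng.normal(size=8)

b1_hat = (y > 0).astype(int)          # 1) decode the strong signal first
y_res = y - 2.0 * (2 * b1_hat - 1)    # 2) re-encode it and subtract
b2_hat = (y_res > 0).astype(int)      # 3) decode the weak signal from residual

print((b1_hat == bits1).all(), (b2_hat == bits2).all())
```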
2403.14510
Ahmed El Gazzar
Ahmed ElGazzar and Marcel van Gerven
Universal Differential Equations as a Common Modeling Language for Neuroscience
23 pages, 3 figures
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
The unprecedented availability of large-scale datasets in neuroscience has spurred the exploration of artificial deep neural networks (DNNs) both as empirical tools and as models of natural neural systems. Their appeal lies in their ability to approximate arbitrary functions directly from observations, circumventing the need for cumbersome mechanistic modeling. However, without appropriate constraints, DNNs risk producing implausible models, diminishing their scientific value. Moreover, the interpretability of DNNs poses a significant challenge, particularly with the adoption of more complex expressive architectures. In this perspective, we argue for universal differential equations (UDEs) as a unifying approach for model development and validation in neuroscience. UDEs view differential equations as parameterizable, differentiable mathematical objects that can be augmented and trained with scalable deep learning techniques. This synergy facilitates the integration of decades of extensive literature in calculus, numerical analysis, and neural modeling with emerging advancements in AI into a potent framework. We provide a primer on this burgeoning topic in scientific machine learning and demonstrate how UDEs fill in a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience. We outline a flexible recipe for modeling neural systems with UDEs and discuss how they can offer principled solutions to inherent challenges across diverse neuroscience applications such as understanding neural computation, controlling neural systems, neural decoding, and normative modeling.
[ { "created": "Thu, 21 Mar 2024 16:07:30 GMT", "version": "v1" } ]
2024-03-22
[ [ "ElGazzar", "Ahmed", "" ], [ "van Gerven", "Marcel", "" ] ]
The unprecedented availability of large-scale datasets in neuroscience has spurred the exploration of artificial deep neural networks (DNNs) both as empirical tools and as models of natural neural systems. Their appeal lies in their ability to approximate arbitrary functions directly from observations, circumventing the need for cumbersome mechanistic modeling. However, without appropriate constraints, DNNs risk producing implausible models, diminishing their scientific value. Moreover, the interpretability of DNNs poses a significant challenge, particularly with the adoption of more complex expressive architectures. In this perspective, we argue for universal differential equations (UDEs) as a unifying approach for model development and validation in neuroscience. UDEs view differential equations as parameterizable, differentiable mathematical objects that can be augmented and trained with scalable deep learning techniques. This synergy facilitates the integration of decades of extensive literature in calculus, numerical analysis, and neural modeling with emerging advancements in AI into a potent framework. We provide a primer on this burgeoning topic in scientific machine learning and demonstrate how UDEs fill in a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience. We outline a flexible recipe for modeling neural systems with UDEs and discuss how they can offer principled solutions to inherent challenges across diverse neuroscience applications such as understanding neural computation, controlling neural systems, neural decoding, and normative modeling.
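The core UDE construction described above, as a minimal sketch: a known mechanistic term plus an unknown neural correction, integrated with forward Euler. The MLP here is untrained to keep the example short; in practice its parameters are fit by differentiating through the solver:

```python
# Universal differential equation: dx/dt = -x (mechanistic) + NN(x) (learned).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2)); b2 = np.zeros(2)

def nn(x):                              # small MLP correction term
    return np.tanh(x @ W1 + b1) @ W2 + b2

def f(x):                               # known leak plus unknown correction
    return -x + nn(x)

x, dt = np.array([1.0, -0.5]), 0.01
for _ in range(1000):                   # forward Euler over 10 time units
    x = x + dt * f(x)
print(x)
```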
2203.04045
Ruijie Yan
Ruijie Yan, Shuang Peng, Haitao Mi, Liang Jiang, Shihui Yang, Yuchi Zhang, Jiajun Li, Liangrui Peng, Yongliang Wang, Zujie Wen
Towards Generalized Models for Task-oriented Dialogue Modeling on Spoken Conversations
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building robust and general dialogue models for spoken conversations is challenging due to the gap in distributions of spoken and written data. This paper presents our approach to build generalized models for the Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations Challenge of DSTC-10. In order to mitigate the discrepancies between spoken and written text, we mainly employ extensive data augmentation strategies on written data, including artificial error injection and round-trip text-speech transformation. To train robust models for spoken conversations, we improve pre-trained language models, and apply ensemble algorithms for each sub-task. Typically, for the detection task, we fine-tune \roberta and ELECTRA, and run an error-fixing ensemble algorithm. For the selection task, we adopt a two-stage framework that consists of entity tracking and knowledge ranking, and propose a multi-task learning method to learn multi-level semantic information by domain classification and entity selection. For the generation task, we adopt a cross-validation data process to improve pre-trained generative language models, followed by a consensus decoding algorithm, which can add arbitrary features like relative \rouge metric, and tune associated feature weights toward \bleu directly. Our approach ranks third on the objective evaluation and second on the final official human evaluation.
[ { "created": "Tue, 8 Mar 2022 12:26:57 GMT", "version": "v1" } ]
2022-03-09
[ [ "Yan", "Ruijie", "" ], [ "Peng", "Shuang", "" ], [ "Mi", "Haitao", "" ], [ "Jiang", "Liang", "" ], [ "Yang", "Shihui", "" ], [ "Zhang", "Yuchi", "" ], [ "Li", "Jiajun", "" ], [ "Peng", "Liangrui", "" ], [ "Wang", "Yongliang", "" ], [ "Wen", "Zujie", "" ] ]
Building robust and general dialogue models for spoken conversations is challenging due to the gap in distributions of spoken and written data. This paper presents our approach to build generalized models for the Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations Challenge of DSTC-10. In order to mitigate the discrepancies between spoken and written text, we mainly employ extensive data augmentation strategies on written data, including artificial error injection and round-trip text-speech transformation. To train robust models for spoken conversations, we improve pre-trained language models, and apply ensemble algorithms for each sub-task. Typically, for the detection task, we fine-tune \roberta and ELECTRA, and run an error-fixing ensemble algorithm. For the selection task, we adopt a two-stage framework that consists of entity tracking and knowledge ranking, and propose a multi-task learning method to learn multi-level semantic information by domain classification and entity selection. For the generation task, we adopt a cross-validation data process to improve pre-trained generative language models, followed by a consensus decoding algorithm, which can add arbitrary features like relative \rouge metric, and tune associated feature weights toward \bleu directly. Our approach ranks third on the objective evaluation and second on the final official human evaluation.
1612.04463
Xiaohu Ge
Jiaqi Chen, Fen Bin, Xiaohu Ge, Qiang Li, Cheng-Xiang Wang
A Dual-Directional Path-loss Model in 5G Wireless Fractal Small Cell Networks
null
null
null
null
cs.NI cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
With the anticipated increase in the number of low-power base stations (BSs) deployed in small cell networks, blockage effects becoming more significant for wireless transmissions over high-frequency spectrum, and variable propagation fading scenarios, it is hard to characterize the coverage of small cell networks. In this paper, we propose a dual-directional path loss model incorporating Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) transmissions for fifth generation (5G) fractal small cell networks. Based on the proposed path loss model, a LoS transmission probability is derived as a function of the coordinate azimuth of the BS and the distance between the mobile user (MU) and the BS. Moreover, the coverage probability and the average achievable rate are analyzed for 5G fractal small cell networks. Numerical results imply that the minimum intensity of blockages and the maximum intensity of BSs cannot guarantee the maximum average achievable rate in 5G fractal small cell networks. Our results explore the relationship between the anisotropic path loss fading and the small cell coverage in 5G fractal small cell networks.
[ { "created": "Wed, 14 Dec 2016 02:42:12 GMT", "version": "v1" } ]
2016-12-19
[ [ "Chen", "Jiaqi", "" ], [ "Bin", "Fen", "" ], [ "Ge", "Xiaohu", "" ], [ "Li", "Qiang", "" ], [ "Wang", "Cheng-Xiang", "" ] ]
With the anticipated increase in the number of low-power base stations (BSs) deployed in small cell networks, blockage effects becoming more significant for wireless transmissions over high-frequency spectrum, and variable propagation fading scenarios, it is hard to characterize the coverage of small cell networks. In this paper, we propose a dual-directional path loss model incorporating Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) transmissions for fifth generation (5G) fractal small cell networks. Based on the proposed path loss model, a LoS transmission probability is derived as a function of the coordinate azimuth of the BS and the distance between the mobile user (MU) and the BS. Moreover, the coverage probability and the average achievable rate are analyzed for 5G fractal small cell networks. Numerical results imply that the minimum intensity of blockages and the maximum intensity of BSs cannot guarantee the maximum average achievable rate in 5G fractal small cell networks. Our results explore the relationship between the anisotropic path loss fading and the small cell coverage in 5G fractal small cell networks.
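A generic sketch of the kind of distance-dependent LoS/NLoS path-loss mixture the abstract builds on. The exponential LoS-probability form and all constants are 3GPP-style assumptions; the paper's dual-directional model additionally depends on the BS coordinate azimuth, which is omitted here:

```python
# Expected path loss as a LoS/NLoS mixture weighted by a LoS probability.
import numpy as np

def p_los(d, beta=0.01):                   # assumed blockage-density parameter
    return np.exp(-beta * d)               # generic exponential LoS probability

def path_loss_db(d, f_ghz=28.0):
    fspl_1m = 32.4 + 20 * np.log10(f_ghz)     # free-space reference at 1 m
    pl_los = fspl_1m + 21.0 * np.log10(d)     # assumed LoS exponent 2.1
    pl_nlos = fspl_1m + 31.9 * np.log10(d)    # assumed NLoS exponent 3.19
    p = p_los(d)
    return p * pl_los + (1 - p) * pl_nlos     # expected path loss at distance d

for d in (10.0, 50.0, 200.0):
    print(f"{d:6.1f} m: {path_loss_db(d):6.1f} dB")
```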
2308.01574
Andreas Bj\"orklund
Andreas Bj\"orklund, Petteri Kaski, and Jesper Nederlof
Another Hamiltonian Cycle in Bipartite Pfaffian Graphs
Adds analysis of Thomason's lollipop method
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a Hamiltonian cycle in a given graph is computationally challenging, and in general remains so even when one is further given one Hamiltonian cycle in the graph and asked to find another. In fact, no significantly faster algorithms are known for finding another Hamiltonian cycle than for finding a first one even in the setting where another Hamiltonian cycle is structurally guaranteed to exist, such as for odd-degree graphs. We identify a graph class -- the bipartite Pfaffian graphs of minimum degree three -- where it is NP-complete to decide whether a given graph in the class is Hamiltonian, but when presented with a Hamiltonian cycle as part of the input, another Hamiltonian cycle can be found efficiently. We prove that Thomason's lollipop method~[Ann.~Discrete Math.,~1978], a well-known algorithm for finding another Hamiltonian cycle, runs in a linear number of steps in cubic bipartite Pfaffian graphs. This was conjectured for cubic bipartite planar graphs by Haddadan [MSc~thesis,~Waterloo,~2015]; in contrast, examples are known of both cubic bipartite graphs and cubic planar graphs where the lollipop method takes exponential time. Beyond the lollipop method, we address a slightly more general graph class and present two algorithms, one running in linear time and one operating in logarithmic space, that take as input (i) a bipartite Pfaffian graph $G$ of minimum degree three, (ii) a Hamiltonian cycle $H$ in $G$, and (iii) an edge $e$ in $H$, and output at least three other Hamiltonian cycles through the edge $e$ in $G$. We also present further improved algorithms for finding optimal traveling salesperson tours and counting Hamiltonian cycles in bipartite planar graphs with running times that are not known to hold in general planar graphs.
[ { "created": "Thu, 3 Aug 2023 07:22:12 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2024 04:44:55 GMT", "version": "v2" } ]
2024-02-23
[ [ "Björklund", "Andreas", "" ], [ "Kaski", "Petteri", "" ], [ "Nederlof", "Jesper", "" ] ]
Finding a Hamiltonian cycle in a given graph is computationally challenging, and in general remains so even when one is further given one Hamiltonian cycle in the graph and asked to find another. In fact, no significantly faster algorithms are known for finding another Hamiltonian cycle than for finding a first one even in the setting where another Hamiltonian cycle is structurally guaranteed to exist, such as for odd-degree graphs. We identify a graph class -- the bipartite Pfaffian graphs of minimum degree three -- where it is NP-complete to decide whether a given graph in the class is Hamiltonian, but when presented with a Hamiltonian cycle as part of the input, another Hamiltonian cycle can be found efficiently. We prove that Thomason's lollipop method~[Ann.~Discrete Math.,~1978], a well-known algorithm for finding another Hamiltonian cycle, runs in a linear number of steps in cubic bipartite Pfaffian graphs. This was conjectured for cubic bipartite planar graphs by Haddadan [MSc~thesis,~Waterloo,~2015]; in contrast, examples are known of both cubic bipartite graphs and cubic planar graphs where the lollipop method takes exponential time. Beyond the lollipop method, we address a slightly more general graph class and present two algorithms, one running in linear time and one operating in logarithmic space, that take as input (i) a bipartite Pfaffian graph $G$ of minimum degree three, (ii) a Hamiltonian cycle $H$ in $G$, and (iii) an edge $e$ in $H$, and output at least three other Hamiltonian cycles through the edge $e$ in $G$. We also present further improved algorithms for finding optimal traveling salesperson tours and counting Hamiltonian cycles in bipartite planar graphs with running times that are not known to hold in general planar graphs.
1705.06463
Hongmin Wang
Hongmin Wang, Yue Zhang, GuangYong Leonard Chan, Jie Yang, Hai Leong Chieu
Universal Dependencies Parsing for Colloquial Singaporean English
Accepted by ACL 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Singlish can be interesting to the ACL community both linguistically, as a major creole based on English, and computationally, for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model by integrating English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge can lead to a 25% relative error reduction, resulting in a parser with 84.47% accuracy. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing on low-resource languages. We make both our annotation and parser available for further research.
[ { "created": "Thu, 18 May 2017 08:27:42 GMT", "version": "v1" } ]
2017-05-19
[ [ "Wang", "Hongmin", "" ], [ "Zhang", "Yue", "" ], [ "Chan", "GuangYong Leonard", "" ], [ "Yang", "Jie", "" ], [ "Chieu", "Hai Leong", "" ] ]
Singlish can be interesting to the ACL community both linguistically, as a major creole based on English, and computationally, for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model by integrating English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge can lead to a 25% relative error reduction, resulting in a parser with 84.47% accuracy. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing on low-resource languages. We make both our annotation and parser available for further research.
1401.2482
Marko Horvat
Marko Horvat, Nikola Bogunovi\'c, Kre\v{s}imir \'Cosi\'c
STIMONT: A core ontology for multimedia stimuli description
27 pages, 13 figures
Multimedia tools and applications, 11042, July 2013
10.1007/s11042-013-1624-4
null
cs.MM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Affective multimedia documents such as images, sounds or videos elicit emotional responses in exposed human subjects. These stimuli are stored in affective multimedia databases and successfully used for a wide variety of research in psychology and neuroscience in areas related to attention and emotion processing. Although important, all affective multimedia databases have numerous deficiencies which impair their applicability. These problems, which are brought forward in the paper, result in low recall and precision of multimedia stimuli retrieval, which makes creating emotion elicitation procedures difficult and labor-intensive. To address these issues, a new core ontology, STIMONT, is introduced. STIMONT is written in the OWL-DL formalism and extends the W3C EmotionML format with an expressive and formal representation of affective concepts, high-level semantics, stimuli document metadata and the elicited physiology. The advantages of the ontology in the description of affective multimedia stimuli are demonstrated in a document retrieval experiment and compared against contemporary keyword-based querying methods. Also, a software tool, Intelligent Stimulus Generator, for the retrieval of affective multimedia and construction of stimuli sequences is presented.
[ { "created": "Fri, 10 Jan 2014 23:36:51 GMT", "version": "v1" } ]
2014-01-14
[ [ "Horvat", "Marko", "" ], [ "Bogunović", "Nikola", "" ], [ "Ćosić", "Krešimir", "" ] ]
Affective multimedia documents such as images, sounds or videos elicit emotional responses in exposed human subjects. These stimuli are stored in affective multimedia databases and successfully used for a wide variety of research in psychology and neuroscience in areas related to attention and emotion processing. Although important, all affective multimedia databases have numerous deficiencies which impair their applicability. These problems, which are brought forward in the paper, result in low recall and precision of multimedia stimuli retrieval, which makes creating emotion elicitation procedures difficult and labor-intensive. To address these issues, a new core ontology, STIMONT, is introduced. STIMONT is written in the OWL-DL formalism and extends the W3C EmotionML format with an expressive and formal representation of affective concepts, high-level semantics, stimuli document metadata and the elicited physiology. The advantages of the ontology in the description of affective multimedia stimuli are demonstrated in a document retrieval experiment and compared against contemporary keyword-based querying methods. Also, a software tool, Intelligent Stimulus Generator, for the retrieval of affective multimedia and construction of stimuli sequences is presented.
2105.03172
Hlynur Dav{\i}{\dh} Hlynsson
Hlynur Dav\'i{\dh} Hlynsson, Laurenz Wiskott
Reward prediction for representation learning and reward shaping
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
One of the fundamental challenges in reinforcement learning (RL) is that of data efficiency: modern algorithms require a very large number of training samples, especially compared to humans, to solve environments with high-dimensional observations. The severity of this problem increases when the reward signal is sparse. In this work, we propose learning a state representation in a self-supervised manner for reward prediction. The reward predictor learns to estimate either a raw or a smoothed version of the true reward signal in environments with a single terminating goal state. We augment the training of out-of-the-box RL agents by shaping the reward using our reward predictor during policy learning. Using our representation for preprocessing high-dimensional observations, as well as using the predictor for reward shaping, is shown to significantly enhance Actor Critic using Kronecker-factored Trust Region and Proximal Policy Optimization in single-goal environments with visual inputs.
[ { "created": "Fri, 7 May 2021 11:29:32 GMT", "version": "v1" } ]
2021-05-10
[ [ "Hlynsson", "Hlynur Davíð", "" ], [ "Wiskott", "Laurenz", "" ] ]
One of the fundamental challenges in reinforcement learning (RL) is that of data efficiency: modern algorithms require a very large number of training samples, especially compared to humans, to solve environments with high-dimensional observations. The severity of this problem increases when the reward signal is sparse. In this work, we propose learning a state representation in a self-supervised manner for reward prediction. The reward predictor learns to estimate either a raw or a smoothed version of the true reward signal in environments with a single terminating goal state. We augment the training of out-of-the-box RL agents by shaping the reward using our reward predictor during policy learning. Using our representation for preprocessing high-dimensional observations, as well as using the predictor for reward shaping, is shown to significantly enhance Actor Critic using Kronecker-factored Trust Region and Proximal Policy Optimization in single-goal environments with visual inputs.
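A minimal sketch of reward shaping with a learned reward predictor, as described above: the shaped reward adds a scaled prediction for the current observation. The linear predictor and the online update rule are simplifications of the paper's self-supervised setup:

```python
# Reward shaping: r' = r_env + beta * r_hat(obs), with r_hat learned online.
import numpy as np

class RewardPredictor:
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def predict(self, obs):
        return float(self.w @ obs)
    def update(self, obs, r, lr=0.01):        # simple online regression step
        err = r - self.predict(obs)
        self.w += lr * err * obs

def shaped_reward(r_env, obs, predictor, beta=0.1):
    return r_env + beta * predictor.predict(obs)

pred = RewardPredictor(dim=4)
obs, r_env = np.array([0.2, -1.0, 0.5, 0.0]), 0.0
pred.update(obs, r_env)                        # fit on the observed reward
print(shaped_reward(r_env, obs, pred))         # reward handed to the RL agent
```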
2104.09799
Zhu Bo
Zhu Bo, Rang Liu, Ming Li, and Qian Liu
Deep Learning based Efficient Symbol-Level Precoding Design for MU-MISO Systems
5 pages, 5 figures, 2 tables, submitted to IEEE Transactions on Vehicular Technology
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-sa/4.0/
The recently emerged symbol-level precoding (SLP) technique has been regarded as a promising solution in multi-user wireless communication systems, since it can convert harmful multi-user interference (MUI) into beneficial signals for enhancing system performance. However, the tremendous computational complexity of conventional symbol-level precoding designs severely hinders practical implementation. In order to tackle this difficulty, we propose a novel deep learning (DL) based approach to efficiently design the symbol-level precoders. Specifically, in this correspondence, we consider a multi-user multi-input single-output (MU-MISO) downlink system. An efficient precoding neural network (EPNN) is introduced to optimize the symbol-level precoders for maximizing the minimum quality-of-service (QoS) of all users under the power constraint. Simulation results demonstrate that the proposed EPNN based SLP design can dramatically reduce the computing time at the price of a slight performance loss compared with the conventional convex optimization based SLP design.
[ { "created": "Tue, 20 Apr 2021 07:24:59 GMT", "version": "v1" } ]
2021-04-21
[ [ "Bo", "Zhu", "" ], [ "Liu", "Rang", "" ], [ "Li", "Ming", "" ], [ "Liu", "Qian", "" ] ]
The recently emerged symbol-level precoding (SLP) technique has been regarded as a promising solution in multi-user wireless communication systems, since it can convert harmful multi-user interference (MUI) into beneficial signals for enhancing system performance. However, the tremendous computational complexity of conventional symbol-level precoding designs severely hinders practical implementation. In order to tackle this difficulty, we propose a novel deep learning (DL) based approach to efficiently design the symbol-level precoders. Specifically, in this correspondence, we consider a multi-user multi-input single-output (MU-MISO) downlink system. An efficient precoding neural network (EPNN) is introduced to optimize the symbol-level precoders for maximizing the minimum quality-of-service (QoS) of all users under the power constraint. Simulation results demonstrate that the proposed EPNN based SLP design can dramatically reduce the computing time at the price of a slight performance loss compared with the conventional convex optimization based SLP design.
2407.12860
Jack Boylan
Aaron Zolnai-Lucas, Jack Boylan, Chris Hokamp, Parsa Ghaffari
STAGE: Simplified Text-Attributed Graph Embeddings Using Pre-trained LLMs
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We present Simplified Text-Attributed Graph Embeddings (STAGE), a straightforward yet effective method for enhancing node features in Graph Neural Network (GNN) models that encode Text-Attributed Graphs (TAGs). Our approach leverages Large-Language Models (LLMs) to generate embeddings for textual attributes. STAGE achieves competitive results on various node classification benchmarks while also maintaining a simplicity in implementation relative to current state-of-the-art (SoTA) techniques. We show that utilizing pre-trained LLMs as embedding generators provides robust features for ensemble GNN training, enabling pipelines that are simpler than current SoTA approaches which require multiple expensive training and prompting stages. We also implement diffusion-pattern GNNs in an effort to make this pipeline scalable to graphs beyond academic benchmarks.
[ { "created": "Wed, 10 Jul 2024 08:50:25 GMT", "version": "v1" } ]
2024-07-19
[ [ "Zolnai-Lucas", "Aaron", "" ], [ "Boylan", "Jack", "" ], [ "Hokamp", "Chris", "" ], [ "Ghaffari", "Parsa", "" ] ]
We present Simplified Text-Attributed Graph Embeddings (STAGE), a straightforward yet effective method for enhancing node features in Graph Neural Network (GNN) models that encode Text-Attributed Graphs (TAGs). Our approach leverages Large-Language Models (LLMs) to generate embeddings for textual attributes. STAGE achieves competitive results on various node classification benchmarks while also maintaining a simplicity in implementation relative to current state-of-the-art (SoTA) techniques. We show that utilizing pre-trained LLMs as embedding generators provides robust features for ensemble GNN training, enabling pipelines that are simpler than current SoTA approaches which require multiple expensive training and prompting stages. We also implement diffusion-pattern GNNs in an effort to make this pipeline scalable to graphs beyond academic benchmarks.
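A minimal sketch of the pipeline described above: embed each node's text once with a pre-trained encoder, then feed the frozen embeddings to a single GCN layer. The SentenceTransformer model name in the comment is an assumption; the stub keeps the example dependency-free:

```python
# Frozen text embeddings as node features for a one-layer GCN.
import numpy as np
# from sentence_transformers import SentenceTransformer   # optional, see below

def embed(texts):
    # Stand-in for a pre-trained text encoder; in practice something like
    # SentenceTransformer("all-MiniLM-L6-v2").encode(texts) (assumed name).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

texts = ["paper on GNNs", "paper on LLMs", "paper on TAGs"]
X = embed(texts)                          # frozen node features from text
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                     # add self-loops
d = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
W = np.random.default_rng(1).normal(scale=0.1, size=(8, 2))
logits = A_hat @ X @ W                    # one GCN layer over the embeddings
print(logits.shape)                       # (3, 2): per-node class scores
```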
2310.18937
Weipeng Huang
Eoin M. Kenny and Weipeng Huang
The Utility of "Even if..." Semifactual Explanation to Optimise Positive Outcomes
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
When users receive either a positive or negative outcome from an automated system, Explainable AI (XAI) has almost exclusively focused on how to mutate negative outcomes into positive ones by crossing a decision boundary using counterfactuals (e.g., "If you earn 2k more, we will accept your loan application"). Here, we instead focus on positive outcomes, and take the novel step of using XAI to optimise them (e.g., "Even if you wish to halve your down-payment, we will still accept your loan application"). Explanations such as these that employ "even if..." reasoning, and do not cross a decision boundary, are known as semifactuals. To instantiate semifactuals in this context, we introduce the concept of Gain (i.e., how much a user stands to benefit from the explanation), and consider the first causal formalisation of semifactuals. Tests on benchmark datasets show our algorithms are better at maximising gain compared to prior work, and that causality is important in the process. Most importantly however, a user study supports our main hypothesis by showing people find semifactual explanations more useful than counterfactuals when they receive the positive outcome of a loan acceptance.
[ { "created": "Sun, 29 Oct 2023 08:52:23 GMT", "version": "v1" } ]
2023-10-31
[ [ "Kenny", "Eoin M.", "" ], [ "Huang", "Weipeng", "" ] ]
When users receive either a positive or negative outcome from an automated system, Explainable AI (XAI) has almost exclusively focused on how to mutate negative outcomes into positive ones by crossing a decision boundary using counterfactuals (e.g., "If you earn 2k more, we will accept your loan application"). Here, we instead focus on positive outcomes, and take the novel step of using XAI to optimise them (e.g., "Even if you wish to halve your down-payment, we will still accept your loan application"). Explanations such as these that employ "even if..." reasoning, and do not cross a decision boundary, are known as semifactuals. To instantiate semifactuals in this context, we introduce the concept of Gain (i.e., how much a user stands to benefit from the explanation), and consider the first causal formalisation of semifactuals. Tests on benchmark datasets show our algorithms are better at maximising gain compared to prior work, and that causality is important in the process. Most importantly however, a user study supports our main hypothesis by showing people find semifactual explanations more useful than counterfactuals when they receive the positive outcome of a loan acceptance.
2011.13492
Aaron Tuor
Jan Drgona, Soumya Vasisht, Aaron Tuor, Draguna Vrabie
Dissipative Deep Neural Dynamical Systems
Under review at IEEE Open Journal of Control Systems
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we provide sufficient conditions for dissipativity and local asymptotic stability of discrete-time dynamical systems parametrized by deep neural networks. We leverage the representation of neural networks as pointwise affine maps, thus exposing their local linear operators and making them accessible to classical system analytic and design methods. This allows us to "crack open the black box" of the neural dynamical system's behavior by evaluating their dissipativity, and estimating their stationary points and state-space partitioning. We relate the norms of these local linear operators to the energy stored in the dissipative system with supply rates represented by their aggregate bias terms. Empirically, we analyze the variance in dynamical behavior and eigenvalue spectra of these local linear operators with varying weight factorizations, activation functions, bias terms, and depths.
[ { "created": "Thu, 26 Nov 2020 23:13:16 GMT", "version": "v1" }, { "created": "Thu, 27 Jan 2022 15:58:44 GMT", "version": "v2" }, { "created": "Wed, 8 Jun 2022 16:03:21 GMT", "version": "v3" } ]
2022-06-09
[ [ "Drgona", "Jan", "" ], [ "Vasisht", "Soumya", "" ], [ "Tuor", "Aaron", "" ], [ "Vrabie", "Draguna", "" ] ]
In this paper, we provide sufficient conditions for dissipativity and local asymptotic stability of discrete-time dynamical systems parametrized by deep neural networks. We leverage the representation of neural networks as pointwise affine maps, thus exposing their local linear operators and making them accessible to classical system analytic and design methods. This allows us to "crack open the black box" of the neural dynamical system's behavior by evaluating their dissipativity, and estimating their stationary points and state-space partitioning. We relate the norms of these local linear operators to the energy stored in the dissipative system with supply rates represented by their aggregate bias terms. Empirically, we analyze the variance in dynamical behavior and eigenvalue spectra of these local linear operators with varying weight factorizations, activation functions, bias terms, and depths.
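The pointwise-affine view described above, made concrete: for a ReLU network f, locally f(x) = A(x) x + b(x), where the local linear operator A(x) is just the Jacobian at x. A minimal PyTorch sketch recovering both terms (the toy architecture is an assumption):

```python
# Recover the local linear operator and bias of a ReLU net via autograd.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3))

x = torch.randn(3)
A = torch.autograd.functional.jacobian(net, x)   # local linear operator A(x)
b = net(x) - A @ x                               # local bias term b(x)

print(torch.allclose(net(x), A @ x + b, atol=1e-6))   # affine identity holds
print(torch.linalg.matrix_norm(A, 2))   # spectral norm of the local operator
```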
2209.04244
S. Hitarth
M. Praveen and S. Hitarth
Window Expressions for Stream Data Processing
null
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
Traditional ways of storing and querying data do not work well in scenarios where data is being generated continuously and quick decisions need to be taken. For example, in hospital intensive care units, signals from multiple devices need to be monitored and the occurrence of any anomaly should raise alarms immediately. A typical design would take the average over a window of, say, 10 seconds (time-based) or 10 successive (count-based) readings and look for sudden deviations. Existing stream processing systems either restrict windows to time-based or count-based ones, or let users define customized windows in imperative programming languages. These are subject to the implementers' interpretation of what is desired and hard for others to understand. We introduce a formalism for specifying windows based on Monadic Second Order logic. It offers several advantages over ad-hoc definitions written in imperative languages. We demonstrate four such advantages. First, we illustrate how practical streaming data queries can be easily written with precise semantics. Second, we can get different but expressively equivalent formalisms for defining windows. We use one of them (regular expressions) to design an end-user-friendly language for defining windows. Third, we use another expressively equivalent formalism (automata) to design a processor that automatically generates windows according to specifications. The fourth advantage we demonstrate is more sophisticated. Some window definitions have the problem of too many windows overlapping with each other, overwhelming the processing engine. This is handled in different ways by different engines, but all the options are about what to do when this happens at runtime. We study this as a static analysis question and prove that it is undecidable to check whether such a scenario can ever arise for a given window definition. We identify a decidable fragment...
[ { "created": "Fri, 9 Sep 2022 11:11:37 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 14:13:13 GMT", "version": "v2" } ]
2024-03-04
[ [ "Praveen", "M.", "" ], [ "Hitarth", "S.", "" ] ]
Traditional ways of storing and querying data do not work well in scenarios where data is being generated continuously and quick decisions need to be taken. For example, in hospital intensive care units, signals from multiple devices need to be monitored and the occurrence of any anomaly should raise alarms immediately. A typical design would take the average from a window of, say, 10 seconds (time-based) or 10 successive readings (count-based) and look for sudden deviations. Existing stream processing systems either restrict windows to time-based or count-based ones, or let users define customized windows in imperative programming languages. These are subject to the implementers' interpretation of what is desired and hard to understand for others. We introduce a formalism for specifying windows based on Monadic Second Order logic. It offers several advantages over ad-hoc definitions written in imperative languages. We demonstrate four such advantages. First, we illustrate how practical streaming data queries can be easily written with precise semantics. Second, we can get different but expressively equivalent formalisms for defining windows. We use one of them (regular expressions) to design an end-user-friendly language for defining windows. Third, we use another expressively equivalent formalism (automata) to design a processor that automatically generates windows according to specifications. The fourth advantage we demonstrate is more sophisticated. Some window definitions have the problem of too many windows overlapping with each other, overwhelming the processing engine. This is handled in different ways by different engines, but all the options are about what to do when this happens at runtime. We study this as a static analysis question and prove that it is undecidable to check whether such a scenario can ever arise for a given window definition. We identify a decidable fragment...
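The time-based and count-based windows mentioned in the abstract are easy to state operationally; the sketch below implements both in plain Python. The tumbling/sliding choices and all names are illustrative only, not the paper's MSO/regular-expression formalism.

```python
from collections import deque

def count_windows(stream, n=10):
    """Count-based sliding windows: average of the last n successive readings."""
    buf = deque(maxlen=n)
    for value in stream:
        buf.append(value)
        if len(buf) == n:
            yield sum(buf) / n

def time_windows(timed_stream, width=10.0):
    """Time-based tumbling windows: average of readings per width-second interval."""
    window, start = [], None
    for t, value in timed_stream:
        if start is None:
            start = t
        if t - start >= width:              # close the current window, open the next
            yield sum(window) / len(window)
            window, start = [], t
        window.append(value)
    if window:                              # flush the final partial window
        yield sum(window) / len(window)

readings = [(i * 1.5, 100 + (i % 7)) for i in range(30)]   # (seconds, value) pairs
print(list(time_windows(readings, width=10.0)))
print(list(count_windows((v for _, v in readings), n=10)))
```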
1011.6134
Christopher Wilkens
Christopher A. Wilkens and Balasubramanian Sivan
Single-Call Mechanisms
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Truthfulness is fragile and demanding. It is oftentimes computationally harder than solving the original problem. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism's outcome. One obstacle is that truthful payments depend on outcomes other than the one realized, such as the lengths of non-shortest-paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle --- they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a trade-off between the expected quality of the outcome and the risk of large payments. We largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff, Kleinberg, and Slivkins [BKS10] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to [BKS10]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent trade-off between the expected quality of the outcome and the risk of large payments. We show that our construction and that of [BKS10] simultaneously optimize a variety of metrics in their respective domains. As an example, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer's estimated click-through-rates are imprecise.
[ { "created": "Mon, 29 Nov 2010 06:01:10 GMT", "version": "v1" }, { "created": "Mon, 26 Mar 2012 20:53:24 GMT", "version": "v2" }, { "created": "Thu, 29 Mar 2012 15:48:19 GMT", "version": "v3" } ]
2012-03-30
[ [ "Wilkens", "Christopher A.", "" ], [ "Sivan", "Balasubramanian", "" ] ]
Truthfulness is fragile and demanding. It is oftentimes computationally harder than solving the original problem. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism's outcome. One obstacle is that truthful payments depend on outcomes other than the one realized, such as the lengths of non-shortest-paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle --- they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a trade-off between the expected quality of the outcome and the risk of large payments. We largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff, Kleinberg, and Slivkins [BKS10] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to [BKS10]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent trade-off between the expected quality of the outcome and the risk of large payments. We show that our construction and that of [BKS10] simultaneously optimize a variety of metrics in their respective domains. As an example, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer's estimated click-through-rates are imprecise.
1704.03940
Kai Hui
Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo
PACRR: A Position-Aware Neural IR Model for Relevance Matching
To appear in EMNLP2017
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query. While previous works have successfully captured unigram term matches, how to fully employ position-dependent information such as proximity and term dependencies has been insufficiently explored. In this work, we propose a novel neural IR model named PACRR aiming at better modeling position-dependent interactions between a query and a document. Extensive experiments on six years' TREC Web Track data confirm that the proposed model yields better results under multiple benchmarks.
[ { "created": "Wed, 12 Apr 2017 21:56:59 GMT", "version": "v1" }, { "created": "Sun, 7 May 2017 16:12:36 GMT", "version": "v2" }, { "created": "Fri, 21 Jul 2017 23:02:06 GMT", "version": "v3" } ]
2017-07-25
[ [ "Hui", "Kai", "" ], [ "Yates", "Andrew", "" ], [ "Berberich", "Klaus", "" ], [ "de Melo", "Gerard", "" ] ]
In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query. While previous works have successfully captured unigram term matches, how to fully employ position-dependent information such as proximity and term dependencies has been insufficiently explored. In this work, we propose a novel neural IR model named PACRR aiming at better modeling position-dependent interactions between a query and a document. Extensive experiments on six years' TREC Web Track data confirm that the proposed model yields better results under multiple benchmarks.
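PACRR's input is a query-document similarity matrix from which position-dependent matching signals are pooled. The numpy sketch below builds that matrix and applies k-max pooling per query term; the convolutional layers over n-gram windows and the final scoring network are omitted, and all dimensions are illustrative assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 32
Q = rng.normal(size=(4, dim))    # query term embeddings, |q| x dim (synthetic)
D = rng.normal(size=(50, dim))   # document term embeddings, |d| x dim (synthetic)

# Query-document cosine similarity matrix: the position-aware "image" that
# convolutional filters scan for proximity and n-gram match patterns.
Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
sim = Qn @ Dn.T                  # shape (|q|, |d|)

def kmax(row, k=3):
    """Keep the k strongest matching signals for one query term."""
    return np.sort(row)[::-1][:k]

features = np.stack([kmax(sim[i]) for i in range(sim.shape[0])])
print(features.shape)            # (|q|, k), fed to a small scoring layer in the full model
```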
0807.0908
Fionn Murtagh
Fionn Murtagh
The Correspondence Analysis Platform for Uncovering Deep Structure in Data and Information
Sixth Annual Boole Lecture in Informatics, Boole Centre for Research in Informatics, Cork, Ireland, 29 April 2008. 28 pp., 17 figures. To appear, Computer Journal. This version: 3 typos corrected
Computer Journal, 53 (3), 304-315, 2010
10.1093/comjnl/bxn045
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, "the flow of thought and the flow of language" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
[ { "created": "Sun, 6 Jul 2008 15:22:54 GMT", "version": "v1" }, { "created": "Tue, 2 Sep 2008 17:07:52 GMT", "version": "v2" } ]
2011-01-11
[ [ "Murtagh", "Fionn", "" ] ]
We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, "the flow of thought and the flow of language" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
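The Euclidean embedding via Correspondence Analysis mentioned above can be computed directly from a cross-tabulation by an SVD of the standardized residuals; a minimal numpy version follows (the example contingency table is invented).

```python
import numpy as np

def correspondence_analysis(N):
    """Classic CA: chi-squared (Euclidean) embedding of a contingency table."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)            # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]          # principal row coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]       # principal column coordinates
    return rows, cols, sv**2                       # coordinates plus principal inertias

N = np.array([[30., 10., 5.],
              [10., 40., 10.],
              [ 5., 10., 30.]])
rows, cols, inertia = correspondence_analysis(N)
print(inertia / inertia.sum())                     # share of inertia per dimension
```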
0912.3926
William Jackson
P. Balasubramanie, M. Lilly Florence
Application of Radial Basis Network Model for HIV/AIDs Regimen Specifications
null
Journal of Computing, Volume 1, Issue 1, pp 136-140, December 2009
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
HIV/AIDs Regimen specification one of many problems for which bioinformaticians have implemented and trained machine learning methods such as neural networks. Predicting HIV resistance would be much easier, but unfortunately we rarely have enough structural information available to train a neural network. To network model designed to predict how long the HIV patient can prolong his/her life time with certain regimen specification. To learn this model 300 patient's details have taken as a training set to train the network and 100 patients medical history has taken to test this model. This network model is trained using MAT lab implementation.
[ { "created": "Sat, 19 Dec 2009 18:54:56 GMT", "version": "v1" } ]
2009-12-22
[ [ "Balasubramanie", "P.", "" ], [ "Florence", "M. Lilly", "" ] ]
HIV/AIDs Regimen specification one of many problems for which bioinformaticians have implemented and trained machine learning methods such as neural networks. Predicting HIV resistance would be much easier, but unfortunately we rarely have enough structural information available to train a neural network. To network model designed to predict how long the HIV patient can prolong his/her life time with certain regimen specification. To learn this model 300 patient's details have taken as a training set to train the network and 100 patients medical history has taken to test this model. This network model is trained using MAT lab implementation.
2004.11657
Martin G\"unther
Till Grenzd\"orffer, Martin G\"unther and Joachim Hertzberg
YCB-M: A Multi-Camera RGB-D Dataset for Object Recognition and 6DoF Pose Estimation
Published at ICRA-2020
null
10.1109/ICRA40945.2020.9197426
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While a great variety of 3D cameras have been introduced in recent years, most publicly available datasets for object recognition and pose estimation focus on one single camera. In this work, we present a dataset of 32 scenes that have been captured by 7 different 3D cameras, totaling 49,294 frames. This allows evaluating the sensitivity of pose estimation algorithms to the specifics of the used camera and the development of more robust algorithms that are more independent of the camera model. Vice versa, our dataset enables researchers to perform a quantitative comparison of the data from several different cameras and depth sensing technologies and evaluate their algorithms before selecting a camera for their specific task. The scenes in our dataset contain 20 different objects from the common benchmark YCB object and model set [1], [2]. We provide full ground truth 6DoF poses for each object, per-pixel segmentation, 2D and 3D bounding boxes and a measure of the amount of occlusion of each object. We have also performed an initial evaluation of the cameras using our dataset on a state-of-the-art object recognition and pose estimation system [3].
[ { "created": "Fri, 24 Apr 2020 11:14:04 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2020 07:58:18 GMT", "version": "v2" } ]
2020-09-30
[ [ "Grenzdörffer", "Till", "" ], [ "Günther", "Martin", "" ], [ "Hertzberg", "Joachim", "" ] ]
While a great variety of 3D cameras have been introduced in recent years, most publicly available datasets for object recognition and pose estimation focus on one single camera. In this work, we present a dataset of 32 scenes that have been captured by 7 different 3D cameras, totaling 49,294 frames. This allows evaluating the sensitivity of pose estimation algorithms to the specifics of the used camera and the development of more robust algorithms that are more independent of the camera model. Vice versa, our dataset enables researchers to perform a quantitative comparison of the data from several different cameras and depth sensing technologies and evaluate their algorithms before selecting a camera for their specific task. The scenes in our dataset contain 20 different objects from the common benchmark YCB object and model set [1], [2]. We provide full ground truth 6DoF poses for each object, per-pixel segmentation, 2D and 3D bounding boxes and a measure of the amount of occlusion of each object. We have also performed an initial evaluation of the cameras using our dataset on a state-of-the-art object recognition and pose estimation system [3].
1907.06633
Apdullah Yayik
Apdullah Yay{\i}k, Yakup Kutlu, G\"okhan Altan
On improving learning capability of ELM and an application to brain-computer interface
11 pages, 6 figures, Neural Computing and Application, Springer (under-review)
null
10.13140/RG.2.2.30778.34248
null
cs.LG eess.SP stat.ML
http://creativecommons.org/licenses/by/4.0/
As a type of pseudoinverse learning, the extreme learning machine (ELM) is able to achieve high performance at a rapid pace on benchmark datasets. However, when it is applied to real-life large data, a performance decline related to the low convergence of the singular value decomposition (SVD) method occurs. Our study aims to resolve this issue by replacing SVD with five theoretically and empirically more efficient methods: lower-upper triangularization, Hessenberg decomposition, Schur decomposition, the modified Gram-Schmidt algorithm and Householder reflection. Comparisons were made on an electroencephalography-based brain-computer interface classification problem to decide which method is the most useful. Results of subject-based classifications suggested that the Hessenberg decomposition method should be preferred if priority is given to training pace, whereas the Householder reflection method should be preferred if priority is given to performance.
[ { "created": "Sun, 14 Jul 2019 09:28:07 GMT", "version": "v1" } ]
2019-07-29
[ [ "Yayık", "Apdullah", "" ], [ "Kutlu", "Yakup", "" ], [ "Altan", "Gökhan", "" ] ]
As a type of pseudoinverse learning, the extreme learning machine (ELM) is able to achieve high performance at a rapid pace on benchmark datasets. However, when it is applied to real-life large data, a performance decline related to the low convergence of the singular value decomposition (SVD) method occurs. Our study aims to resolve this issue by replacing SVD with five theoretically and empirically more efficient methods: lower-upper triangularization, Hessenberg decomposition, Schur decomposition, the modified Gram-Schmidt algorithm and Householder reflection. Comparisons were made on an electroencephalography-based brain-computer interface classification problem to decide which method is the most useful. Results of subject-based classifications suggested that the Hessenberg decomposition method should be preferred if priority is given to training pace, whereas the Householder reflection method should be preferred if priority is given to performance.
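To make the comparison concrete, here is a minimal ELM in numpy where the SVD-based pseudoinverse is swapped for a Householder-QR solve, one of the factorizations the paper evaluates. The sizes, tanh activation, and toy regression target are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))                           # inputs
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(500, 1))   # toy regression target

# ELM: fixed random hidden layer, then a linear solve for the output weights.
L = 100
W, b = rng.normal(size=(10, L)), rng.normal(size=L)
H = np.tanh(X @ W + b)                                   # hidden-layer activations

beta_svd = np.linalg.pinv(H) @ y                         # SVD-based pseudoinverse (baseline)

Q, R = np.linalg.qr(H)                                   # reduced QR (Householder reflections in LAPACK)
beta_qr = np.linalg.solve(R, Q.T @ y)                    # same least-squares solution

print(np.linalg.norm(beta_svd - beta_qr))                # ~0: the two solvers agree numerically
```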
2211.16993
Xingyu Yan
Xingyu Yan (1), Licheng Wang (2), Lize Gu (1), Ziyi Li (3), Jingwen Suo (1) ((1) State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China. (2) School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, 100081, China. (3) State Key Laboratory of Information Security, Institute of Information Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China.)
Post-Quantum $\kappa$-to-1 Trapdoor Claw-free Functions from Extrapolated Dihedral Cosets
34 pages, 7 figures
null
null
null
cs.CR cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
\emph{Noisy trapdoor claw-free function} (NTCF) as a powerful post-quantum cryptographic tool can efficiently constrain actions of untrusted quantum devices. However, the original NTCF is essentially \emph{2-to-1} one-way function (NTCF$^1_2$). In this work, we attempt to further extend the NTCF$^1_2$ to achieve \emph{many-to-one} trapdoor claw-free functions with polynomial bounded preimage size. Specifically, we focus on a significant extrapolation of NTCF$^1_2$ by drawing on extrapolated dihedral cosets, thereby giving a model of NTCF$^1_{\kappa}$ where $\kappa$ is a polynomial integer. Then, we present an efficient construction of NTCF$^1_{\kappa}$ assuming \emph{quantum hardness of the learning with errors (LWE)} problem. We point out that NTCF can be used to bridge the LWE and the dihedral coset problem (DCP). By leveraging NTCF$^1_2$ (resp. NTCF$^1_{\kappa}$), our work reveals a new quantum reduction path from the LWE problem to the DCP (resp. extrapolated DCP). Finally, we demonstrate the NTCF$^1_{\kappa}$ can naturally be reduced to the NTCF$^1_2$, thereby achieving the same application for proving the quantumness.
[ { "created": "Wed, 30 Nov 2022 13:48:44 GMT", "version": "v1" }, { "created": "Fri, 21 Jul 2023 00:50:31 GMT", "version": "v2" } ]
2023-07-24
[ [ "Yan", "Xingyu", "" ], [ "Wang", "Licheng", "" ], [ "Gu", "Lize", "" ], [ "Li", "Ziyi", "" ], [ "Suo", "Jingwen", "" ] ]
\emph{Noisy trapdoor claw-free function} (NTCF) as a powerful post-quantum cryptographic tool can efficiently constrain actions of untrusted quantum devices. However, the original NTCF is essentially \emph{2-to-1} one-way function (NTCF$^1_2$). In this work, we attempt to further extend the NTCF$^1_2$ to achieve \emph{many-to-one} trapdoor claw-free functions with polynomial bounded preimage size. Specifically, we focus on a significant extrapolation of NTCF$^1_2$ by drawing on extrapolated dihedral cosets, thereby giving a model of NTCF$^1_{\kappa}$ where $\kappa$ is a polynomial integer. Then, we present an efficient construction of NTCF$^1_{\kappa}$ assuming \emph{quantum hardness of the learning with errors (LWE)} problem. We point out that NTCF can be used to bridge the LWE and the dihedral coset problem (DCP). By leveraging NTCF$^1_2$ (resp. NTCF$^1_{\kappa}$), our work reveals a new quantum reduction path from the LWE problem to the DCP (resp. extrapolated DCP). Finally, we demonstrate the NTCF$^1_{\kappa}$ can naturally be reduced to the NTCF$^1_2$, thereby achieving the same application for proving the quantumness.
cs/0512018
Anthony Mouraud
Anthony Mouraud (GRIMAAG, ISC), Didier Puzenat (GRIMAAG), H\'el\`ene Paugam-Moisy (ISC)
DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation framework
6 pages
null
null
null
cs.NE cs.LG
null
In a Spiking Neural Network (SNN), spike emissions are sparsely and irregularly distributed both in time and in the network architecture. Since a characteristic feature of SNNs is low average activity, efficient implementations of SNNs are usually based on Event-Driven Simulation (EDS). On the other hand, simulations of large-scale neural networks can take advantage of distributing the neurons on a set of processors (either a workstation cluster or a parallel computer). This article presents DAMNED, a large-scale SNN simulation framework able to gather the benefits of EDS and parallel computing. Two levels of parallelism are combined: distributed mapping of the neural topology at the network level, and local multithreaded allocation of resources for simultaneous processing of events at the neuron level. Based on the causality of events, a distributed solution is proposed for solving the complex problem of scheduling without a synchronization barrier.
[ { "created": "Mon, 5 Dec 2005 06:57:39 GMT", "version": "v1" }, { "created": "Tue, 21 Mar 2006 12:31:02 GMT", "version": "v2" } ]
2016-08-16
[ [ "Mouraud", "Anthony", "", "GRIMAAG, ISC" ], [ "Puzenat", "Didier", "", "GRIMAAG" ], [ "Paugam-Moisy", "Hélène", "", "ISC" ] ]
In a Spiking Neural Network (SNN), spike emissions are sparsely and irregularly distributed both in time and in the network architecture. Since a characteristic feature of SNNs is low average activity, efficient implementations of SNNs are usually based on Event-Driven Simulation (EDS). On the other hand, simulations of large-scale neural networks can take advantage of distributing the neurons on a set of processors (either a workstation cluster or a parallel computer). This article presents DAMNED, a large-scale SNN simulation framework able to gather the benefits of EDS and parallel computing. Two levels of parallelism are combined: distributed mapping of the neural topology at the network level, and local multithreaded allocation of resources for simultaneous processing of events at the neuron level. Based on the causality of events, a distributed solution is proposed for solving the complex problem of scheduling without a synchronization barrier.
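A minimal event-driven kernel of the kind EDS builds on can be written with a priority queue ordered by spike time, so inactive neurons consume no cycles between spikes. The toy below is a single-threaded sketch of that idea; the ring topology, weights, and delays are invented, and none of this is the DAMNED API.

```python
import heapq

# Event-driven toy: spikes are timestamped events on a priority queue.
SYNAPSES = {0: [1], 1: [2], 2: [0]}        # ring of 3 neurons
THRESHOLD, WEIGHT, DELAY = 1.0, 1.2, 1.0   # each arriving spike is suprathreshold here
potential = {n: 0.0 for n in SYNAPSES}

events = [(0.0, 0)]                        # (arrival time, target neuron)
t_end, fired = 10.0, []

while events:
    t, n = heapq.heappop(events)           # causality: always process the earliest event
    if t > t_end:
        break
    potential[n] += WEIGHT
    if potential[n] >= THRESHOLD:          # neuron n fires and resets
        potential[n] = 0.0
        fired.append((t, n))
        for m in SYNAPSES[n]:              # schedule delayed deliveries to its targets
            heapq.heappush(events, (t + DELAY, m))

print(fired)                               # spike train as (time, neuron) pairs
```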
1809.00758
Myungsu Chae
Myungsu Chae, Tae-Ho Kim, Young Hoon Shin, June-Woo Kim, and Soo-Young Lee
End-to-end Multimodal Emotion and Gender Recognition with Dynamic Joint Loss Weights
IROS 2018 Workshop on Crossmodal Learning for Intelligent Robotics
null
null
null
cs.LG cs.CV cs.SD eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-task learning is a method for improving the generalizability of multiple tasks. In order to perform multiple classification tasks with one neural network model, the losses of each task should be combined. Previous studies have mostly focused on multiple prediction tasks using a joint loss with static weights for training models, choosing the weights between tasks without sufficient consideration, by setting them uniformly or empirically. In this study, we propose a method to calculate the joint loss using dynamic weights to improve the total performance, instead of the individual performance, of tasks. We apply this method to design an end-to-end multimodal emotion and gender recognition model using audio and video data. This approach provides proper weights for the loss of each task when the training process ends. In our experiments, emotion and gender recognition with the proposed method yielded a lower joint loss, which is computed as the negative log-likelihood, than using static weights for the joint loss. Moreover, our proposed model has better generalizability than other models. To the best of our knowledge, this research is the first to demonstrate the strength of using dynamic weights for the joint loss for maximizing overall performance in emotion and gender recognition tasks.
[ { "created": "Tue, 4 Sep 2018 00:52:25 GMT", "version": "v1" }, { "created": "Thu, 6 Sep 2018 06:55:13 GMT", "version": "v2" }, { "created": "Tue, 2 Oct 2018 04:16:54 GMT", "version": "v3" } ]
2018-10-03
[ [ "Chae", "Myungsu", "" ], [ "Kim", "Tae-Ho", "" ], [ "Shin", "Young Hoon", "" ], [ "Kim", "June-Woo", "" ], [ "Lee", "Soo-Young", "" ] ]
Multi-task learning is a method for improving the generalizability of multiple tasks. In order to perform multiple classification tasks with one neural network model, the losses of each task should be combined. Previous studies have mostly focused on multiple prediction tasks using a joint loss with static weights for training models, choosing the weights between tasks without sufficient consideration, by setting them uniformly or empirically. In this study, we propose a method to calculate the joint loss using dynamic weights to improve the total performance, instead of the individual performance, of tasks. We apply this method to design an end-to-end multimodal emotion and gender recognition model using audio and video data. This approach provides proper weights for the loss of each task when the training process ends. In our experiments, emotion and gender recognition with the proposed method yielded a lower joint loss, which is computed as the negative log-likelihood, than using static weights for the joint loss. Moreover, our proposed model has better generalizability than other models. To the best of our knowledge, this research is the first to demonstrate the strength of using dynamic weights for the joint loss for maximizing overall performance in emotion and gender recognition tasks.
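One concrete way to realize dynamic joint-loss weights is homoscedastic uncertainty weighting, where per-task log-variances are learned alongside the model. The PyTorch sketch below uses that formulation as a stand-in; the paper's exact update rule may differ, and the loss values are dummies.

```python
import torch
import torch.nn as nn

class DynamicJointLoss(nn.Module):
    """Joint loss with learnable per-task weights (uncertainty weighting in the
    style of Kendall et al.; a stand-in for the paper's dynamic scheme)."""
    def __init__(self, num_tasks=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_i = log sigma_i^2

    def forward(self, losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            total = total + torch.exp(-s) * loss + s          # weights adapt during training
        return total

joint = DynamicJointLoss(num_tasks=2)
emotion_loss = torch.tensor(1.3)   # e.g. NLL from the emotion head (dummy value)
gender_loss = torch.tensor(0.4)    # NLL from the gender head (dummy value)
print(joint([emotion_loss, gender_loss]))
```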
1306.6168
Bruno Courcelle
Bruno Courcelle (LaBRI, IUF)
Clique-width and edge contraction
Information Processinhgs Letters 2013, In press
null
null
null
cs.DM cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that edge contractions do not preserve the property that a set of graphs has bounded clique-width. This property is preserved by contractions of edges, one end of which is a vertex of degree 2.
[ { "created": "Wed, 26 Jun 2013 08:54:30 GMT", "version": "v1" }, { "created": "Mon, 21 Oct 2013 10:42:34 GMT", "version": "v2" } ]
2013-10-22
[ [ "Courcelle", "Bruno", "", "LaBRI, IUF" ] ]
We prove that edge contractions do not preserve the property that a set of graphs has bounded clique-width. This property is preserved by contractions of edges, one end of which is a vertex of degree 2.
2311.10746
Mihran Miroyan
Mihran Miroyan, Shiny Weng, Rahul Shah, Lisa Yan, Narges Norouzi
EIT: Earnest Insight Toolkit for Evaluating Students' Earnestness in Interactive Lecture Participation Exercises
null
null
null
null
cs.CY cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In today's rapidly evolving educational landscape, traditional modes of passive information delivery are giving way to transformative pedagogical approaches that prioritize active student engagement. Within the context of large-scale hybrid classrooms, the challenge lies in fostering meaningful and active interaction between students and course content. This study delves into the significance of measuring students' earnestness during interactive lecture participation exercises. By analyzing students' responses to interactive lecture poll questions, establishing a clear rubric for evaluating earnestness, and conducting a comprehensive assessment, we introduce EIT (Earnest Insight Toolkit), a tool designed to assess students' engagement within interactive lecture participation exercises - particularly in the context of large-scale hybrid classrooms. Through the utilization of EIT, our objective is to equip educators with valuable means of identifying at-risk students for enhancing intervention and support strategies, as well as measuring students' levels of engagement with course content.
[ { "created": "Tue, 31 Oct 2023 07:05:00 GMT", "version": "v1" } ]
2023-11-21
[ [ "Miroyan", "Mihran", "" ], [ "Weng", "Shiny", "" ], [ "Shah", "Rahul", "" ], [ "Yan", "Lisa", "" ], [ "Norouzi", "Narges", "" ] ]
In today's rapidly evolving educational landscape, traditional modes of passive information delivery are giving way to transformative pedagogical approaches that prioritize active student engagement. Within the context of large-scale hybrid classrooms, the challenge lies in fostering meaningful and active interaction between students and course content. This study delves into the significance of measuring students' earnestness during interactive lecture participation exercises. By analyzing students' responses to interactive lecture poll questions, establishing a clear rubric for evaluating earnestness, and conducting a comprehensive assessment, we introduce EIT (Earnest Insight Toolkit), a tool designed to assess students' engagement within interactive lecture participation exercises - particularly in the context of large-scale hybrid classrooms. Through the utilization of EIT, our objective is to equip educators with valuable means of identifying at-risk students for enhancing intervention and support strategies, as well as measuring students' levels of engagement with course content.
2004.05220
Younes Abdi
Younes Abdi and Tapani Ristaniemi
Modeling and Mitigating Errors in Belief Propagation for Distributed Detection
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the behavior of the belief-propagation (BP) algorithm affected by erroneous data exchange in a wireless sensor network (WSN). The WSN conducts a distributed binary hypothesis test where the joint statistical behavior of the sensor observations is modeled by a Markov random field whose parameters are used to build the BP messages exchanged between the sensing nodes. Through linearization of the BP message-update rule, we analyze the behavior of the resulting erroneous decision variables and derive closed-form relationships that describe the impact of stochastic errors on the performance of the BP algorithm. We then develop a decentralized distributed optimization framework to enhance the system performance by mitigating the impact of errors via a distributed linear data-fusion scheme. Finally, we compare the results of the proposed analysis with the existing works and visualize, via computer simulations, the performance gain obtained by the proposed optimization.
[ { "created": "Fri, 10 Apr 2020 20:24:12 GMT", "version": "v1" } ]
2020-04-14
[ [ "Abdi", "Younes", "" ], [ "Ristaniemi", "Tapani", "" ] ]
We study the behavior of the belief-propagation (BP) algorithm affected by erroneous data exchange in a wireless sensor network (WSN). The WSN conducts a distributed binary hypothesis test where the joint statistical behavior of the sensor observations is modeled by a Markov random field whose parameters are used to build the BP messages exchanged between the sensing nodes. Through linearization of the BP message-update rule, we analyze the behavior of the resulting erroneous decision variables and derive closed-form relationships that describe the impact of stochastic errors on the performance of the BP algorithm. We then develop a decentralized distributed optimization framework to enhance the system performance by mitigating the impact of errors via a distributed linear data-fusion scheme. Finally, we compare the results of the proposed analysis with the existing works and visualize, via computer simulations, the performance gain obtained by the proposed optimization.
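The effect of erroneous message exchange can be reproduced on a toy two-sensor Markov random field: run sum-product updates and perturb the exchanged messages with additive noise. The potentials, noise model, and schedule below are illustrative assumptions, not the paper's WSN model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two binary sensors with correlated observations: local evidence phi, coupling psi.
phi = np.array([[0.8, 0.2],
                [0.4, 0.6]])                 # phi[i, x_i]
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])                 # psi[x_0, x_1]

def bp_beliefs(noise_std=0.0, iters=10):
    m01, m10 = np.ones(2), np.ones(2)        # messages 0->1 and 1->0
    for _ in range(iters):
        m01 = psi.T @ (phi[0] * m10)         # sum-product update, node 0 -> node 1
        m10 = psi @ (phi[1] * m01)           # node 1 -> node 0
        if noise_std > 0:                    # model the erroneous data exchange
            m01 = np.abs(m01 + rng.normal(scale=noise_std, size=2))
            m10 = np.abs(m10 + rng.normal(scale=noise_std, size=2))
        m01, m10 = m01 / m01.sum(), m10 / m10.sum()
    b0, b1 = phi[0] * m10, phi[1] * m01
    return b0 / b0.sum(), b1 / b1.sum()

print(bp_beliefs(0.0))                       # clean beliefs
print(bp_beliefs(0.3))                       # beliefs degraded by exchange errors
```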
2204.01090
Alaa Anani
Alaa Anani, Mohamed Ghanem and Lotfy Abdel Khaliq
Breaking the De-Pois Poisoning Defense
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Attacks on machine learning models have been, since their conception, a very persistent and evasive issue resembling an endless cat-and-mouse game. One major variant of such attacks is poisoning attacks which can indirectly manipulate an ML model. It has been observed over the years that the majority of proposed effective defense models are only effective when an attacker is not aware of them being employed. In this paper, we show that the attack-agnostic De-Pois defense is hardly an exception to that rule. In fact, we demonstrate its vulnerability to the simplest White-Box and Black-Box attacks by an attacker that knows the structure of the De-Pois defense model. In essence, the De-Pois defense relies on a critic model that can be used to detect poisoned data before passing it to the target model. In our work, we break this poison-protection layer by replicating the critic model and then performing a composed gradient-sign attack on both the critic and target models simultaneously -- allowing us to bypass the critic firewall to poison the target model.
[ { "created": "Sun, 3 Apr 2022 15:17:47 GMT", "version": "v1" } ]
2022-04-05
[ [ "Anani", "Alaa", "" ], [ "Ghanem", "Mohamed", "" ], [ "Khaliq", "Lotfy Abdel", "" ] ]
Attacks on machine learning models have been, since their conception, a very persistent and evasive issue resembling an endless cat-and-mouse game. One major variant of such attacks is poisoning attacks which can indirectly manipulate an ML model. It has been observed over the years that the majority of proposed effective defense models are only effective when an attacker is not aware of them being employed. In this paper, we show that the attack-agnostic De-Pois defense is hardly an exception to that rule. In fact, we demonstrate its vulnerability to the simplest White-Box and Black-Box attacks by an attacker that knows the structure of the De-Pois defense model. In essence, the De-Pois defense relies on a critic model that can be used to detect poisoned data before passing it to the target model. In our work, we break this poison-protection layer by replicating the critic model and then performing a composed gradient-sign attack on both the critic and target models simultaneously -- allowing us to bypass the critic firewall to poison the target model.
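The composed gradient-sign idea is a one-liner once both models are differentiable: take the sign of the input gradient of the summed critic and target losses. In the sketch below both "models" are tiny placeholder classifiers, and treating the critic loss as a cross-entropy on the same labels is a simplification of this sketch, not the De-Pois critic's actual objective.

```python
import torch
import torch.nn.functional as F

def composed_fgsm(x, y, target_model, critic_model, eps=0.03):
    """One-step gradient-sign attack on the sum of critic and target losses, so the
    same perturbation pushes past the critic firewall and fools the target model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(target_model(x_adv), y) + F.cross_entropy(critic_model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with two stand-in classifiers on 8x8 single-channel "images":
target_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
critic_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)
y = torch.randint(0, 10, (4,))
x_adv = composed_fgsm(x, y, target_model, critic_model)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```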
2304.12829
Jos\'e Cano
Ferheen Ayaz, Idris Zakariyya, Jos\'e Cano, Sye Loong Keoh, Jeremy Singer, Danilo Pau, Mounia Kharbouche-Harrari
Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
Accepted at IJCNN 2023. 8 pages, 5 figures
null
null
null
cs.LG cs.CR cs.PF
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reducing the memory footprint of Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), is essential to enable their deployment into resource-constrained tiny devices. However, a disadvantage of DNN models is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs. Therefore, the challenge is how to create accurate, robust, and tiny DNN models deployable on resource-constrained embedded devices. This paper reports the results of devising a tiny DNN model, robust to adversarial black-box and white-box attacks, trained with an automatic quantization-aware training framework, i.e. QKeras, with deep quantization loss accounted for in the learning loop, thereby making the designed DNNs more accurate for deployment on tiny devices. We investigated how QKeras and an adversarial robustness technique, Jacobian Regularization (JR), can provide a co-optimization strategy by exploiting the DNN topology and the per-layer JR approach to produce robust yet tiny deeply quantized DNN models. As a result, a new DNN model implementing this co-optimization strategy was conceived, developed and tested on three datasets containing both image and audio inputs, and its performance was compared with existing benchmarks against various white-box and black-box attacks. Experimental results demonstrated that on average our proposed DNN model resulted in 8.3% and 79.5% higher accuracy than MLCommons/Tiny benchmarks in the presence of white-box and black-box attacks on the CIFAR-10 image dataset and a subset of the Google Speech Commands audio dataset respectively. It was also 6.5% more accurate for black-box attacks on the SVHN image dataset.
[ { "created": "Tue, 25 Apr 2023 13:56:35 GMT", "version": "v1" } ]
2023-04-26
[ [ "Ayaz", "Ferheen", "" ], [ "Zakariyya", "Idris", "" ], [ "Cano", "José", "" ], [ "Keoh", "Sye Loong", "" ], [ "Singer", "Jeremy", "" ], [ "Pau", "Danilo", "" ], [ "Kharbouche-Harrari", "Mounia", "" ] ]
Reducing the memory footprint of Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), is essential to enable their deployment into resource-constrained tiny devices. However, a disadvantage of DNN models is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs. Therefore, the challenge is how to create accurate, robust, and tiny DNN models deployable on resource-constrained embedded devices. This paper reports the results of devising a tiny DNN model, robust to adversarial black-box and white-box attacks, trained with an automatic quantization-aware training framework, i.e. QKeras, with deep quantization loss accounted for in the learning loop, thereby making the designed DNNs more accurate for deployment on tiny devices. We investigated how QKeras and an adversarial robustness technique, Jacobian Regularization (JR), can provide a co-optimization strategy by exploiting the DNN topology and the per-layer JR approach to produce robust yet tiny deeply quantized DNN models. As a result, a new DNN model implementing this co-optimization strategy was conceived, developed and tested on three datasets containing both image and audio inputs, and its performance was compared with existing benchmarks against various white-box and black-box attacks. Experimental results demonstrated that on average our proposed DNN model resulted in 8.3% and 79.5% higher accuracy than MLCommons/Tiny benchmarks in the presence of white-box and black-box attacks on the CIFAR-10 image dataset and a subset of the Google Speech Commands audio dataset respectively. It was also 6.5% more accurate for black-box attacks on the SVHN image dataset.
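Jacobian Regularization penalizes an estimate of the Frobenius norm of the input-output Jacobian obtained via random output-space projections. A whole-network PyTorch version is sketched below; the paper applies JR per layer, and the model, sizes, and weight 0.01 here are placeholders.

```python
import torch
import torch.nn.functional as F

def jacobian_reg(model, x, n_proj=1):
    """Hutchinson-style estimate of ||J||_F^2 for the input-output Jacobian."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    C = out.shape[1]
    reg = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)         # random unit direction in output space
        Jv = torch.autograd.grad((out * v).sum(), x, create_graph=True)[0]
        reg = reg + C * Jv.pow(2).sum() / (n_proj * x.shape[0])
    return reg

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(8, 1, 8, 8)
y = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y) + 0.01 * jacobian_reg(model, x)  # task loss + JR term
loss.backward()
```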
1811.12670
Ziwei Liu
Weidong Yin, Ziwei Liu, Chen Change Loy
Instance-level Facial Attributes Transfer with Geometry-Aware Flow
To appear in AAAI 2019. Code and models are available at: https://github.com/wdyin/GeoGAN
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of instance-level facial attribute transfer without paired training data, e.g. faithfully transferring the exact mustache from a source face to a target face. This is a more challenging task than the conventional semantic-level attribute transfer, which only preserves the generic attribute style instead of instance-level traits. We propose the use of geometry-aware flow, which serves as a well-suited representation for modeling the transformation between instance-level facial attributes. Specifically, we leverage the facial landmarks as the geometric guidance to learn the differentiable flows automatically, despite the large pose gap that exists. Geometry-aware flow is able to warp the source face attribute into the target face context and generate a warp-and-blend result. To compensate for the potential appearance gap between source and target faces, we propose a hallucination sub-network that produces an appearance residual to further refine the warp-and-blend result. Finally, a cycle-consistency framework consisting of both an attribute transfer module and an attribute removal module is designed, so that abundant unpaired face images can be used as training data. Extensive evaluations validate the capability of our approach in transferring instance-level facial attributes faithfully across large pose and appearance gaps. Thanks to the flow representation, our approach can readily be applied to generate realistic details on high-resolution images.
[ { "created": "Fri, 30 Nov 2018 08:43:00 GMT", "version": "v1" } ]
2018-12-03
[ [ "Yin", "Weidong", "" ], [ "Liu", "Ziwei", "" ], [ "Loy", "Chen Change", "" ] ]
We address the problem of instance-level facial attribute transfer without paired training data, e.g. faithfully transferring the exact mustache from a source face to a target face. This is a more challenging task than the conventional semantic-level attribute transfer, which only preserves the generic attribute style instead of instance-level traits. We propose the use of geometry-aware flow, which serves as a well-suited representation for modeling the transformation between instance-level facial attributes. Specifically, we leverage the facial landmarks as the geometric guidance to learn the differentiable flows automatically, despite the large pose gap that exists. Geometry-aware flow is able to warp the source face attribute into the target face context and generate a warp-and-blend result. To compensate for the potential appearance gap between source and target faces, we propose a hallucination sub-network that produces an appearance residual to further refine the warp-and-blend result. Finally, a cycle-consistency framework consisting of both an attribute transfer module and an attribute removal module is designed, so that abundant unpaired face images can be used as training data. Extensive evaluations validate the capability of our approach in transferring instance-level facial attributes faithfully across large pose and appearance gaps. Thanks to the flow representation, our approach can readily be applied to generate realistic details on high-resolution images.
2108.06428
Yu Rong
Yu Rong, Takaaki Shiratori, Hanbyul Joo
FrankMocap: A Monocular 3D Whole-Body Pose Estimation System via Regression and Integration
Accepted to ICCV 2021 Workshops on Assistive Computer Vision and Robotics. An updated version of arXiv:2008.08324. Code, models and demo videos are available at:https://github.com/facebookresearch/frankmocap
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Most existing monocular 3D pose estimation approaches only focus on a single body part, neglecting the fact that the essential nuance of human motion is conveyed through a concert of subtle movements of face, hands, and body. In this paper, we present FrankMocap, a fast and accurate whole-body 3D pose estimation system that can produce 3D face, hands, and body simultaneously from in-the-wild monocular images. The core idea of FrankMocap is its modular design: We first run 3D pose regression methods for face, hands, and body independently, followed by composing the regression outputs via an integration module. The separate regression modules allow us to take full advantage of their state-of-the-art performances without compromising the original accuracy and reliability in practice. We develop three different integration modules that trade off between latency and accuracy. All of them are capable of providing simple yet effective solutions to unify the separate outputs into seamless whole-body pose estimation results. We quantitatively and qualitatively demonstrate that our modularized system outperforms both the optimization-based and end-to-end methods of estimating whole-body pose.
[ { "created": "Fri, 13 Aug 2021 23:57:27 GMT", "version": "v1" } ]
2021-08-24
[ [ "Rong", "Yu", "" ], [ "Shiratori", "Takaaki", "" ], [ "Joo", "Hanbyul", "" ] ]
Most existing monocular 3D pose estimation approaches only focus on a single body part, neglecting the fact that the essential nuance of human motion is conveyed through a concert of subtle movements of face, hands, and body. In this paper, we present FrankMocap, a fast and accurate whole-body 3D pose estimation system that can produce 3D face, hands, and body simultaneously from in-the-wild monocular images. The core idea of FrankMocap is its modular design: We first run 3D pose regression methods for face, hands, and body independently, followed by composing the regression outputs via an integration module. The separate regression modules allow us to take full advantage of their state-of-the-art performances without compromising the original accuracy and reliability in practice. We develop three different integration modules that trade off between latency and accuracy. All of them are capable of providing simple yet effective solutions to unify the separate outputs into seamless whole-body pose estimation results. We quantitatively and qualitatively demonstrate that our modularized system outperforms both the optimization-based and end-to-end methods of estimating whole-body pose.
1701.03129
Besat Kassaie
Besat Kassaie
De-identification In practice
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report our effort to identify sensitive information, a subset of the data items listed by HIPAA (the Health Insurance Portability and Accountability Act), in medical text using recent advances in natural language processing and machine learning techniques. We represent the words with high-dimensional continuous vectors learned by a variant of Word2Vec called Continuous Bag Of Words (CBOW). We feed the word vectors into a simple neural network with a Long Short-Term Memory (LSTM) architecture. Without any attempt to extract manually crafted features, and considering that our medical dataset is too small to be fed into a neural network, we obtained promising results. The results encouraged us to consider a larger scale of the project with precise parameter tuning and other possible improvements.
[ { "created": "Wed, 11 Jan 2017 19:22:56 GMT", "version": "v1" } ]
2017-01-13
[ [ "Kassaie", "Besat", "" ] ]
We report our effort to identify sensitive information, a subset of the data items listed by HIPAA (the Health Insurance Portability and Accountability Act), in medical text using recent advances in natural language processing and machine learning techniques. We represent the words with high-dimensional continuous vectors learned by a variant of Word2Vec called Continuous Bag Of Words (CBOW). We feed the word vectors into a simple neural network with a Long Short-Term Memory (LSTM) architecture. Without any attempt to extract manually crafted features, and considering that our medical dataset is too small to be fed into a neural network, we obtained promising results. The results encouraged us to consider a larger scale of the project with precise parameter tuning and other possible improvements.
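The described pipeline amounts to pretrained word vectors feeding an LSTM token tagger; a minimal PyTorch skeleton follows. All sizes are invented, and in practice the embedding matrix would be initialized from the CBOW Word2Vec vectors rather than learned from scratch.

```python
import torch
import torch.nn as nn

class PHITagger(nn.Module):
    """Minimal sketch: word vectors feeding an LSTM that tags each token as
    sensitive (a HIPAA item) or not. Sizes are illustrative placeholders."""
    def __init__(self, vocab_size=5000, dim=100, hidden=64, n_tags=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)    # would be loaded with CBOW vectors
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_tags)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                          # per-token tag logits

model = PHITagger()
tokens = torch.randint(0, 5000, (2, 12))            # batch of 2 sentences, 12 tokens each
print(model(tokens).shape)                          # (2, 12, 2)
```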
2211.01778
Hyungtae Lee
Yi-Ting Shen, Hyungtae Lee, Heesung Kwon, Shuvra Shikhar Bhattacharyya
Progressive Transformation Learning for Leveraging Virtual Images in Training
CVPR 2023 (Selected as Highlight)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
To effectively interrogate UAV-based images for detecting objects of interest, such as humans, it is essential to acquire large-scale UAV-based datasets that include human instances with various poses captured from widely varying viewing angles. As a viable alternative to laborious and costly data curation, we introduce Progressive Transformation Learning (PTL), which gradually augments a training dataset by adding transformed virtual images with enhanced realism. Generally, a virtual2real transformation generator in the conditional GAN framework suffers from quality degradation when a large domain gap exists between real and virtual images. To deal with the domain gap, PTL takes a novel approach that progressively iterates the following three steps: 1) select a subset from a pool of virtual images according to the domain gap, 2) transform the selected virtual images to enhance realism, and 3) add the transformed virtual images to the training set while removing them from the pool. In PTL, accurately quantifying the domain gap is critical. To do that, we theoretically demonstrate that the feature representation space of a given object detector can be modeled as a multivariate Gaussian distribution from which the Mahalanobis distance between a virtual object and the Gaussian distribution of each object category in the representation space can be readily computed. Experiments show that PTL results in a substantial performance increase over the baseline, especially in the small data and the cross-domain regime.
[ { "created": "Thu, 3 Nov 2022 13:04:15 GMT", "version": "v1" }, { "created": "Mon, 27 Mar 2023 19:21:27 GMT", "version": "v2" } ]
2023-03-29
[ [ "Shen", "Yi-Ting", "" ], [ "Lee", "Hyungtae", "" ], [ "Kwon", "Heesung", "" ], [ "Bhattacharyya", "Shuvra Shikhar", "" ] ]
To effectively interrogate UAV-based images for detecting objects of interest, such as humans, it is essential to acquire large-scale UAV-based datasets that include human instances with various poses captured from widely varying viewing angles. As a viable alternative to laborious and costly data curation, we introduce Progressive Transformation Learning (PTL), which gradually augments a training dataset by adding transformed virtual images with enhanced realism. Generally, a virtual2real transformation generator in the conditional GAN framework suffers from quality degradation when a large domain gap exists between real and virtual images. To deal with the domain gap, PTL takes a novel approach that progressively iterates the following three steps: 1) select a subset from a pool of virtual images according to the domain gap, 2) transform the selected virtual images to enhance realism, and 3) add the transformed virtual images to the training set while removing them from the pool. In PTL, accurately quantifying the domain gap is critical. To do that, we theoretically demonstrate that the feature representation space of a given object detector can be modeled as a multivariate Gaussian distribution from which the Mahalanobis distance between a virtual object and the Gaussian distribution of each object category in the representation space can be readily computed. Experiments show that PTL results in a substantial performance increase over the baseline, especially in the small data and the cross-domain regime.
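The domain-gap quantity used for selection is a per-sample Mahalanobis distance to a Gaussian fitted on real features of the object category; a numpy version is below, with synthetic feature vectors standing in for the detector's representations.

```python
import numpy as np

def mahalanobis_gap(real_feats, virt_feats):
    """Distance of each virtual sample's feature to the Gaussian fitted on real
    features of the same category, as in PTL's selection step."""
    mu = real_feats.mean(axis=0)
    cov = np.cov(real_feats, rowvar=False) + 1e-6 * np.eye(real_feats.shape[1])
    prec = np.linalg.inv(cov)
    diff = virt_feats - mu
    return np.sqrt(np.einsum('nd,dk,nk->n', diff, prec, diff))

rng = np.random.default_rng(4)
real = rng.normal(size=(200, 16))                 # real-image features (synthetic here)
virtual = rng.normal(loc=0.8, size=(50, 16))      # shifted distribution -> larger gap
d = mahalanobis_gap(real, virtual)
selected = np.argsort(d)[:10]                     # virtual images picked for transformation
print(d.mean(), selected[:5])
```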
0910.3301
Julian McAuley
Julian J. McAuley, Tiberio S. Caetano
Faster Algorithms for Max-Product Message-Passing
34 pages, 22 figures
null
null
null
cs.AI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maximum A Posteriori inference in graphical models is often solved via message-passing algorithms, such as the junction-tree algorithm, or loopy belief-propagation. The exact solution to this problem is well known to be exponential in the size of the model's maximal cliques after it is triangulated, while approximate inference is typically exponential in the size of the model's factors. In this paper, we take advantage of the fact that many models have maximal cliques that are larger than their constituent factors, and also of the fact that many factors consist entirely of latent variables (i.e., they do not depend on an observation). This is a common case in a wide variety of applications, including grids, trees, and ring-structured models. In such cases, we are able to decrease the exponent of complexity for message-passing by 0.5 for both exact and approximate inference.
[ { "created": "Sat, 17 Oct 2009 13:42:35 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2009 04:02:16 GMT", "version": "v2" }, { "created": "Sat, 5 Dec 2009 03:41:24 GMT", "version": "v3" }, { "created": "Thu, 8 Apr 2010 05:24:55 GMT", "version": "v4" } ]
2010-04-09
[ [ "McAuley", "Julian J.", "" ], [ "Caetano", "Tiberio S.", "" ] ]
Maximum A Posteriori inference in graphical models is often solved via message-passing algorithms, such as the junction-tree algorithm, or loopy belief-propagation. The exact solution to this problem is well known to be exponential in the size of the model's maximal cliques after it is triangulated, while approximate inference is typically exponential in the size of the model's factors. In this paper, we take advantage of the fact that many models have maximal cliques that are larger than their constituent factors, and also of the fact that many factors consist entirely of latent variables (i.e., they do not depend on an observation). This is a common case in a wide variety of applications, including grids, trees, and ring-structured models. In such cases, we are able to decrease the exponent of complexity for message-passing by 0.5 for both exact and approximate inference.
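For reference, plain max-product on a chain (the Viterbi recursion) is the baseline setting the paper accelerates when cliques exceed factor sizes; a numpy implementation with backtracking follows, using random potentials.

```python
import numpy as np

# Max-product (Viterbi) message passing on a chain MRF with T variables, K states.
K, T = 3, 6
rng = np.random.default_rng(5)
unary = rng.random((T, K))                 # phi_t(x_t)
pair = rng.random((K, K))                  # psi(x_t, x_{t+1}), shared across steps

msg = np.ones((T, K))
back = np.zeros((T, K), dtype=int)
for t in range(1, T):                      # forward pass of max-product messages
    scores = (unary[t - 1] * msg[t - 1])[:, None] * pair   # (prev state, cur state)
    msg[t] = scores.max(axis=0)
    back[t] = scores.argmax(axis=0)

x = np.zeros(T, dtype=int)
x[-1] = np.argmax(unary[-1] * msg[-1])     # best final state
for t in range(T - 1, 0, -1):              # backtrack the MAP assignment
    x[t - 1] = back[t, x[t]]
print(x)
```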
1002.1154
Vishal Goyal
Dorsaf Sebai, Abderrazak Jemai, Imed Bennour
Performance Analysis of Software to Hardware Task Migration in Codesign
International Journal of Computer Science Issues, IJCSI, Vol. 7, Issue 1, No. 1, January 2010, http://ijcsi.org/articles/Performance-Analysis-of-Software-to-Hardware-Task-Migration-in-Codesign.php
International Journal of Computer Science Issues, IJCSI, Vol. 7, Issue 1, No. 1, January 2010, http://ijcsi.org/articles/Performance-Analysis-of-Software-to-Hardware-Task-Migration-in-Codesign.php
null
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complexity of multimedia applications in terms of intensity of computation and heterogeneity of treated data led the designers to embark them on multiprocessor systems on chip. The complexity of these systems on one hand and the expectations of the consumers on the other hand complicate the designers job to conceive and supply strong and successful systems in the shortest deadlines. They have to explore the different solutions of the design space and estimate their performances in order to deduce the solution that respects their design constraints. In this context, we propose the modeling of one of the design space possible solutions: the software to hardware task migration. This modeling exploits the synchronous dataflow graphs to take into account the different migration impacts and estimate their performances in terms of throughput.
[ { "created": "Fri, 5 Feb 2010 08:51:44 GMT", "version": "v1" } ]
2010-02-08
[ [ "Sebai", "Dorsaf", "" ], [ "Jemai", "Abderrazak", "" ], [ "Bennour", "Imed", "" ] ]
The complexity of multimedia applications in terms of intensity of computation and heterogeneity of treated data led the designers to embark them on multiprocessor systems on chip. The complexity of these systems on one hand and the expectations of the consumers on the other hand complicate the designers job to conceive and supply strong and successful systems in the shortest deadlines. They have to explore the different solutions of the design space and estimate their performances in order to deduce the solution that respects their design constraints. In this context, we propose the modeling of one of the design space possible solutions: the software to hardware task migration. This modeling exploits the synchronous dataflow graphs to take into account the different migration impacts and estimate their performances in terms of throughput.
1307.5494
Laura Balzano
Laura Balzano and Stephen J. Wright
On GROUSE and Incremental SVD
null
null
null
null
cs.NA cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration. Recent analysis has shown that GROUSE converges locally at an expected linear rate, under certain assumptions. GROUSE has a similar flavor to the incremental singular value decomposition algorithm, which updates the SVD of a matrix following addition of a single column. In this paper, we modify the incremental SVD approach to handle missing data, and demonstrate that this modified approach is equivalent to GROUSE, for a certain choice of an algorithmic parameter.
[ { "created": "Sun, 21 Jul 2013 03:47:16 GMT", "version": "v1" } ]
2013-07-23
[ [ "Balzano", "Laura", "" ], [ "Wright", "Stephen J.", "" ] ]
GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this subspace, where only a subset of components of each vector is revealed at each iteration. Recent analysis has shown that GROUSE converges locally at an expected linear rate, under certain assumptions. GROUSE has a similar flavor to the incremental singular value decomposition algorithm, which updates the SVD of a matrix following addition of a single column. In this paper, we modify the incremental SVD approach to handle missing data, and demonstrate that this modified approach is equivalent to GROUSE, for a certain choice of an algorithmic parameter.
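A single GROUSE iteration is a rank-one rotation of the current orthonormal basis toward the newly observed partial vector. The sketch below follows the standard update with step scaling eta; the streaming test problem is synthetic.

```python
import numpy as np

def grouse_step(U, v, omega, eta=1.0):
    """One GROUSE update: rotate the orthonormal basis U (n x d) toward a vector v
    observed only on the index set omega."""
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)  # weights from observed entries
    p = U @ w                                                # current prediction of v
    r = np.zeros(U.shape[0])
    r[omega] = v[omega] - U[omega] @ w                       # residual, zero off omega
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U
    theta = eta * np.arctan(rn / pn)
    return U + np.outer((np.cos(theta) - 1) * p / pn + np.sin(theta) * r / rn, w / wn)

rng = np.random.default_rng(6)
n, d = 50, 3
U_true, _ = np.linalg.qr(rng.normal(size=(n, d)))            # ground-truth subspace
U, _ = np.linalg.qr(rng.normal(size=(n, d)))                 # random initialization
for _ in range(500):                                         # stream of partial observations
    v = U_true @ rng.normal(size=d)
    omega = rng.choice(n, size=25, replace=False)
    U = grouse_step(U, v, omega)
print(np.linalg.norm(U - U_true @ (U_true.T @ U)))           # residual shrinks toward 0
```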
2102.13090
Qianqian Wang
Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser
IBRNet: Learning Multi-View Image-Based Rendering
CVPR 2021. Project page: https://ibrnet.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multi-view posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods. Project page: https://ibrnet.github.io/
[ { "created": "Thu, 25 Feb 2021 18:56:21 GMT", "version": "v1" }, { "created": "Tue, 6 Apr 2021 23:19:33 GMT", "version": "v2" } ]
2021-04-08
[ [ "Wang", "Qianqian", "" ], [ "Wang", "Zhicheng", "" ], [ "Genova", "Kyle", "" ], [ "Srinivasan", "Pratul", "" ], [ "Zhou", "Howard", "" ], [ "Barron", "Jonathan T.", "" ], [ "Martin-Brualla", "Ricardo", "" ], [ "Snavely", "Noah", "" ], [ "Funkhouser", "Thomas", "" ] ]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multi-view posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods. Project page: https://ibrnet.github.io/
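The classic volume rendering referred to here composites per-sample densities and colors along a ray. A minimal numpy sketch of that compositing (with hypothetical densities and colors, independent of IBRNet's network) is:

```python
import numpy as np

def composite(sigma, rgb, deltas):
    """Classic volume rendering along one ray: alpha-composite per-sample
    densities and colors (a generic sketch, not IBRNet's architecture)."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # segment opacities
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance T_i
    weights = trans * alpha                                        # per-sample weights
    return weights @ rgb                                           # (3,) pixel color

sigma = np.array([0.1, 0.8, 2.0])                     # hypothetical densities
rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1.0]])   # per-sample colors
deltas = np.full(3, 0.5)                              # segment lengths along the ray
print(composite(sigma, rgb, deltas))
```

Because every operation is differentiable, gradients flow from the rendered pixel back to the per-sample predictions, which is what allows training from posed images alone.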
2212.12756
Lo\"ic Paulev\'e
Kyungduk Moon, Kangbok Lee, Lo\"ic Paulev\'e
Computational Complexity of Minimal Trap Spaces in Boolean Networks
null
null
null
null
cs.DM cs.CC math.DS
http://creativecommons.org/licenses/by/4.0/
A Boolean network (BN) is a discrete dynamical system defined by a Boolean function that maps the domain to itself. A trap space of a BN is a generalization of a fixed point, defined as a sub-hypercube closed under the function of the BN. A trap space is minimal if it does not contain any smaller trap space. Minimal trap spaces have applications for the analysis of attractors of BNs with various update modes. This paper establishes the computational complexity of three decision problems related to minimal trap spaces: deciding the trap space property of a sub-hypercube, deciding its minimality, and deciding the membership of a given configuration in a minimal trap space. For several cases of Boolean function representations, we investigate the computational complexity of each problem. In the general case, we demonstrate that the trap space property is coNP-complete, and the minimality and the membership properties are $\Pi_2^{\text P}$-complete. The complexities drop by one level in the polynomial hierarchy whenever the local functions of the BN are either unate, or are represented using truth tables, binary decision diagrams, or double DNFs (Petri net encoding): the trap space property can be decided in polynomial time, whereas deciding minimality and membership are both coNP-complete. When the BN is given as its functional graph, all these problems are in P.
[ { "created": "Sat, 24 Dec 2022 15:37:01 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 13:24:45 GMT", "version": "v2" } ]
2023-03-15
[ [ "Moon", "Kyungduk", "" ], [ "Lee", "Kangbok", "" ], [ "Paulevé", "Loïc", "" ] ]
A Boolean network (BN) is a discrete dynamical system defined by a Boolean function that maps the domain to itself. A trap space of a BN is a generalization of a fixed point, defined as a sub-hypercube closed under the function of the BN. A trap space is minimal if it does not contain any smaller trap space. Minimal trap spaces have applications for the analysis of attractors of BNs with various update modes. This paper establishes the computational complexity of three decision problems related to minimal trap spaces: deciding the trap space property of a sub-hypercube, deciding its minimality, and deciding the membership of a given configuration in a minimal trap space. For several cases of Boolean function representations, we investigate the computational complexity of each problem. In the general case, we demonstrate that the trap space property is coNP-complete, and the minimality and the membership properties are $\Pi_2^{\text P}$-complete. The complexities drop by one level in the polynomial hierarchy whenever the local functions of the BN are either unate, or are represented using truth tables, binary decision diagrams, or double DNFs (Petri net encoding): the trap space property can be decided in polynomial time, whereas deciding minimality and membership are both coNP-complete. When the BN is given as its functional graph, all these problems are in P.
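As a worked illustration of the trap space property, the brute-force checker below enumerates all states of a sub-hypercube and tests that the image under the Boolean function stays inside it. The 3-node network at the bottom is hypothetical, and the enumeration is exponential in the number of free variables, so this is for intuition only, not one of the paper's polynomial-time cases.

```python
from itertools import product

def is_trap_space(fs, fixed):
    """Brute-force check that the sub-hypercube fixing `fixed` (var -> 0/1)
    is closed under the BN `fs` (one Boolean function per component)."""
    n = len(fs)
    free = [i for i in range(n) if i not in fixed]
    for bits in product((0, 1), repeat=len(free)):
        x = [0] * n
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, bits):
            x[i] = v
        y = [f(x) for f in fs]                  # image of the state
        if any(y[i] != v for i, v in fixed.items()):
            return False                        # image escapes the sub-hypercube
    return True

# Hypothetical 3-node BN: x0' = x0 AND x1, x1' = x0, x2' = NOT x2
fs = [lambda x: x[0] & x[1], lambda x: x[0], lambda x: 1 - x[2]]
print(is_trap_space(fs, {0: 0, 1: 0}))          # True: {x0=0, x1=0, x2 free} is a trap space
```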
1702.08005
Kunwoo Park
Kunwoo Park and Meeyoung Cha and Haewoon Kwak and Kuan-Ta Chen
Achievement and Friends: Key Factors of Player Retention Vary Across Player Levels in Online Multiplayer Games
9 pages, 4 figures, WWW '17 companion
null
null
null
cs.SI cs.HC
http://creativecommons.org/licenses/by/4.0/
Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that adapt to the players' needs.
[ { "created": "Sun, 26 Feb 2017 09:08:59 GMT", "version": "v1" } ]
2017-02-28
[ [ "Park", "Kunwoo", "" ], [ "Cha", "Meeyoung", "" ], [ "Kwak", "Haewoon", "" ], [ "Chen", "Kuan-Ta", "" ] ]
Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that adapt to the players' needs.
2106.13476
Zhen Gao
Malong Ke, Zhen Gao, Yang Huang, Guoru Ding, Derrick Wing Kwan Ng, Qihui Wu, and Jun Zhang
An Edge Computing Paradigm for Massive IoT Connectivity over High-Altitude Platform Networks
8 pages, 6 figures. The current version has been accepted by IEEE Wireless Communications Magazine Special Issue on Aerial Computing: Drones for Multi-Access Edge Computing
null
null
null
cs.IT cs.NI eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of the Internet-of-Things (IoT) era, the ever-increasing number of devices and emerging applications have triggered the need for ubiquitous connectivity and more efficient computing paradigms. These stringent demands have posed significant challenges to the current wireless networks and their computing architectures. In this article, we propose a high-altitude platform (HAP) network-enabled edge computing paradigm to tackle the key issues of massive IoT connectivity. Specifically, we first provide a comprehensive overview of the recent advances in non-terrestrial network-based edge computing architectures. Then, the limitations of the existing solutions are further summarized from the perspectives of the network architecture, random access procedure, and multiple access techniques. To overcome the limitations, we propose a HAP-enabled aerial cell-free massive multiple-input multiple-output network to realize the edge computing paradigm, where multiple HAPs cooperate via the edge servers to serve IoT devices. For the case of a massive number of devices, we further adopt a grant-free massive access scheme to guarantee low-latency and high-efficiency massive IoT connectivity to the network. Besides, a case study is provided to demonstrate the effectiveness of the proposed solution. Finally, to shed light on the future research directions of HAP network-enabled edge computing paradigms, the key challenges and open issues are discussed.
[ { "created": "Fri, 25 Jun 2021 07:41:16 GMT", "version": "v1" } ]
2021-06-28
[ [ "Ke", "Malong", "" ], [ "Gao", "Zhen", "" ], [ "Huang", "Yang", "" ], [ "Ding", "Guoru", "" ], [ "Ng", "Derrick Wing Kwan", "" ], [ "Wu", "Qihui", "" ], [ "Zhang", "Jun", "" ] ]
With the advent of the Internet-of-Things (IoT) era, the ever-increasing number of devices and emerging applications have triggered the need for ubiquitous connectivity and more efficient computing paradigms. These stringent demands have posed significant challenges to the current wireless networks and their computing architectures. In this article, we propose a high-altitude platform (HAP) network-enabled edge computing paradigm to tackle the key issues of massive IoT connectivity. Specifically, we first provide a comprehensive overview of the recent advances in non-terrestrial network-based edge computing architectures. Then, the limitations of the existing solutions are further summarized from the perspectives of the network architecture, random access procedure, and multiple access techniques. To overcome the limitations, we propose a HAP-enabled aerial cell-free massive multiple-input multiple-output network to realize the edge computing paradigm, where multiple HAPs cooperate via the edge servers to serve IoT devices. For the case of a massive number of devices, we further adopt a grant-free massive access scheme to guarantee low-latency and high-efficiency massive IoT connectivity to the network. Besides, a case study is provided to demonstrate the effectiveness of the proposed solution. Finally, to shed light on the future research directions of HAP network-enabled edge computing paradigms, the key challenges and open issues are discussed.
2403.09548
Jo\~ao Manoel Herrera Pinheiro
Jo\~ao Manoel Herrera Pinheiro, Marcelo Becker
Breast Cancer Classification Using Gradient Boosting Algorithms Focusing on Reducing the False Negative and SHAP for Explainability
9 pages, 16 figures
null
null
null
cs.LG cs.CY q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Cancer is among the diseases that kill the most women worldwide, with breast cancer responsible for the highest number of cancer cases and, consequently, deaths. However, mortality can be reduced by early detection and, consequently, early treatment. Any advance in the detection or prediction of this kind of cancer is therefore important for healthier lives. Many studies focus on a model with high accuracy in cancer prediction, but accuracy alone may not always be a reliable metric. This study takes an investigative approach to the performance of different boosting-based machine learning algorithms for predicting breast cancer, focusing on the recall metric. Boosting algorithms have proven to be an effective tool for detecting medical diseases. The dataset from the University of California, Irvine (UCI) repository has been utilized to train and test the classifiers. The main objective of this study is to use state-of-the-art boosting algorithms such as AdaBoost, XGBoost, CatBoost, and LightGBM to predict and diagnose breast cancer and to find the most effective metric regarding recall, ROC-AUC, and the confusion matrix. Furthermore, our study is the first to use these four boosting algorithms with Optuna, a library for hyperparameter optimization, and the SHAP method to improve the interpretability of our model, which can serve as a support tool for identifying and predicting breast cancer. We were able to improve AUC or recall for all the models and to reduce false negatives for AdaBoost and LightGBM; the final AUC was above 99.41\% for all models.
[ { "created": "Thu, 14 Mar 2024 16:35:43 GMT", "version": "v1" } ]
2024-03-15
[ [ "Pinheiro", "João Manoel Herrera", "" ], [ "Becker", "Marcelo", "" ] ]
Cancer is among the diseases that kill the most women worldwide, with breast cancer responsible for the highest number of cancer cases and, consequently, deaths. However, mortality can be reduced by early detection and, consequently, early treatment. Any advance in the detection or prediction of this kind of cancer is therefore important for healthier lives. Many studies focus on a model with high accuracy in cancer prediction, but accuracy alone may not always be a reliable metric. This study takes an investigative approach to the performance of different boosting-based machine learning algorithms for predicting breast cancer, focusing on the recall metric. Boosting algorithms have proven to be an effective tool for detecting medical diseases. The dataset from the University of California, Irvine (UCI) repository has been utilized to train and test the classifiers. The main objective of this study is to use state-of-the-art boosting algorithms such as AdaBoost, XGBoost, CatBoost, and LightGBM to predict and diagnose breast cancer and to find the most effective metric regarding recall, ROC-AUC, and the confusion matrix. Furthermore, our study is the first to use these four boosting algorithms with Optuna, a library for hyperparameter optimization, and the SHAP method to improve the interpretability of our model, which can serve as a support tool for identifying and predicting breast cancer. We were able to improve AUC or recall for all the models and to reduce false negatives for AdaBoost and LightGBM; the final AUC was above 99.41\% for all models.
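A minimal sketch of the recall-focused evaluation, using scikit-learn's built-in copy of the UCI breast cancer data and a generic gradient-boosting classifier standing in for the paper's tuned AdaBoost/XGBoost/CatBoost/LightGBM + Optuna pipeline; the 0.3 decision threshold is an illustrative choice for trading precision against false negatives.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
y = 1 - y                            # flip labels so 1 = malignant (the costly class)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.3).astype(int)    # threshold < 0.5 favors recall, i.e.
                                     # fewer false negatives (missed cancers)

print("recall :", recall_score(y_te, pred))
print("ROC-AUC:", roc_auc_score(y_te, proba))
print(confusion_matrix(y_te, pred))
```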
2305.16360
Hao Wang
Yuxin Huang, Hao Wang, Zhaoran Liu, Licheng Pan, Haozhe Li, Xinggao Liu
Modeling Task Relationships in Multi-variate Soft Sensor with Balanced Mixture-of-Experts
null
null
10.1109/TII.2022.3202909
null
cs.LG cs.CE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate estimation of multiple quality variables is critical for building industrial soft sensor models, which have long been confronted with data efficiency and negative transfer issues. Methods sharing backbone parameters among tasks address the data efficiency issue; however, they still fail to mitigate the negative transfer problem. To address this issue, a balanced Mixture-of-Experts (BMoE) is proposed in this work, which consists of a multi-gate mixture-of-experts (MMoE) module and a task gradient balancing (TGB) module. The MMoE module aims to portray task relationships, while the TGB module balances the gradients among tasks dynamically. Both cooperate to mitigate the negative transfer problem. Experiments on a typical sulfur recovery unit demonstrate that BMoE models task relationships and balances the training process effectively, and significantly outperforms baseline models.
[ { "created": "Thu, 25 May 2023 07:32:03 GMT", "version": "v1" } ]
2023-05-29
[ [ "Huang", "Yuxin", "" ], [ "Wang", "Hao", "" ], [ "Liu", "Zhaoran", "" ], [ "Pan", "Licheng", "" ], [ "Li", "Haozhe", "" ], [ "Liu", "Xinggao", "" ] ]
Accurate estimation of multiple quality variables is critical for building industrial soft sensor models, which have long been confronted with data efficiency and negative transfer issues. Methods sharing backbone parameters among tasks address the data efficiency issue; however, they still fail to mitigate the negative transfer problem. To address this issue, a balanced Mixture-of-Experts (BMoE) is proposed in this work, which consists of a multi-gate mixture-of-experts (MMoE) module and a task gradient balancing (TGB) module. The MMoE module aims to portray task relationships, while the TGB module balances the gradients among tasks dynamically. Both cooperate to mitigate the negative transfer problem. Experiments on a typical sulfur recovery unit demonstrate that BMoE models task relationships and balances the training process effectively, and significantly outperforms baseline models.
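To make the MMoE structure concrete, the numpy sketch below runs one forward pass: shared experts, one softmax gate per task, and a task-specific mixture of expert outputs. Sizes and weights are hypothetical, and the TGB gradient-balancing module is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, n_tasks, h = 16, 4, 3, 8             # hypothetical sizes

W_exp = rng.normal(size=(n_experts, d, h))          # one small expert per slot
W_gate = rng.normal(size=(n_tasks, d, n_experts))   # one gating network per task

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mmoe(x):
    """Multi-gate mixture-of-experts forward pass for one input (sketch)."""
    experts = np.stack([np.tanh(x @ W) for W in W_exp])   # (n_experts, h)
    outs = []
    for t in range(n_tasks):
        g = softmax(x @ W_gate[t])                        # task-specific gate weights
        outs.append(g @ experts)                          # weighted mix of experts
    return outs                                           # one (h,) feature per task

print([o.shape for o in mmoe(rng.normal(size=d))])
```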
2405.07265
Andreas A{\ss}muth
Nicholas J\"ager and Andreas A{\ss}muth
An Approach for Decentralized Authentication in Networks of UAVs
5 pages
Proc of the 12th International Conference on Cloud Computing, GRIDs, and Virtualization (Cloud Computing 2021), Porto Portugal, April 2021, pp. 13-17, ISSN 2308-4294
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a decentralized authentication system for networks of unmanned aerial vehicles. A blockchain-based public key infrastructure allows the usage of public key cryptography and public key based authentication protocols. The blockchain provides a common storage of the public keys and their relations and can provide the required information for the authentication process. Furthermore, the unmanned aerial vehicles store selected parts of the blockchain in order to operate independently in areas where they might not have access to the Internet. This allows unmanned aerial vehicles to authenticate entities of the network, like other unmanned aerial vehicles, cloud services, cars, and any computer.
[ { "created": "Sun, 12 May 2024 12:19:48 GMT", "version": "v1" } ]
2024-05-14
[ [ "Jäger", "Nicholas", "" ], [ "Aßmuth", "Andreas", "" ] ]
We propose a decentralized authentication system for networks of unmanned aerial vehicles. A blockchain-based public key infrastructure allows the usage of public key cryptography and public key based authentication protocols. The blockchain provides a common storage of the public keys and their relations and can provide the required information for the authentication process. Furthermore, the unmanned aerial vehicles store selected parts of the blockchain in order to operate independently in areas where they might not have access to the Internet. This allows unmanned aerial vehicles to authenticate entities of the network, like other unmanned aerial vehicles, cloud services, cars, and any computer.
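A minimal sketch of the resulting public-key authentication step, assuming the verifier has already retrieved the prover's public key from the blockchain (the on-chain PKI itself is out of scope here); it uses the Python `cryptography` package's Ed25519 primitives for a simple challenge-response.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sk = Ed25519PrivateKey.generate()     # prover's long-term signing key
vk = sk.public_key()                  # what the blockchain would store

challenge = os.urandom(32)            # verifier's fresh nonce
signature = sk.sign(challenge)        # prover signs the nonce

try:
    vk.verify(signature, challenge)   # verifier checks against the on-chain key
    print("peer authenticated")
except InvalidSignature:
    print("authentication failed")
```

A real protocol would additionally bind identities and timestamps into the signed message to prevent replay; this sketch shows only the key-based core.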
1310.5930
Adam Noel
Adam Noel, Karen C. Cheung, Robert Schober
A Unifying Model for External Noise Sources and ISI in Diffusive Molecular Communication
14 pages, 7 figures, 4 tables, 1 appendix. To appear in IEEE Journal on Selected Areas in Communications (JSAC). Submitted October 21, 2013, revised April 21, 2014, accepted June 3, 2014
null
10.1109/JSAC.2014.2367693
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the impact of external noise sources, including interfering transmitters, on a diffusive molecular communication system, where the impact is measured as the number of noise molecules expected to be observed at a passive receiver. A unifying model for noise, multiuser interference, and intersymbol interference is presented, where, under certain circumstances, interference can be approximated as a noise source that is emitting continuously. The model includes the presence of advection and molecule degradation. The time-varying and asymptotic impact is derived for a series of special cases, some of which facilitate closed-form solutions. Simulation results show the accuracy of the expressions derived for the impact of a continuously-emitting noise source, and show how approximating intersymbol interference as a noise source can simplify the calculation of the expected bit error probability of a weighted sum detector.
[ { "created": "Tue, 22 Oct 2013 14:22:23 GMT", "version": "v1" }, { "created": "Tue, 22 Apr 2014 00:31:21 GMT", "version": "v2" }, { "created": "Mon, 7 Jul 2014 17:52:01 GMT", "version": "v3" } ]
2015-11-20
[ [ "Noel", "Adam", "" ], [ "Cheung", "Karen C.", "" ], [ "Schober", "Robert", "" ] ]
This paper considers the impact of external noise sources, including interfering transmitters, on a diffusive molecular communication system, where the impact is measured as the number of noise molecules expected to be observed at a passive receiver. A unifying model for noise, multiuser interference, and intersymbol interference is presented, where, under certain circumstances, interference can be approximated as a noise source that is emitting continuously. The model includes the presence of advection and molecule degradation. The time-varying and asymptotic impact is derived for a series of special cases, some of which facilitate closed-form solutions. Simulation results show the accuracy of the expressions derived for the impact of a continuously-emitting noise source, and show how approximating intersymbol interference as a noise source can simplify the calculation of the expected bit error probability of a weighted sum detector.
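As a simplified special case of such a continuously-emitting noise source, the sketch below evaluates the textbook steady-state concentration of a point source under diffusion with first-order degradation, C(r) = q/(4*pi*D*r) * exp(-r*sqrt(k/D)), and multiplies by the receiver volume under a uniform-concentration assumption. Parameter values are hypothetical, and this is not the paper's full time-varying model (advection is also ignored).

```python
import math

def expected_noise_molecules(q, D, k, r, rho):
    """Expected molecule count inside a passive spherical receiver from a
    continuously emitting point source at steady state (textbook diffusion
    with first-order degradation; a simplified special case only)."""
    conc = q / (4 * math.pi * D * r) * math.exp(-r * math.sqrt(k / D))
    volume = 4.0 / 3.0 * math.pi * rho**3
    return conc * volume   # assumes concentration ~ uniform over the receiver

# Hypothetical parameters: q [molecules/s], D [m^2/s], k [1/s], distances [m]
print(expected_noise_molecules(q=1e4, D=1e-9, k=0.5, r=1e-6, rho=5e-8))
```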
2209.02431
Shuaitao Zhao
Shuaitao Zhao, Kun Liu, Yuhang Huang, Qian Bao, Dan Zeng, and Wu Liu
DPIT: Dual-Pipeline Integrated Transformer for Human Pose Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human pose estimation aims to localize the keypoints of all people in different scenes. Current approaches still face some challenges despite promising results. Existing top-down methods deal with each person individually, without modeling the interaction between different people or between a person and the scene they are situated in. Consequently, the performance of human detection degrades when serious occlusion happens. On the other hand, existing bottom-up methods consider all people at the same time and capture the global knowledge of the entire image. However, they are less accurate than the top-down methods due to scale variation. To address these problems, we propose a novel Dual-Pipeline Integrated Transformer (DPIT) that integrates top-down and bottom-up pipelines to explore the visual clues of different receptive fields and achieve their complementarity. Specifically, DPIT consists of two branches: the bottom-up branch deals with the whole image to capture the global visual information, while the top-down branch extracts the feature representation of local vision from the single-human bounding box. Then, the extracted feature representations from the bottom-up and top-down branches are fed into the transformer encoder to fuse the global and local knowledge interactively. Moreover, we define keypoint queries to explore both full-scene and single-human posture visual clues to realize the mutual complementarity of the two pipelines. To the best of our knowledge, this is one of the first works to integrate the bottom-up and top-down pipelines with transformers for human pose estimation. Extensive experiments on the COCO and MPII datasets demonstrate that our DPIT achieves performance comparable to the state-of-the-art methods.
[ { "created": "Fri, 2 Sep 2022 10:18:26 GMT", "version": "v1" } ]
2022-09-07
[ [ "Zhao", "Shuaitao", "" ], [ "Liu", "Kun", "" ], [ "Huang", "Yuhang", "" ], [ "Bao", "Qian", "" ], [ "Zeng", "Dan", "" ], [ "Liu", "Wu", "" ] ]
Human pose estimation aims to localize the keypoints of all people in different scenes. Current approaches still face some challenges despite promising results. Existing top-down methods deal with each person individually, without modeling the interaction between different people or between a person and the scene they are situated in. Consequently, the performance of human detection degrades when serious occlusion happens. On the other hand, existing bottom-up methods consider all people at the same time and capture the global knowledge of the entire image. However, they are less accurate than the top-down methods due to scale variation. To address these problems, we propose a novel Dual-Pipeline Integrated Transformer (DPIT) that integrates top-down and bottom-up pipelines to explore the visual clues of different receptive fields and achieve their complementarity. Specifically, DPIT consists of two branches: the bottom-up branch deals with the whole image to capture the global visual information, while the top-down branch extracts the feature representation of local vision from the single-human bounding box. Then, the extracted feature representations from the bottom-up and top-down branches are fed into the transformer encoder to fuse the global and local knowledge interactively. Moreover, we define keypoint queries to explore both full-scene and single-human posture visual clues to realize the mutual complementarity of the two pipelines. To the best of our knowledge, this is one of the first works to integrate the bottom-up and top-down pipelines with transformers for human pose estimation. Extensive experiments on the COCO and MPII datasets demonstrate that our DPIT achieves performance comparable to the state-of-the-art methods.
1802.05315
Dong Yin
Andr\'es Mu\~noz Medina, Sergei Vassilvitskii, Dong Yin
Online Learning for Non-Stationary A/B Tests
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rollout of new versions of a feature in modern applications is a manual multi-stage process, as the feature is released to ever larger groups of users while its performance is carefully monitored. This kind of A/B testing is ubiquitous, but suboptimal: the monitoring requires heavy human intervention, is not guaranteed to capture consistent but short-term fluctuations in performance, and is inefficient, as better versions take a long time to reach the full population. In this work we formulate this question as one of expert learning, and give a new algorithm, Follow-The-Best-Interval (FTBI), that works in dynamic, non-stationary environments. Our approach is practical, simple, and efficient, and has rigorous guarantees on its performance. Finally, we perform a thorough evaluation on synthetic and real-world datasets and show that our approach outperforms current state-of-the-art methods.
[ { "created": "Wed, 14 Feb 2018 20:42:51 GMT", "version": "v1" }, { "created": "Sun, 27 May 2018 06:42:39 GMT", "version": "v2" } ]
2018-05-29
[ [ "Medina", "Andrés Muñoz", "" ], [ "Vassilvitskii", "Sergei", "" ], [ "Yin", "Dong", "" ] ]
The rollout of new versions of a feature in modern applications is a manual multi-stage process, as the feature is released to ever larger groups of users while its performance is carefully monitored. This kind of A/B testing is ubiquitous, but suboptimal: the monitoring requires heavy human intervention, is not guaranteed to capture consistent but short-term fluctuations in performance, and is inefficient, as better versions take a long time to reach the full population. In this work we formulate this question as one of expert learning, and give a new algorithm, Follow-The-Best-Interval (FTBI), that works in dynamic, non-stationary environments. Our approach is practical, simple, and efficient, and has rigorous guarantees on its performance. Finally, we perform a thorough evaluation on synthetic and real-world datasets and show that our approach outperforms current state-of-the-art methods.
2203.13349
Rawal Khirodkar
Rawal Khirodkar, Shashank Tripathi, Kris Kitani
Occluded Human Mesh Recovery
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Top-down methods for monocular human mesh recovery have two stages: (1) detect human bounding boxes; (2) treat each bounding box as an independent single-human mesh recovery task. Unfortunately, the single-human assumption does not hold in images with multi-human occlusion and crowding. Consequently, top-down methods have difficulties in recovering accurate 3D human meshes under severe person-person occlusion. To address this, we present Occluded Human Mesh Recovery (OCHMR) - a novel top-down mesh recovery approach that incorporates image spatial context to overcome the limitations of the single-human assumption. The approach is conceptually simple and can be applied to any existing top-down architecture. Along with the input image, we condition the top-down model on spatial context from the image in the form of body-center heatmaps. To reason from the predicted body centermaps, we introduce Contextual Normalization (CoNorm) blocks to adaptively modulate intermediate features of the top-down model. The contextual conditioning helps our model disambiguate between two severely overlapping human bounding-boxes, making it robust to multi-person occlusion. Compared with state-of-the-art methods, OCHMR achieves superior performance on challenging multi-person benchmarks like 3DPW, CrowdPose and OCHuman. Specifically, our proposed contextual reasoning architecture applied to the SPIN model with ResNet-50 backbone results in 75.2 PMPJPE on 3DPW-PC, 23.6 AP on CrowdPose and 37.7 AP on OCHuman datasets, a significant improvement of 6.9 mm, 6.4 AP and 20.8 AP respectively over the baseline. Code and models will be released.
[ { "created": "Thu, 24 Mar 2022 21:39:20 GMT", "version": "v1" } ]
2022-03-28
[ [ "Khirodkar", "Rawal", "" ], [ "Tripathi", "Shashank", "" ], [ "Kitani", "Kris", "" ] ]
Top-down methods for monocular human mesh recovery have two stages: (1) detect human bounding boxes; (2) treat each bounding box as an independent single-human mesh recovery task. Unfortunately, the single-human assumption does not hold in images with multi-human occlusion and crowding. Consequently, top-down methods have difficulties in recovering accurate 3D human meshes under severe person-person occlusion. To address this, we present Occluded Human Mesh Recovery (OCHMR) - a novel top-down mesh recovery approach that incorporates image spatial context to overcome the limitations of the single-human assumption. The approach is conceptually simple and can be applied to any existing top-down architecture. Along with the input image, we condition the top-down model on spatial context from the image in the form of body-center heatmaps. To reason from the predicted body centermaps, we introduce Contextual Normalization (CoNorm) blocks to adaptively modulate intermediate features of the top-down model. The contextual conditioning helps our model disambiguate between two severely overlapping human bounding-boxes, making it robust to multi-person occlusion. Compared with state-of-the-art methods, OCHMR achieves superior performance on challenging multi-person benchmarks like 3DPW, CrowdPose and OCHuman. Specifically, our proposed contextual reasoning architecture applied to the SPIN model with ResNet-50 backbone results in 75.2 PMPJPE on 3DPW-PC, 23.6 AP on CrowdPose and 37.7 AP on OCHuman datasets, a significant improvement of 6.9 mm, 6.4 AP and 20.8 AP respectively over the baseline. Code and models will be released.
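One plausible reading of conditioning intermediate features on body-center heatmaps is a FiLM-style modulation block, sketched below in PyTorch; the actual CoNorm design in the paper may differ, and all sizes here are illustrative.

```python
import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    """FiLM-style block: predict per-channel scale/shift from a context map
    (e.g. a body-center heatmap) and modulate backbone features. A generic
    sketch of contextual conditioning, not the exact CoNorm design."""
    def __init__(self, channels, ctx_channels=1):
        super().__init__()
        self.to_gamma_beta = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ctx_channels, 2 * channels))

    def forward(self, feats, ctx):
        gamma, beta = self.to_gamma_beta(ctx).chunk(2, dim=1)
        return feats * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]

block = ConditionalModulation(channels=64)
feats = torch.randn(2, 64, 32, 32)       # backbone features
centermap = torch.rand(2, 1, 32, 32)     # body-center heatmap
print(block(feats, centermap).shape)     # torch.Size([2, 64, 32, 32])
```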
2402.06697
Navid Aftabi
Navid Aftabi and Nima Moradi and Fatemeh Mahroo
Feed-Forward Neural Networks as a Mixed-Integer Program
null
null
null
null
cs.LG cs.AI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are widely studied in various applications. A DNN consists of layers of neurons that compute affine combinations, apply nonlinear operations, and produce corresponding activations. The rectified linear unit (ReLU) is a typical nonlinear operator, outputting the max of its input and zero. In scenarios like max pooling, where multiple input values are involved, a fixed-parameter DNN can be modeled as a mixed-integer program (MIP). This formulation, with continuous variables representing unit outputs and binary variables for ReLU activation, finds applications across diverse domains. This study explores the formulation of trained ReLU neurons as MIP and applies MIP models for training neural networks (NNs). Specifically, it investigates interactions between MIP techniques and various NN architectures, including binary DNNs (employing step activation functions) and binarized DNNs (with weights and activations limited to $-1,0,+1$). The research focuses on training and evaluating proposed approaches through experiments on handwritten digit classification models. The comparative study assesses the performance of trained ReLU NNs, shedding light on the effectiveness of MIP formulations in enhancing training processes for NNs.
[ { "created": "Fri, 9 Feb 2024 02:23:37 GMT", "version": "v1" } ]
2024-02-13
[ [ "Aftabi", "Navid", "" ], [ "Moradi", "Nima", "" ], [ "Mahroo", "Fatemeh", "" ] ]
Deep neural networks (DNNs) are widely studied in various applications. A DNN consists of layers of neurons that compute affine combinations, apply nonlinear operations, and produce corresponding activations. The rectified linear unit (ReLU) is a typical nonlinear operator, outputting the max of its input and zero. In scenarios like max pooling, where multiple input values are involved, a fixed-parameter DNN can be modeled as a mixed-integer program (MIP). This formulation, with continuous variables representing unit outputs and binary variables for ReLU activation, finds applications across diverse domains. This study explores the formulation of trained ReLU neurons as MIP and applies MIP models for training neural networks (NNs). Specifically, it investigates interactions between MIP techniques and various NN architectures, including binary DNNs (employing step activation functions) and binarized DNNs (with weights and activations limited to $-1,0,+1$). The research focuses on training and evaluating proposed approaches through experiments on handwritten digit classification models. The comparative study assesses the performance of trained ReLU NNs, shedding light on the effectiveness of MIP formulations in enhancing training processes for NNs.
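The big-M encoding of a single fixed-parameter ReLU unit mentioned above can be written down directly; the PuLP sketch below encodes y = max(w.x + b, 0) with one binary activation variable. The weights, bounds, and big-M value are illustrative; in practice M is derived from the input box bounds, and tightening it is what makes such formulations solvable at scale.

```python
import pulp

w, b, M = [2.0, -1.0], 0.5, 100.0
prob = pulp.LpProblem("relu_unit", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", lowBound=-10, upBound=10) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")        # z = 1 iff the unit is active

pre = pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b
prob += y >= pre                              # y >= w.x + b
prob += y <= pre + M * (1 - z)                # tight when the unit is active
prob += y <= M * z                            # forces y = 0 when inactive
prob += x[0] == 1.0                           # pin the input for this demo
prob += x[1] == 0.25
prob += y                                     # objective: minimize y
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(y))                          # 2*1 - 0.25 + 0.5 = 2.25
```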
2406.14102
Mert Cihangiroglu
Dincy R. Arikkat, Mert Cihangiroglu, Mauro Conti, Rafidha Rehiman K. A., Serena Nicolazzo, Antonino Nocera, Vinod P
SeCTIS: A Framework to Secure CTI Sharing
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The rise of IT-dependent operations in modern organizations has heightened their vulnerability to cyberattacks. As a growing number of organizations include smart, interconnected devices in their systems to automate their processes, the attack surface becomes much bigger, and the complexity and frequency of attacks pose a significant threat. Consequently, organizations have been compelled to seek innovative approaches to mitigate the menaces inherent in their infrastructure. In response, considerable research efforts have been directed towards creating effective solutions for sharing Cyber Threat Intelligence (CTI). Current information-sharing methods lack privacy safeguards, leaving organizations vulnerable to leaks of both proprietary and confidential data. To tackle this problem, we designed a novel framework called SeCTIS (Secure Cyber Threat Intelligence Sharing), integrating Swarm Learning and Blockchain technologies to enable businesses to collaborate, preserving the privacy of their CTI data. Moreover, our approach provides a way to assess the data and model quality, and the trustworthiness of all the participants leveraging some validators through Zero Knowledge Proofs. An extensive experimental campaign demonstrates our framework's correctness and performance, and the detailed attack model discusses its robustness against attacks in the context of data and model quality.
[ { "created": "Thu, 20 Jun 2024 08:34:50 GMT", "version": "v1" } ]
2024-06-21
[ [ "Arikkat", "Dincy R.", "" ], [ "Cihangiroglu", "Mert", "" ], [ "Conti", "Mauro", "" ], [ "A.", "Rafidha Rehiman K.", "" ], [ "Nicolazzo", "Serena", "" ], [ "Nocera", "Antonino", "" ], [ "P", "Vinod", "" ] ]
The rise of IT-dependent operations in modern organizations has heightened their vulnerability to cyberattacks. As a growing number of organizations include smart, interconnected devices in their systems to automate their processes, the attack surface becomes much bigger, and the complexity and frequency of attacks pose a significant threat. Consequently, organizations have been compelled to seek innovative approaches to mitigate the menaces inherent in their infrastructure. In response, considerable research efforts have been directed towards creating effective solutions for sharing Cyber Threat Intelligence (CTI). Current information-sharing methods lack privacy safeguards, leaving organizations vulnerable to leaks of both proprietary and confidential data. To tackle this problem, we designed a novel framework called SeCTIS (Secure Cyber Threat Intelligence Sharing), integrating Swarm Learning and Blockchain technologies to enable businesses to collaborate, preserving the privacy of their CTI data. Moreover, our approach provides a way to assess the data and model quality, and the trustworthiness of all the participants leveraging some validators through Zero Knowledge Proofs. An extensive experimental campaign demonstrates our framework's correctness and performance, and the detailed attack model discusses its robustness against attacks in the context of data and model quality.
2003.02389
Alex Renda
Alex Renda, Jonathan Frankle, Michael Carbin
Comparing Rewinding and Fine-tuning in Neural Network Pruning
null
ICLR 2020
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al., (2019)), rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques.
[ { "created": "Thu, 5 Mar 2020 00:53:18 GMT", "version": "v1" } ]
2020-03-06
[ [ "Renda", "Alex", "" ], [ "Frankle", "Jonathan", "" ], [ "Carbin", "Michael", "" ] ]
Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al., (2019)), rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques.
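A minimal PyTorch sketch of one prune-then-retrain round in the spirit of learning rate rewinding: magnitude-prune with a fixed mask, then retrain the surviving weights from their final values while replaying the original learning-rate schedule (fine-tuning would instead use a small constant rate). Toy model and data; not the paper's experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)

with torch.no_grad():                            # prune the smallest 50% by magnitude
    w = model.weight
    thresh = w.abs().flatten().kthvalue(w.numel() // 2).values
    mask = (w.abs() > thresh).float()
    w.mul_(mask)

x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
for step in range(30):                           # retrain surviving weights...
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()
    sched.step()                                 # ...on the ORIGINAL LR schedule
    with torch.no_grad():
        model.weight.mul_(mask)                  # keep pruned weights at zero
```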
2005.00527
Ruosong Wang
Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade
Is Long Horizon Reinforcement Learning More Difficult Than Short Horizon Reinforcement Learning?
null
null
null
null
cs.LG cs.AI math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning to plan for long horizons is a central challenge in episodic reinforcement learning problems. A fundamental question is to understand how the difficulty of the problem scales as the horizon increases. Here the natural measure of sample complexity is a normalized one: we are interested in the number of episodes it takes to provably discover a policy whose value is $\varepsilon$-near to that of the optimal value, where the value is measured by the normalized cumulative reward in each episode. In a COLT 2018 open problem, Jiang and Agarwal conjectured that, for tabular, episodic reinforcement learning problems, there exists a sample complexity lower bound which exhibits a polynomial dependence on the horizon -- a conjecture which is consistent with all known sample complexity upper bounds. This work refutes this conjecture, proving that tabular, episodic reinforcement learning is possible with a sample complexity that scales only logarithmically with the planning horizon. In other words, when the values are appropriately normalized (to lie in the unit interval), this result shows that long horizon RL is no more difficult than short horizon RL, at least in a minimax sense. Our analysis introduces two ideas: (i) the construction of an $\varepsilon$-net for optimal policies whose log-covering number scales only logarithmically with the planning horizon, and (ii) the Online Trajectory Synthesis algorithm, which adaptively evaluates all policies in a given policy class using sample complexity that scales with the log-covering number of the given policy class. Both may be of independent interest.
[ { "created": "Fri, 1 May 2020 17:56:38 GMT", "version": "v1" }, { "created": "Thu, 9 Jul 2020 16:20:06 GMT", "version": "v2" } ]
2020-07-10
[ [ "Wang", "Ruosong", "" ], [ "Du", "Simon S.", "" ], [ "Yang", "Lin F.", "" ], [ "Kakade", "Sham M.", "" ] ]
Learning to plan for long horizons is a central challenge in episodic reinforcement learning problems. A fundamental question is to understand how the difficulty of the problem scales as the horizon increases. Here the natural measure of sample complexity is a normalized one: we are interested in the number of episodes it takes to provably discover a policy whose value is $\varepsilon$-near to that of the optimal value, where the value is measured by the normalized cumulative reward in each episode. In a COLT 2018 open problem, Jiang and Agarwal conjectured that, for tabular, episodic reinforcement learning problems, there exists a sample complexity lower bound which exhibits a polynomial dependence on the horizon -- a conjecture which is consistent with all known sample complexity upper bounds. This work refutes this conjecture, proving that tabular, episodic reinforcement learning is possible with a sample complexity that scales only logarithmically with the planning horizon. In other words, when the values are appropriately normalized (to lie in the unit interval), this result shows that long horizon RL is no more difficult than short horizon RL, at least in a minimax sense. Our analysis introduces two ideas: (i) the construction of an $\varepsilon$-net for optimal policies whose log-covering number scales only logarithmically with the planning horizon, and (ii) the Online Trajectory Synthesis algorithm, which adaptively evaluates all policies in a given policy class using sample complexity that scales with the log-covering number of the given policy class. Both may be of independent interest.
1906.00872
Shizhe Chen
Shizhe Chen, Qin Jin and Jianlong Fu
From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots
Accepted by IJCAI 2019
null
null
null
cs.CL cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation models have suffered from the lack of large-scale parallel corpora. In contrast, we humans can learn multi-lingual translations even without parallel texts by relating our languages to the external world. To mimic such human learning behavior, we employ images as pivots to enable zero-resource translation learning. However, a picture tells a thousand words, which makes multi-lingual sentences pivoted by the same image noisy as mutual translations and thus hinders translation model learning. In this work, we propose a progressive learning approach for image-pivoted zero-resource machine translation. Since words are less diverse when grounded in the image, we first learn word-level translation with image pivots, and then progress to learn sentence-level translation by utilizing the learned word translation to suppress noise in image-pivoted multi-lingual sentences. Experimental results on two widely used image-pivot translation datasets, IAPR-TC12 and Multi30k, show that the proposed approach significantly outperforms other state-of-the-art methods.
[ { "created": "Mon, 3 Jun 2019 15:28:48 GMT", "version": "v1" } ]
2019-06-04
[ [ "Chen", "Shizhe", "" ], [ "Jin", "Qin", "" ], [ "Fu", "Jianlong", "" ] ]
Neural machine translation models have suffered from the lack of large-scale parallel corpora. In contrast, we humans can learn multi-lingual translations even without parallel texts by relating our languages to the external world. To mimic such human learning behavior, we employ images as pivots to enable zero-resource translation learning. However, a picture tells a thousand words, which makes multi-lingual sentences pivoted by the same image noisy as mutual translations and thus hinders translation model learning. In this work, we propose a progressive learning approach for image-pivoted zero-resource machine translation. Since words are less diverse when grounded in the image, we first learn word-level translation with image pivots, and then progress to learn sentence-level translation by utilizing the learned word translation to suppress noise in image-pivoted multi-lingual sentences. Experimental results on two widely used image-pivot translation datasets, IAPR-TC12 and Multi30k, show that the proposed approach significantly outperforms other state-of-the-art methods.
1703.07047
Krzysztof J. Geras
Krzysztof J. Geras and Stacey Wolfson and Yiqiu Shen and Nan Wu and S. Gene Kim and Eric Kim and Laura Heacock and Ujas Parikh and Linda Moy and Kyunghyun Cho
High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in deep learning for natural images have prompted a surge of interest in applying similar techniques to medical images. The majority of the initial attempts focused on replacing the input of a deep convolutional neural network with a medical image, which does not take into consideration the fundamental differences between these two types of images. Specifically, fine details are necessary for detection in medical images, unlike in natural images, where coarse structures matter most. This difference makes it inadequate to use the existing network architectures developed for natural images, because they work on heavily downscaled images to reduce the memory requirements, hiding details necessary to make accurate predictions. Additionally, a single exam in medical imaging often comes with a set of views which must be fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of high-resolution medical images. We evaluate it on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 886,000 images. We focus on investigating the impact of the training set size and image size on the prediction accuracy. Our results highlight that performance increases with the size of the training set, and that the best performance can only be achieved using the original resolution. In the reader study, performed on a random subset of the test set, we confirmed the efficacy of our model, which achieved performance comparable to that of a committee of radiologists when presented with the same data.
[ { "created": "Tue, 21 Mar 2017 04:11:13 GMT", "version": "v1" }, { "created": "Mon, 6 Nov 2017 06:39:33 GMT", "version": "v2" }, { "created": "Thu, 28 Jun 2018 01:21:51 GMT", "version": "v3" } ]
2018-06-29
[ [ "Geras", "Krzysztof J.", "" ], [ "Wolfson", "Stacey", "" ], [ "Shen", "Yiqiu", "" ], [ "Wu", "Nan", "" ], [ "Kim", "S. Gene", "" ], [ "Kim", "Eric", "" ], [ "Heacock", "Laura", "" ], [ "Parikh", "Ujas", "" ], [ "Moy", "Linda", "" ], [ "Cho", "Kyunghyun", "" ] ]
Advances in deep learning for natural images have prompted a surge of interest in applying similar techniques to medical images. The majority of the initial attempts focused on replacing the input of a deep convolutional neural network with a medical image, which does not take into consideration the fundamental differences between these two types of images. Specifically, fine details are necessary for detection in medical images, unlike in natural images, where coarse structures matter most. This difference makes it inadequate to use the existing network architectures developed for natural images, because they work on heavily downscaled images to reduce the memory requirements, hiding details necessary to make accurate predictions. Additionally, a single exam in medical imaging often comes with a set of views which must be fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of high-resolution medical images. We evaluate it on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 886,000 images. We focus on investigating the impact of the training set size and image size on the prediction accuracy. Our results highlight that performance increases with the size of the training set, and that the best performance can only be achieved using the original resolution. In the reader study, performed on a random subset of the test set, we confirmed the efficacy of our model, which achieved performance comparable to that of a committee of radiologists when presented with the same data.
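A structural sketch of the multi-view idea in PyTorch: one small convolutional column per view, fused by concatenation before the classifier head. The paper's columns operate on far larger, full-resolution mammograms; everything here (sizes, four views, three classes) is purely illustrative.

```python
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    """Toy multi-view classifier: one conv column per view, features fused
    by concatenation. A structural sketch only, not the paper's network."""
    def __init__(self, n_views=4, n_classes=3):
        super().__init__()
        self.columns = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_views)])
        self.head = nn.Linear(16 * n_views, n_classes)

    def forward(self, views):                   # list of (B, 1, H, W) tensors
        feats = [col(v) for col, v in zip(self.columns, views)]
        return self.head(torch.cat(feats, dim=1))

net = MultiViewNet()
views = [torch.randn(2, 1, 128, 128) for _ in range(4)]  # e.g. 4 mammogram views
print(net(views).shape)                                  # torch.Size([2, 3])
```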
1602.02847
Hamed Azami
Hamed Azami, Alberto Fernandez, Javier Escudero
Refined Multiscale Fuzzy Entropy based on Standard Deviation for Biomedical Signal Analysis
null
null
10.1007/s11517-017-1647-5
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of fluctuations in the local mean value of biomedical time series. Recent developments in the field have tried to improve the MSE by reducing its variability in large scale factors. On the other hand, there has been recent interest in using other statistical moments than the mean, i.e. variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFE{\sigma}) to quantify the dynamical properties of spread over multiple time scales. We demonstrate the dependency of the RCMFE{\sigma}, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. We also investigate the complementarity of using the standard deviation instead of the mean in the coarse-graining process using magnetoencephalograms in Alzheimer disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicate that RCMFE{\sigma} offers complementary information to that revealed by classical coarse-graining approaches and that it has superior performance to distinguish different types of physiological activity.
[ { "created": "Tue, 9 Feb 2016 03:17:07 GMT", "version": "v1" }, { "created": "Wed, 3 May 2017 15:48:52 GMT", "version": "v2" } ]
2017-05-04
[ [ "Azami", "Hamed", "" ], [ "Fernandez", "Alberto", "" ], [ "Escudero", "Javier", "" ] ]
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of fluctuations in the local mean value of biomedical time series. Recent developments in the field have tried to improve the MSE by reducing its variability in large scale factors. On the other hand, there has been recent interest in using other statistical moments than the mean, i.e. variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFE{\sigma}) to quantify the dynamical properties of spread over multiple time scales. We demonstrate the dependency of the RCMFE{\sigma}, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. We also investigate the complementarity of using the standard deviation instead of the mean in the coarse-graining process using magnetoencephalograms in Alzheimer disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicate that RCMFE{\sigma} offers complementary information to that revealed by classical coarse-graining approaches and that it has superior performance to distinguish different types of physiological activity.
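The key departure from mean-based coarse-graining is easy to state in code: at each scale, replace the mean of each non-overlapping window with its standard deviation. The numpy sketch below shows only this sigma coarse-graining step; the fuzzy-entropy computation and the "refined composite" averaging over shifted starting points are omitted.

```python
import numpy as np

def coarse_grain_sigma(x, scale):
    """Coarse-grain a series at a given scale using the standard deviation
    of non-overlapping windows (the sigma-based step behind RCMFE_sigma),
    instead of the usual window mean."""
    n = (len(x) // scale) * scale             # drop the ragged tail
    windows = np.asarray(x[:n]).reshape(-1, scale)
    return windows.std(axis=1)

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)                # hypothetical biomedical series
print(coarse_grain_sigma(signal, scale=5).shape)   # (200,)
```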
2010.15206
Qiong Wu
Qiong Wu, Zhenming Liu
Rosella: A Self-Driving Distributed Scheduler for Heterogeneous Clusters
8 pages
International Conference on Mobility, Sensing and Networking, 2021
null
null
cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale interactive web services and advanced AI applications make sophisticated decisions in real-time, based on executing a massive amount of computation tasks on thousands of servers. Task schedulers, which often operate in heterogeneous and volatile environments, require high throughput, i.e., scheduling millions of tasks per second, and low latency, i.e., incurring minimal scheduling delays for millisecond-level tasks. Scheduling is further complicated by other users' workloads in a shared system, other background activities, and the diverse hardware configurations inside datacenters. We present Rosella, a new self-driving, distributed approach for task scheduling in heterogeneous clusters. Rosella automatically learns the compute environment and adjusts its scheduling policy in real-time. The solution provides high throughput and low latency simultaneously because it runs in parallel on multiple machines with minimum coordination and only performs simple operations for each scheduling decision. Our learning module monitors total system load and uses the information to dynamically determine optimal estimation strategy for the backends' compute-power. Rosella generalizes power-of-two-choice algorithms to handle heterogeneous workers, reducing the max queue length of O(log n) obtained by prior algorithms to O(log log n). We evaluate Rosella with a variety of workloads on a 32-node AWS cluster. Experimental results show that Rosella significantly reduces task response time, and adapts to environment changes quickly.
[ { "created": "Wed, 28 Oct 2020 20:12:29 GMT", "version": "v1" }, { "created": "Tue, 10 Nov 2020 00:43:50 GMT", "version": "v2" }, { "created": "Wed, 27 Oct 2021 00:12:54 GMT", "version": "v3" } ]
2021-10-28
[ [ "Wu", "Qiong", "" ], [ "Liu", "Zhenming", "" ] ]
Large-scale interactive web services and advanced AI applications make sophisticated decisions in real-time, based on executing a massive amount of computation tasks on thousands of servers. Task schedulers, which often operate in heterogeneous and volatile environments, require high throughput, i.e., scheduling millions of tasks per second, and low latency, i.e., incurring minimal scheduling delays for millisecond-level tasks. Scheduling is further complicated by other users' workloads in a shared system, other background activities, and the diverse hardware configurations inside datacenters. We present Rosella, a new self-driving, distributed approach for task scheduling in heterogeneous clusters. Rosella automatically learns the compute environment and adjusts its scheduling policy in real-time. The solution provides high throughput and low latency simultaneously because it runs in parallel on multiple machines with minimum coordination and only performs simple operations for each scheduling decision. Our learning module monitors total system load and uses the information to dynamically determine the optimal estimation strategy for the backends' compute power. Rosella generalizes power-of-two-choices algorithms to handle heterogeneous workers, reducing the max queue length of O(log n) obtained by prior algorithms to O(log log n). We evaluate Rosella with a variety of workloads on a 32-node AWS cluster. Experimental results show that Rosella significantly reduces task response time, and adapts to environment changes quickly.
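The queue-length claim above builds on power-of-two-choices load balancing. A minimal sketch of a heterogeneity-aware variant follows; the `speeds` array stands in for Rosella's learned compute-power estimates, which the system derives online rather than assuming:

```python
import random

def schedule(queue_lengths, speeds):
    """Power-of-two-choices with heterogeneous workers: probe two random
    workers and send the task to the one with the smaller expected drain
    time (queue length divided by compute speed). Returns a worker index."""
    i, j = random.sample(range(len(queue_lengths)), 2)
    return i if queue_lengths[i] / speeds[i] <= queue_lengths[j] / speeds[j] else j
```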
1509.00309
Edward Valeev
Justus A. Calvin, Cannada A. Lewis, and Edward F. Valeev
Scalable Task-Based Algorithm for Multiplication of Block-Rank-Sparse Matrices
8 pages, 6 figures, accepted to IA3 2015. arXiv admin note: text overlap with arXiv:1504.05046
null
10.1145/2833179.2833186
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A task-based formulation of Scalable Universal Matrix Multiplication Algorithm (SUMMA), a popular algorithm for matrix multiplication (MM), is applied to the multiplication of hierarchy-free, rank-structured matrices that appear in the domain of quantum chemistry (QC). The novel features of our formulation are: (1) concurrent scheduling of multiple SUMMA iterations, and (2) fine-grained task-based composition. These features make it tolerant of the load imbalance due to the irregular matrix structure and eliminate all artifactual sources of global synchronization. Scalability of iterative computation of square-root inverse of block-rank-sparse QC matrices is demonstrated; for full-rank (dense) matrices the performance of our SUMMA formulation usually exceeds that of the state-of-the-art dense MM implementations (ScaLAPACK and Cyclops Tensor Framework).
[ { "created": "Tue, 1 Sep 2015 14:22:38 GMT", "version": "v1" }, { "created": "Fri, 9 Oct 2015 21:29:08 GMT", "version": "v2" } ]
2015-10-13
[ [ "Calvin", "Justus A.", "" ], [ "Lewis", "Cannada A.", "" ], [ "Valeev", "Edward F.", "" ] ]
A task-based formulation of Scalable Universal Matrix Multiplication Algorithm (SUMMA), a popular algorithm for matrix multiplication (MM), is applied to the multiplication of hierarchy-free, rank-structured matrices that appear in the domain of quantum chemistry (QC). The novel features of our formulation are: (1) concurrent scheduling of multiple SUMMA iterations, and (2) fine-grained task-based composition. These features make it tolerant of the load imbalance due to the irregular matrix structure and eliminate all artifactual sources of global synchronization. Scalability of iterative computation of square-root inverse of block-rank-sparse QC matrices is demonstrated; for full-rank (dense) matrices the performance of our SUMMA formulation usually exceeds that of the state-of-the-art dense MM implementations (ScaLAPACK and Cyclops Tensor Framework).
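For reference, a serial illustration of the panel structure that SUMMA distributes; in the parallel algorithm each column panel of A and matching row panel of B is broadcast along process rows and columns, and the paper's contribution is scheduling several such iterations concurrently as fine-grained tasks (not shown here):

```python
import numpy as np

def summa_serial(A, B, panel=64):
    """C = A @ B accumulated as a sum of outer products of a column panel
    of A with the matching row panel of B -- the loop SUMMA parallelizes."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for p in range(0, k, panel):
        C += A[:, p:p + panel] @ B[p:p + panel, :]
    return C
```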
1608.08171
Richard Wang
Yao Sui, Guanghui Wang, Yafei Tang, Li Zhang
Tracking Completion
Published at ECCV 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental component of modern trackers is an online learned tracking model, which is typically modeled either globally or locally. The two kinds of models perform differently in terms of effectiveness and robustness under different challenging situations. This work exploits the advantages of both models. A subspace model, from a global perspective, is learned from previously obtained targets via rank-minimization to address the tracking, and a pixel-level local observation is leveraged simultaneously, from a local point of view, to augment the subspace model. A matrix completion method is employed to integrate the two models. Unlike previous tracking methods, which locate the target among all fully observed target candidates, the proposed approach first estimates an expected target via the matrix completion through partially observed target candidates, and then, identifies the target according to the estimation accuracy with respect to the target candidates. Specifically, the tracking is formulated as a problem of target appearance estimation. Extensive experiments on various challenging video sequences verify the effectiveness of the proposed approach and demonstrate that the proposed tracker outperforms other popular state-of-the-art trackers.
[ { "created": "Mon, 29 Aug 2016 18:33:23 GMT", "version": "v1" }, { "created": "Fri, 9 Sep 2016 01:49:46 GMT", "version": "v2" } ]
2016-09-12
[ [ "Sui", "Yao", "" ], [ "Wang", "Guanghui", "" ], [ "Tang", "Yafei", "" ], [ "Zhang", "Li", "" ] ]
A fundamental component of modern trackers is an online learned tracking model, which is typically modeled either globally or locally. The two kinds of models perform differently in terms of effectiveness and robustness under different challenging situations. This work exploits the advantages of both models. A subspace model, from a global perspective, is learned from previously obtained targets via rank-minimization to address the tracking, and a pixel-level local observation is leveraged simultaneously, from a local point of view, to augment the subspace model. A matrix completion method is employed to integrate the two models. Unlike previous tracking methods, which locate the target among all fully observed target candidates, the proposed approach first estimates an expected target via the matrix completion through partially observed target candidates, and then, identifies the target according to the estimation accuracy with respect to the target candidates. Specifically, the tracking is formulated as a problem of target appearance estimation. Extensive experiments on various challenging video sequences verify the effectiveness of the proposed approach and demonstrate that the proposed tracker outperforms other popular state-of-the-art trackers.
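The matrix completion step can be illustrated with a generic soft-thresholded SVD iteration; this is a standard low-rank completion routine sketched under our own assumptions, not the tracker's specific formulation:

```python
import numpy as np

def soft_impute(M, mask, tau=1.0, iters=100):
    """Fill the missing entries of M (mask is True where observed) by
    alternating between re-imposing the observed entries and shrinking
    the singular values, which promotes a low-rank estimate."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        X[mask] = M[mask]                        # keep observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold spectrum
    return X
```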
0812.2405
SeyyedMajid Valiollahzadeh
SeyyedMajid Valiollahzadeh, Mohammad Nazari, Massoud Babaie-Zadeh, Christian Jutten
A New Trend in Optimization on Multi Overcomplete Dictionary toward Inpainting
4 pages
null
null
ICASSP 2009
cs.MM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, great attention has been directed toward overcomplete dictionaries and the sparse representations they can provide. In a wide variety of signal processing problems, sparsity serves as a crucial property leading to high performance. Inpainting, the process of reconstructing lost or deteriorated parts of images or videos, is an interesting application which can be handled by a suitable decomposition of an image through a combination of overcomplete dictionaries. This paper presents a novel technique for such a decomposition and investigates it through the inpainting of images. Simulations are presented to demonstrate the validity of our approach.
[ { "created": "Fri, 12 Dec 2008 15:56:42 GMT", "version": "v1" } ]
2008-12-15
[ [ "Valiollahzadeh", "SeyyedMajid", "" ], [ "Nazari", "Mohammad", "" ], [ "Babaie-Zadeh", "Massoud", "" ], [ "Jutten", "Christian", "" ] ]
Recently, great attention has been directed toward overcomplete dictionaries and the sparse representations they can provide. In a wide variety of signal processing problems, sparsity serves as a crucial property leading to high performance. Inpainting, the process of reconstructing lost or deteriorated parts of images or videos, is an interesting application which can be handled by a suitable decomposition of an image through a combination of overcomplete dictionaries. This paper presents a novel technique for such a decomposition and investigates it through the inpainting of images. Simulations are presented to demonstrate the validity of our approach.
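A hedged sketch of sparse-coding inpainting in the spirit of this abstract: recover a sparse code over a dictionary D from the observed pixels only (here via plain ISTA), then synthesize the missing ones. The paper's multi-dictionary decomposition is not reproduced; D, the mask, and all parameters below are placeholders:

```python
import numpy as np

def ista_inpaint(y, D, mask, lam=0.1, iters=200):
    """Minimize 0.5*||mask*(y - D x)||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA), then return the full reconstruction D @ x.
    `mask` is a 0/1 vector marking the observed pixels of y."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = -D.T @ (mask * (y - D @ x))
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return D @ x
```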
2009.09675
Robby Neven
Robby Neven, Marian Verhelst, Tinne Tuytelaars and Toon Goedem\'e
Feed-Forward On-Edge Fine-tuning Using Static Synthetic Gradient Modules
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training deep learning models on embedded devices is typically avoided since this requires more memory, computation and power than inference. In this work, we focus on lowering the amount of memory needed for storing all activations, which are required during the backward pass to compute the gradients. Instead, during the forward pass, static Synthetic Gradient Modules (SGMs) predict gradients for each layer. This allows training the model in a feed-forward manner without having to store all activations. We tested our method on a robot grasping scenario where a robot needs to learn to grasp new objects given only a single demonstration. By first training the SGMs in a meta-learning manner on a set of common objects, during fine-tuning, the SGMs provided the model with accurate gradients to successfully learn to grasp new objects. We have shown that our method achieves results comparable to standard backpropagation.
[ { "created": "Mon, 21 Sep 2020 08:27:01 GMT", "version": "v1" } ]
2020-09-22
[ [ "Neven", "Robby", "" ], [ "Verhelst", "Marian", "" ], [ "Tuytelaars", "Tinne", "" ], [ "Goedemé", "Toon", "" ] ]
Training deep learning models on embedded devices is typically avoided since this requires more memory, computation and power than inference. In this work, we focus on lowering the amount of memory needed for storing all activations, which are required during the backward pass to compute the gradients. Instead, during the forward pass, static Synthetic Gradient Modules (SGMs) predict gradients for each layer. This allows training the model in a feed-forward manner without having to store all activations. We tested our method on a robot grasping scenario where a robot needs to learn to grasp new objects given only a single demonstration. By first training the SGMs in a meta-learning manner on a set of common objects, during fine-tuning, the SGMs provided the model with accurate gradients to successfully learn to grasp new objects. We have shown that our method achieves results comparable to standard backpropagation.
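A minimal sketch of the idea, assuming a PyTorch setting: a small static module maps a layer's activations to a predicted gradient, so the update for that layer needs no stored downstream activations. The architecture, dimensions, and the meta-training that would make the module accurate are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class SyntheticGradientModule(nn.Module):
    """Predicts dLoss/dActivations from the activations themselves."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, h):
        return self.net(h)

# Usage sketch: update one layer from the predicted gradient, no backprop
# from downstream layers (the SGM is assumed frozen after meta-training).
layer, sgm = nn.Linear(32, 32), SyntheticGradientModule(32)
opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
h = layer(torch.randn(16, 32))
h.backward(sgm(h).detach())   # feed the predicted dL/dh into autograd locally
opt.step(); opt.zero_grad()
```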
1906.00139
Zhengyang Shen
Zhengyang Shen, Fran\c{c}ois-Xavier Vialard, Marc Niethammer
Region-specific Diffeomorphic Metric Mapping
Accepted by NeurIPS 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a region-specific diffeomorphic metric mapping (RDMM) registration approach. RDMM is non-parametric, estimating spatio-temporal velocity fields which parameterize the sought-for spatial transformation. Regularization of these velocity fields is necessary. However, while existing non-parametric registration approaches, e.g., the large displacement diffeomorphic metric mapping (LDDMM) model, use a fixed spatially-invariant regularization, our model advects a spatially-varying regularizer with the estimated velocity field, thereby naturally attaching a spatio-temporal regularizer to deforming objects. We explore a family of RDMM registration approaches: 1) a registration model where regions with separate regularizations are pre-defined (e.g., in an atlas space), 2) a registration model where a general spatially-varying regularizer is estimated, and 3) a registration model where the spatially-varying regularizer is obtained via an end-to-end trained deep learning (DL) model. We provide a variational derivation of RDMM, show that the model can assure diffeomorphic transformations in the continuum, and that LDDMM is a particular instance of RDMM. To evaluate RDMM performance, we experiment 1) on synthetic 2D data and 2) on two 3D datasets: knee magnetic resonance images (MRIs) of the Osteoarthritis Initiative (OAI) and computed tomography images (CT) of the lung. Results show that our framework achieves state-of-the-art image registration performance, while providing additional information via a learned spatio-temporal regularizer. Further, our deep learning approach allows for very fast RDMM and LDDMM estimations. Our code is open-sourced and available at https://github.com/uncbiag/registration.
[ { "created": "Sat, 1 Jun 2019 03:14:15 GMT", "version": "v1" }, { "created": "Sat, 9 Nov 2019 03:19:44 GMT", "version": "v2" } ]
2019-11-12
[ [ "Shen", "Zhengyang", "" ], [ "Vialard", "François-Xavier", "" ], [ "Niethammer", "Marc", "" ] ]
We introduce a region-specific diffeomorphic metric mapping (RDMM) registration approach. RDMM is non-parametric, estimating spatio-temporal velocity fields which parameterize the sought-for spatial transformation. Regularization of these velocity fields is necessary. However, while existing non-parametric registration approaches, e.g., the large displacement diffeomorphic metric mapping (LDDMM) model, use a fixed spatially-invariant regularization, our model advects a spatially-varying regularizer with the estimated velocity field, thereby naturally attaching a spatio-temporal regularizer to deforming objects. We explore a family of RDMM registration approaches: 1) a registration model where regions with separate regularizations are pre-defined (e.g., in an atlas space), 2) a registration model where a general spatially-varying regularizer is estimated, and 3) a registration model where the spatially-varying regularizer is obtained via an end-to-end trained deep learning (DL) model. We provide a variational derivation of RDMM, show that the model can assure diffeomorphic transformations in the continuum, and that LDDMM is a particular instance of RDMM. To evaluate RDMM performance, we experiment 1) on synthetic 2D data and 2) on two 3D datasets: knee magnetic resonance images (MRIs) of the Osteoarthritis Initiative (OAI) and computed tomography images (CT) of the lung. Results show that our framework achieves state-of-the-art image registration performance, while providing additional information via a learned spatio-temporal regularizer. Further, our deep learning approach allows for very fast RDMM and LDDMM estimations. Our code is open-sourced and available at https://github.com/uncbiag/registration.
1812.08869
Xiao Chen
Xiao Chen, Julian Cheng, Zaichen Zhang, Liang Wu, Jian Dang
Data-Rate Driven Transmission Strategy for Deep Learning Based Communication Systems
Published
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning (DL) based autoencoder is a promising architecture to implement end-to-end communication systems. One fundamental problem of such systems is how to increase the transmission rate. Two new schemes are proposed to address the limited data rate issue: adaptive transmission scheme and generalized data representation (GDR) scheme. In the first scheme, an adaptive transmission is designed to select the transmission vectors for maximizing the data rate under different channel conditions. The block error rate (BLER) of the first scheme is 80% lower than that of the conventional one-hot vector scheme. This implies that higher data rate can be achieved by the adaptive transmission scheme. In the second scheme, the GDR replaces the conventional one-hot representation. The GDR scheme can achieve higher data rate than the conventional one-hot vector scheme with comparable BLER performance. For example, when the vector size is eight, the proposed GDR scheme can double the data rate of the one-hot vector scheme. Besides, the joint scheme of the two proposed schemes can create further benefits. The effect of signal-to-noise ratio (SNR) is analyzed for these DL-based communication systems. Numerical results show that training the autoencoder using data sets with various SNR values can attain robust BLER performance under different channel conditions.
[ { "created": "Thu, 20 Dec 2018 22:19:25 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 02:59:16 GMT", "version": "v2" } ]
2020-04-30
[ [ "Chen", "Xiao", "" ], [ "Cheng", "Julian", "" ], [ "Zhang", "Zaichen", "" ], [ "Wu", "Liang", "" ], [ "Dang", "Jian", "" ] ]
Deep learning (DL) based autoencoder is a promising architecture to implement end-to-end communication systems. One fundamental problem of such systems is how to increase the transmission rate. Two new schemes are proposed to address the limited data rate issue: adaptive transmission scheme and generalized data representation (GDR) scheme. In the first scheme, an adaptive transmission is designed to select the transmission vectors for maximizing the data rate under different channel conditions. The block error rate (BLER) of the first scheme is 80% lower than that of the conventional one-hot vector scheme. This implies that higher data rate can be achieved by the adaptive transmission scheme. In the second scheme, the GDR replaces the conventional one-hot representation. The GDR scheme can achieve higher data rate than the conventional one-hot vector scheme with comparable BLER performance. For example, when the vector size is eight, the proposed GDR scheme can double the data rate of the one-hot vector scheme. Besides, the joint scheme of the two proposed schemes can create further benefits. The effect of signal-to-noise ratio (SNR) is analyzed for these DL-based communication systems. Numerical results show that training the autoencoder using data sets with various SNR values can attain robust BLER performance under different channel conditions.
2306.01650
Diego Saez-Trumper
Mykola Trokhymovych, Muniza Aslam, Ai-Jou Chou, Ricardo Baeza-Yates, and Diego Saez-Trumper
Fair multilingual vandalism detection system for Wikipedia
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel design of a system aimed at supporting the Wikipedia community in addressing vandalism on the platform. To achieve this, we collected a massive dataset covering 47 languages, and applied advanced filtering and feature engineering techniques, including multilingual masked language modeling, to build the training dataset from human-generated data. The performance of the system was evaluated through comparison with the one used in production in Wikipedia, known as ORES. Our research results in a significant increase in the number of languages covered, making Wikipedia patrolling more efficient for a wider range of communities. Furthermore, our model outperforms ORES, ensuring that the results provided are not only more accurate but also less biased against certain groups of contributors.
[ { "created": "Fri, 2 Jun 2023 16:19:16 GMT", "version": "v1" } ]
2023-06-05
[ [ "Trokhymovych", "Mykola", "" ], [ "Aslam", "Muniza", "" ], [ "Chou", "Ai-Jou", "" ], [ "Baeza-Yates", "Ricardo", "" ], [ "Saez-Trumper", "Diego", "" ] ]
This paper presents a novel design of a system aimed at supporting the Wikipedia community in addressing vandalism on the platform. To achieve this, we collected a massive dataset covering 47 languages, and applied advanced filtering and feature engineering techniques, including multilingual masked language modeling, to build the training dataset from human-generated data. The performance of the system was evaluated through comparison with the one used in production in Wikipedia, known as ORES. Our research results in a significant increase in the number of languages covered, making Wikipedia patrolling more efficient for a wider range of communities. Furthermore, our model outperforms ORES, ensuring that the results provided are not only more accurate but also less biased against certain groups of contributors.
2310.15583
Sicen Li
Sicen Li, Yiming Pang, Panju Bai, Zhaojin Liu, Jiawei Li, Shihao Hu, Liquan Wang, and Gang Wang
Learning Agility and Adaptive Legged Locomotion via Curricular Hindsight Reinforcement Learning
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Agile and adaptive maneuvers such as fall recovery, high-speed turning, and sprinting in the wild are challenging for legged systems. We propose Curricular Hindsight Reinforcement Learning (CHRL), which learns an end-to-end tracking controller that achieves powerful agility and adaptation for the legged robot. The two key components are (i) a novel automatic curriculum strategy on task difficulty and (ii) a Hindsight Experience Replay strategy adapted to legged locomotion tasks. We demonstrated successful agile and adaptive locomotion on a real quadruped robot that performed fall recovery autonomously, coherent trotting, sustained outdoor speeds up to 3.45 m/s, and turning speeds up to 3.2 rad/s. This system produces adaptive behaviours responding to changing situations and unexpected disturbances on natural terrains like grass and dirt.
[ { "created": "Tue, 24 Oct 2023 07:48:40 GMT", "version": "v1" } ]
2023-10-25
[ [ "Li", "Sicen", "" ], [ "Pang", "Yiming", "" ], [ "Bai", "Panju", "" ], [ "Liu", "Zhaojin", "" ], [ "Li", "Jiawei", "" ], [ "Hu", "Shihao", "" ], [ "Wang", "Liquan", "" ], [ "Wang", "Gang", "" ] ]
Agile and adaptive maneuvers such as fall recovery, high-speed turning, and sprinting in the wild are challenging for legged systems. We propose Curricular Hindsight Reinforcement Learning (CHRL), which learns an end-to-end tracking controller that achieves powerful agility and adaptation for the legged robot. The two key components are (i) a novel automatic curriculum strategy on task difficulty and (ii) a Hindsight Experience Replay strategy adapted to legged locomotion tasks. We demonstrated successful agile and adaptive locomotion on a real quadruped robot that performed fall recovery autonomously, coherent trotting, sustained outdoor speeds up to 3.45 m/s, and turning speeds up to 3.2 rad/s. This system produces adaptive behaviours responding to changing situations and unexpected disturbances on natural terrains like grass and dirt.
2310.12049
Patrick Y. Wu
Patrick Y. Wu, Jonathan Nagler, Joshua A. Tucker, Solomon Messing
Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models
26 pages, 2 figures
null
null
null
cs.CL cs.CY
http://creativecommons.org/publicdomain/zero/1.0/
Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.
[ { "created": "Wed, 18 Oct 2023 15:34:37 GMT", "version": "v1" } ]
2023-10-19
[ [ "Wu", "Patrick Y.", "" ], [ "Nagler", "Jonathan", "" ], [ "Tucker", "Joshua A.", "" ], [ "Messing", "Solomon", "" ] ]
Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.
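The scaling step above relies on the Bradley-Terry model, which can be estimated from a pairwise win-count matrix with a standard minorization-maximization update. A compact sketch of the generic estimator (not the authors' code; assumes every item wins at least one comparison):

```python
import numpy as np

def bradley_terry(W, iters=200):
    """W[i, j] = number of comparisons in which item i beat item j.
    Returns log-scale Bradley-Terry scores via Hunter's MM iteration."""
    p = np.ones(W.shape[0])
    wins = W.sum(axis=1)
    N = W + W.T                              # total comparisons per pair
    for _ in range(iters):
        denom = N / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = wins / denom.sum(axis=1)
        p /= p.sum()                         # fix the arbitrary scale
    return np.log(p)
```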
cs/9910011
Anand V. Raman
Anand Venkataraman
A statistical model for word discovery in child directed speech
48 pgs, 10 figs
null
null
null
cs.CL cs.LG
null
A statistical model for segmentation and word discovery in child-directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described, and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
[ { "created": "Wed, 13 Oct 1999 03:25:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Venkataraman", "Anand", "" ] ]
A statistical model for segmentation and word discovery in child-directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described, and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
1610.00368
Ramin Soltani
Ramin Soltani, Dennis Goeckel, Don Towsley, Amir Houmansadr
Covert Communications on Renewal Packet Channels
Contains details of an Allerton 2016 submission arXiv:1610.00381
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Security and privacy are major concerns in modern communication networks. In recent years, the information theory of covert communications, where the very presence of the communication is undetectable to a watchful and determined adversary, has been of great interest. This emerging body of work has focused on additive white Gaussian noise (AWGN), discrete memoryless channels (DMCs), and optical channels. In contrast, our recent work introduced the information-theoretic limits for covert communications over packet channels whose packet timings are governed by a Poisson point process. However, actual network packet arrival times do not generally conform to the Poisson process assumption, and thus here we consider the extension of our work to timing channels characterized by more general renewal processes of rate $\lambda$. We consider two scenarios. In the first scenario, the source of the packets on the channel cannot be authenticated by Willie, and therefore Alice can insert packets into the channel. We show that if the total number of transmitted packets by Jack is $N$, Alice can covertly insert $\mathcal{O}\left(\sqrt{N}\right)$ packets and, if she transmits more, she will be detected by Willie. In the second scenario, packets are authenticated by Willie but we assume that Alice and Bob share a secret key; hence, Alice alters the timings of the packets according to a pre-shared codebook with Bob to send information to him over a $G/M/1$ queue with service rate $\mu>\lambda$. We show that Alice can covertly and reliably transmit $\mathcal{O}(N)$ bits to Bob when the total number of packets sent from Jack to Steve is $N$.
[ { "created": "Sun, 2 Oct 2016 23:31:54 GMT", "version": "v1" }, { "created": "Tue, 28 Nov 2017 02:13:56 GMT", "version": "v2" } ]
2017-11-29
[ [ "Soltani", "Ramin", "" ], [ "Goeckel", "Dennis", "" ], [ "Towsley", "Don", "" ], [ "Houmansadr", "Amir", "" ] ]
Security and privacy are major concerns in modern communication networks. In recent years, the information theory of covert communications, where the very presence of the communication is undetectable to a watchful and determined adversary, has been of great interest. This emerging body of work has focused on additive white Gaussian noise (AWGN), discrete memoryless channels (DMCs), and optical channels. In contrast, our recent work introduced the information-theoretic limits for covert communications over packet channels whose packet timings are governed by a Poisson point process. However, actual network packet arrival times do not generally conform to the Poisson process assumption, and thus here we consider the extension of our work to timing channels characterized by more general renewal processes of rate $\lambda$. We consider two scenarios. In the first scenario, the source of the packets on the channel cannot be authenticated by Willie, and therefore Alice can insert packets into the channel. We show that if the total number of transmitted packets by Jack is $N$, Alice can covertly insert $\mathcal{O}\left(\sqrt{N}\right)$ packets and, if she transmits more, she will be detected by Willie. In the second scenario, packets are authenticated by Willie but we assume that Alice and Bob share a secret key; hence, Alice alters the timings of the packets according to a pre-shared codebook with Bob to send information to him over a $G/M/1$ queue with service rate $\mu>\lambda$. We show that Alice can covertly and reliably transmit $\mathcal{O}(N)$ bits to Bob when the total number of packets sent from Jack to Steve is $N$.
2101.09547
Chun-Hung Liu
Chun-Hung Liu, Di-Chun Liang, Rung-Hung Gau
A 3D Modeling Approach to Tractable Analysis in UAV-Enabled Cellular Networks
6 pages, 4 figures, conference paper. arXiv admin note: substantial text overlap with arXiv:2007.09866
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims to propose a three-dimensional (3D) point process that can be employed to generally deploy unmanned aerial vehicles (UAVs) in a large-scale cellular network and to tractably analyze the fundamental network-wide performance of the network. This 3D point process is devised based on a 2D marked Poisson point process in which each point and its random mark uniquely correspond to the projection and the altitude of each point in the 3D point process, respectively. We elaborate on some important statistical properties of the proposed 3D point process and use them to tractably analyze the coverage performance of a UAV-enabled cellular network wherein all the UAVs, equipped with multiple antennas, serve as aerial base stations. The downlink coverage of the UAV-enabled cellular network is found, and its closed-form results for some special cases are explicitly derived as well. Furthermore, the fundamental limits achieved by a cell-free massive antenna array are characterized when coordinating all the UAVs to jointly perform non-coherent downlink transmission. These findings are validated by numerical simulation.
[ { "created": "Sat, 23 Jan 2021 18:02:10 GMT", "version": "v1" } ]
2021-01-26
[ [ "Liu", "Chun-Hung", "" ], [ "Liang", "Di-Chun", "" ], [ "Gau", "Rung-Hung", "" ] ]
This paper aims to propose a three-dimensional (3D) point process that can be employed to generally deploy unmanned aerial vehicles (UAVs) in a large-scale cellular network and to tractably analyze the fundamental network-wide performance of the network. This 3D point process is devised based on a 2D marked Poisson point process in which each point and its random mark uniquely correspond to the projection and the altitude of each point in the 3D point process, respectively. We elaborate on some important statistical properties of the proposed 3D point process and use them to tractably analyze the coverage performance of a UAV-enabled cellular network wherein all the UAVs, equipped with multiple antennas, serve as aerial base stations. The downlink coverage of the UAV-enabled cellular network is found, and its closed-form results for some special cases are explicitly derived as well. Furthermore, the fundamental limits achieved by a cell-free massive antenna array are characterized when coordinating all the UAVs to jointly perform non-coherent downlink transmission. These findings are validated by numerical simulation.
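The marked-Poisson construction above is easy to simulate; a sketch assuming a homogeneous intensity on a square region, with the altitude law left as a placeholder (the paper's choice of mark distribution is not assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_uav_positions(intensity, side, altitude_of):
    """Sample a 3D UAV deployment: a homogeneous 2D Poisson point process
    of the given intensity on a `side` x `side` region, with an i.i.d.
    altitude mark attached to each point (the 2D point is the projection)."""
    n = rng.poisson(intensity * side ** 2)
    xy = rng.uniform(0.0, side, size=(n, 2))
    z = altitude_of(n)
    return np.column_stack([xy, z])

uavs = sample_uav_positions(1e-4, 1000.0, lambda n: rng.uniform(50.0, 150.0, n))
```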
2103.00810
Zhenxi Li
Zhenxi Li, Guillaume-Alexandre Bilodeau, Wassim Bouachir
MFST: Multi-Features Siamese Tracker
ICPR 2021, Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Siamese trackers have recently achieved interesting results due to their balance between accuracy and speed. This success is mainly due to the fact that deep similarity networks were specifically designed to address the image similarity problem. Therefore, they are inherently more appropriate than classical CNNs for the tracking task. However, Siamese trackers rely on the last convolutional layers for similarity analysis and target search, which restricts their performance. In this paper, we argue that using a single convolutional layer as feature representation is not the optimal choice within the deep similarity framework, as multiple convolutional layers provide several abstraction levels in characterizing an object. Starting from this motivation, we present the Multi-Features Siamese Tracker (MFST), a novel tracking algorithm exploiting several hierarchical feature maps for robust deep similarity tracking. MFST proceeds by fusing hierarchical features to ensure a richer and more efficient representation. Moreover, we handle appearance variation by calibrating deep features extracted from two different CNN models. Based on this advanced feature representation, our algorithm achieves high tracking accuracy, while outperforming several state-of-the-art trackers, including standard Siamese trackers. The code and trained models are available at https://github.com/zhenxili96/MFST.
[ { "created": "Mon, 1 Mar 2021 07:18:32 GMT", "version": "v1" } ]
2021-03-02
[ [ "Li", "Zhenxi", "" ], [ "Bilodeau", "Guillaume-Alexandre", "" ], [ "Bouachir", "Wassim", "" ] ]
Siamese trackers have recently achieved interesting results due to their balance between accuracy and speed. This success is mainly due to the fact that deep similarity networks were specifically designed to address the image similarity problem. Therefore, they are inherently more appropriate than classical CNNs for the tracking task. However, Siamese trackers rely on the last convolutional layers for similarity analysis and target search, which restricts their performance. In this paper, we argue that using a single convolutional layer as feature representation is not the optimal choice within the deep similarity framework, as multiple convolutional layers provide several abstraction levels in characterizing an object. Starting from this motivation, we present the Multi-Features Siamese Tracker (MFST), a novel tracking algorithm exploiting several hierarchical feature maps for robust deep similarity tracking. MFST proceeds by fusing hierarchical features to ensure a richer and more efficient representation. Moreover, we handle appearance variation by calibrating deep features extracted from two different CNN models. Based on this advanced feature representation, our algorithm achieves high tracking accuracy, while outperforming several state-of-the-art trackers, including standard Siamese trackers. The code and trained models are available at https://github.com/zhenxili96/MFST.
1601.06023
Serkan Ak
Serkan Ak, Hazer Inaltekin, H. Vincent Poor
Gaussian Approximation for the Downlink Interference in Heterogeneous Cellular Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper derives Gaussian approximation bounds for the standardized aggregate wireless interference (AWI) in the downlink of K-tier heterogeneous cellular networks when base stations in each tier are distributed over the plane according to a (possibly non-homogeneous) Poisson process. The proposed methodology is general enough to account for general bounded path-loss models and fading statistics. The deviations of the distribution of the standardized AWI from the standard normal distribution are measured in terms of the Kolmogorov-Smirnov distance. An explicit expression bounding the Kolmogorov-Smirnov distance between these two distributions is obtained as a function of a broad range of network parameters such as per-tier transmission power levels, base station locations, fading statistics and the path-loss model. A simulation study is performed to corroborate the analytical results. In particular, a good statistical match between the standardized AWI distribution and its normal approximation occurs even for moderately dense heterogeneous cellular networks. These results are expected to have important ramifications on the characterization of performance upper and lower bounds for emerging 5G network architectures.
[ { "created": "Fri, 22 Jan 2016 14:44:24 GMT", "version": "v1" }, { "created": "Thu, 4 Feb 2016 13:18:21 GMT", "version": "v2" } ]
2016-02-05
[ [ "Ak", "Serkan", "" ], [ "Inaltekin", "Hazer", "" ], [ "Poor", "H. Vincent", "" ] ]
This paper derives Gaussian approximation bounds for the standardized aggregate wireless interference (AWI) in the downlink of K-tier heterogeneous cellular networks when base stations in each tier are distributed over the plane according to a (possibly non-homogeneous) Poisson process. The proposed methodology is general enough to account for general bounded path-loss models and fading statistics. The deviations of the distribution of the standardized AWI from the standard normal distribution are measured in terms of the Kolmogorov-Smirnov distance. An explicit expression bounding the Kolmogorov-Smirnov distance between these two distributions is obtained as a function of a broad range of network parameters such as per-tier transmission power levels, base station locations, fading statistics and the path-loss model. A simulation study is performed to corroborate the analytical results. In particular, a good statistical match between the standardized AWI distribution and its normal approximation occurs even for moderately dense heterogeneous cellular networks. These results are expected to have important ramifications on the characterization of performance upper and lower bounds for emerging 5G network architectures.
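The Kolmogorov-Smirnov distance used above is straightforward to evaluate empirically; a sketch with a synthetic stand-in for the standardized AWI (the gamma sample is purely illustrative, not the paper's interference model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
awi = rng.gamma(shape=4.0, scale=1.0, size=10_000)  # stand-in AWI sample
z = (awi - awi.mean()) / awi.std()                  # standardize
ks_stat, _ = stats.kstest(z, "norm")                # distance to N(0, 1)
print(f"KS distance to the standard normal: {ks_stat:.4f}")
```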
1905.06802
Mohsen Annabestani
Mohsen Annabestani, Mahshid Nasserian, Fatemeh Hasanzadeh, Mohammad Taherzadeh-Sani, Alireza Hassanzadeh
An Algebraic Approach to Fast Estimation of the Threshold Voltage of Junctionless Double Gate MOSFETs Using the Gram Schmidt Method
null
null
null
null
cs.DC physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drain-Induced Barrier Lowering (DIBL) is one of the undesirable short-channel effects in the MOSFET family; it causes the threshold voltage of the transistor to decrease as the drain voltage increases. This effect makes it impossible for circuit designers to consider VT as a constant value, and hence, it is necessary to calculate VT as a function of the drain voltage. Therefore, to consider the effect of DIBL in the design of integrated circuits, a large computational burden is imposed on the system, which slows down the simulation process in circuit-level simulators, particularly when a large number of transistors are to be simulated. Accordingly, in this paper, a multiple input single output (MISO) Nonlinear Autoregressive (N-AR) model using the Gram-Schmidt orthogonalization approach is proposed that calculates the threshold voltage of the new generation of MOSFETs, i.e., Junctionless Double-Gate MOSFETs (JL-DG-MOSFETs), with high precision and a significant speed-up in the computational procedure of the model. It is shown that, on average, the proposed numerical method is 313 times faster than the state-of-the-art analytical model. The calculated percentage of normalized mean square error between the proposed model and the analytical one is 0.435% on average, showing that the proposed approach can be a fast and accurate candidate for replacing analytical modeling.
[ { "created": "Thu, 9 May 2019 23:09:44 GMT", "version": "v1" } ]
2019-05-17
[ [ "Annabestani", "Mohsen", "" ], [ "Nasserian", "Mahshid", "" ], [ "Hasanzadeh", "Fatemeh", "" ], [ "Taherzadeh-Sani", "Mohammad", "" ], [ "Hassanzadeh", "Alireza", "" ] ]
Drain-Induced Barrier Lowering (DIBL) is one of the undesirable short-channel effects in the MOSFET family; it causes the threshold voltage of the transistor to decrease as the drain voltage increases. This effect makes it impossible for circuit designers to consider VT as a constant value, and hence, it is necessary to calculate VT as a function of the drain voltage. Therefore, to consider the effect of DIBL in the design of integrated circuits, a large computational burden is imposed on the system, which slows down the simulation process in circuit-level simulators, particularly when a large number of transistors are to be simulated. Accordingly, in this paper, a multiple input single output (MISO) Nonlinear Autoregressive (N-AR) model using the Gram-Schmidt orthogonalization approach is proposed that calculates the threshold voltage of the new generation of MOSFETs, i.e., Junctionless Double-Gate MOSFETs (JL-DG-MOSFETs), with high precision and a significant speed-up in the computational procedure of the model. It is shown that, on average, the proposed numerical method is 313 times faster than the state-of-the-art analytical model. The calculated percentage of normalized mean square error between the proposed model and the analytical one is 0.435% on average, showing that the proposed approach can be a fast and accurate candidate for replacing analytical modeling.
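For reference, the Gram-Schmidt orthogonalization building block the model relies on, in its classical form; the regressor construction and model fitting around it are paper details not reproduced here:

```python
import numpy as np

def gram_schmidt(V):
    """Classical Gram-Schmidt: return a matrix whose columns form an
    orthonormal basis for the column space of V (assumed full rank)."""
    Q = np.zeros_like(V, dtype=float)
    for k in range(V.shape[1]):
        v = V[:, k].astype(float)
        for j in range(k):
            v -= (Q[:, j] @ V[:, k]) * Q[:, j]   # subtract projections
        Q[:, k] = v / np.linalg.norm(v)
    return Q
```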
2111.06867
Debayan Gupta
Arup Mondal and Yash More and Ruthu Hulikal Rooparaghunath and Debayan Gupta
Flatee: Federated Learning Across Trusted Execution Environments
IEEE Euro S&P 2021 Poster; see https://www.ieee-security.org/TC/EuroSP2021/posters.html
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Federated learning allows us to distributively train a machine learning model where multiple parties share local model parameters without sharing private data. However, parameter exchange may still leak information. Several approaches have been proposed to overcome this, based on multi-party computation, fully homomorphic encryption, etc.; many of these protocols are slow and impractical for real-world use as they involve a large number of cryptographic operations. In this paper, we propose the use of Trusted Execution Environments (TEE), which provide a platform for isolated execution of code and handling of data, for this purpose. We describe Flatee, an efficient privacy-preserving federated learning framework across TEEs, which considerably reduces training and communication time. Our framework can handle malicious parties (we do not natively solve adversarial data poisoning, though we describe a preliminary approach to handle this).
[ { "created": "Fri, 12 Nov 2021 18:38:19 GMT", "version": "v1" } ]
2021-11-15
[ [ "Mondal", "Arup", "" ], [ "More", "Yash", "" ], [ "Rooparaghunath", "Ruthu Hulikal", "" ], [ "Gupta", "Debayan", "" ] ]
Federated learning allows us to distributively train a machine learning model where multiple parties share local model parameters without sharing private data. However, parameter exchange may still leak information. Several approaches have been proposed to overcome this, based on multi-party computation, fully homomorphic encryption, etc.; many of these protocols are slow and impractical for real-world use as they involve a large number of cryptographic operations. In this paper, we propose the use of Trusted Execution Environments (TEE), which provide a platform for isolated execution of code and handling of data, for this purpose. We describe Flatee, an efficient privacy-preserving federated learning framework across TEEs, which considerably reduces training and communication time. Our framework can handle malicious parties (we do not natively solve adversarial data poisoning, though we describe a preliminary approach to handle this).
2402.10451
Susumu Kubo
Susumu Kubo, Kazuhisa Makino, Souta Sakamoto
Composition Orderings for Linear Functions and Matrix Multiplication Orderings
38 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider composition orderings for linear functions of one variable. Given $n$ linear functions $f_1,\dots,f_n$ and a constant $c$, the objective is to find a permutation $\sigma$ that minimizes/maximizes $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$. It was first studied in the area of time-dependent scheduling, and is known to be solvable in $O(n\log n)$ time if all functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case, by regarding linear functions as two-dimensional vectors. We also show several interesting properties on optimal composition orderings such as the equivalence between local and global optimality. Furthermore, by using the characterization above, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem for general linear functions, with respect to the number of decreasing linear functions. We next deal with matrix multiplication orderings as a generalization of composition of linear functions. Given $n$ matrices $M_1,\dots,M_n\in\mathbb{R}^{m\times m}$ and two vectors $w,y\in\mathbb{R}^m$, where $m$ denotes a positive integer, the objective is to find a permutation $\sigma$ that minimizes/maximizes $w^\top M_{\sigma(n)}\dots M_{\sigma(1)} y$. The problem is also viewed as a generalization of flow shop scheduling through a limit. By this extension, we show that the multiplication ordering problem for $2\times 2$ matrices is solvable in $O(n\log n)$ time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and FPT with respect to the number of matrices with negative determinants, if all the matrices are simultaneously triangularizable. On the negative side, we finally prove that three possible natural generalizations are NP-hard: 1) when $m=2$, 2) when $m\geq 3$, and 3) the target version of the problem.
[ { "created": "Fri, 16 Feb 2024 04:55:49 GMT", "version": "v1" } ]
2024-02-19
[ [ "Kubo", "Susumu", "" ], [ "Makino", "Kazuhisa", "" ], [ "Sakamoto", "Souta", "" ] ]
We consider composition orderings for linear functions of one variable. Given $n$ linear functions $f_1,\dots,f_n$ and a constant $c$, the objective is to find a permutation $\sigma$ that minimizes/maximizes $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$. It was first studied in the area of time-dependent scheduling, and is known to be solvable in $O(n\log n)$ time if all functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case, by regarding linear functions as two-dimensional vectors. We also show several interesting properties on optimal composition orderings such as the equivalence between local and global optimality. Furthermore, by using the characterization above, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem for general linear functions, with respect to the number of decreasing linear functions. We next deal with matrix multiplication orderings as a generalization of composition of linear functions. Given $n$ matrices $M_1,\dots,M_n\in\mathbb{R}^{m\times m}$ and two vectors $w,y\in\mathbb{R}^m$, where $m$ denotes a positive integer, the objective is to find a permutation $\sigma$ that minimizes/maximizes $w^\top M_{\sigma(n)}\dots M_{\sigma(1)} y$. The problem is also viewed as a generalization of flow shop scheduling through a limit. By this extension, we show that the multiplication ordering problem for $2\times 2$ matrices is solvable in $O(n\log n)$ time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and FPT with respect to the number of matrices with negative determinants, if all the matrices are simultaneously triangularizable. On the negative side, we finally prove that three possible natural generalizations are NP-hard: 1) when $m=2$, 2) when $m\geq 3$, and 3) the target version of the problem.
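The problem statement above is directly checkable by brute force for small $n$; a sketch grounded only in the definition (the paper's $O(n\log n)$ ordering rule is not assumed here):

```python
from itertools import permutations

def best_order(funcs, c):
    """Brute-force the (maximization) composition-ordering problem for
    small n: funcs is a list of (a, b) pairs with f(x) = a*x + b, and we
    search all n! orders, applying f_{sigma(1)} first, f_{sigma(n)} last."""
    def value(order):
        x = c
        for a, b in order:          # apply f_{sigma(1)} first
            x = a * x + b
        return x
    return max(permutations(funcs), key=value)

# Two nondecreasing functions, c = 1: applying x+3 before 2x yields 8 > 5.
print(best_order([(2.0, 0.0), (1.0, 3.0)], 1.0))
```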
2106.04897
Hanan Aldarmaki
Hanan Aldarmaki, Asad Ullah, Nazar Zaki
Unsupervised Automatic Speech Recognition: A Review
26 pages + 10 pages of references, 3 figures. Speech Communication (2022)
null
10.1016/j.specom.2022.02.005
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Automatic Speech Recognition (ASR) systems can be trained to achieve remarkable performance given large amounts of manually transcribed speech, but large labeled data sets can be difficult or expensive to acquire for all languages of interest. In this paper, we review the research literature to identify models and ideas that could lead to fully unsupervised ASR, including unsupervised segmentation of the speech signal, unsupervised mapping from speech segments to text, and semi-supervised models with nominal amounts of labeled examples. The objective of the study is to identify the limitations of what can be learned from speech data alone and to understand the minimum requirements for speech recognition. Identifying these limitations would help optimize the resources and efforts in ASR development for low-resource languages.
[ { "created": "Wed, 9 Jun 2021 08:33:20 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2022 09:46:53 GMT", "version": "v2" } ]
2022-03-22
[ [ "Aldarmaki", "Hanan", "" ], [ "Ullah", "Asad", "" ], [ "Zaki", "Nazar", "" ] ]
Automatic Speech Recognition (ASR) systems can be trained to achieve remarkable performance given large amounts of manually transcribed speech, but large labeled data sets can be difficult or expensive to acquire for all languages of interest. In this paper, we review the research literature to identify models and ideas that could lead to fully unsupervised ASR, including unsupervised segmentation of the speech signal, unsupervised mapping from speech segments to text, and semi-supervised models with nominal amounts of labeled examples. The objective of the study is to identify the limitations of what can be learned from speech data alone and to understand the minimum requirements for speech recognition. Identifying these limitations would help optimize the resources and efforts in ASR development for low-resource languages.
1911.13179
Tamir Bendory
Eitan Levin and Tamir Bendory
A note on Douglas-Rachford, gradients, and phase retrieval
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The properties of gradient techniques for the phase retrieval problem have received considerable attention in recent years. In almost all applications, however, the phase retrieval problem is solved using a family of algorithms that can be interpreted as variants of Douglas-Rachford splitting. In this work, we establish a connection between Douglas-Rachford and gradient algorithms. Specifically, we show that in some cases a generalization of Douglas-Rachford, called relaxed-reflect-reflect (RRR), can be viewed as gradient descent on a certain objective function. The solutions coincide with the critical points of that objective, which---in contrast to standard gradient techniques---are not its minimizers. Using the objective function, we give simple proofs of some basic properties of the RRR algorithm. Specifically, we describe its set of solutions, show a local convexity around any solution, and derive stability guarantees. Nevertheless, in its present state, the analysis does not elucidate the remarkable empirical performance of RRR and its global properties.
[ { "created": "Fri, 29 Nov 2019 16:24:54 GMT", "version": "v1" }, { "created": "Thu, 4 Jun 2020 04:36:44 GMT", "version": "v2" } ]
2020-06-05
[ [ "Levin", "Eitan", "" ], [ "Bendory", "Tamir", "" ] ]
The properties of gradient techniques for the phase retrieval problem have received considerable attention in recent years. In almost all applications, however, the phase retrieval problem is solved using a family of algorithms that can be interpreted as variants of Douglas-Rachford splitting. In this work, we establish a connection between Douglas-Rachford and gradient algorithms. Specifically, we show that in some cases a generalization of Douglas-Rachford, called relaxed-reflect-reflect (RRR), can be viewed as gradient descent on a certain objective function. The solutions coincide with the critical points of that objective, which---in contrast to standard gradient techniques---are not its minimizers. Using the objective function, we give simple proofs of some basic properties of the RRR algorithm. Specifically, we describe its set of solutions, show a local convexity around any solution, and derive stability guarantees. Nevertheless, in its present state, the analysis does not elucidate the remarkable empirical performance of RRR and its global properties.
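The RRR iteration discussed above is short to state in code; a sketch assuming the two constraint projections P_A and P_B are supplied, using the form x <- x + beta*(P_B(2*P_A(x) - x) - P_A(x)) commonly written in the phase retrieval literature:

```python
def rrr(x0, P_A, P_B, beta=0.5, iters=500):
    """Relaxed-reflect-reflect iteration; beta = 1 recovers the
    Douglas-Rachford operator. P_A and P_B project onto the two
    constraint sets (e.g., measured magnitudes and a support set)."""
    x = x0
    for _ in range(iters):
        pa = P_A(x)
        x = x + beta * (P_B(2 * pa - x) - pa)
    return P_A(x)   # read off a near-feasible point via a projection
```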