Dataset schema (field: type, value-length range):

id: string, lengths 9 to 10
submitter: string, lengths 1 to 64
authors: string, lengths 4 to 20.7k
title: string, lengths 4 to 246
comments: string, lengths 1 to 523
journal-ref: string, lengths 4 to 404
doi: string, lengths 11 to 153
report-no: string, lengths 2 to 254
categories: string, lengths 5 to 98
license: string, 9 classes
orig_abstract: string, lengths 14 to 3.35k
versions: list, lengths 1 to 60
update_date: string, lengths 10 to 10
authors_parsed: list, lengths 1 to 1.35k
abstract: string, lengths 11 to 3.34k
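As a minimal, self-contained sketch of how a record with this schema might be consumed: the JSON literal below is abridged from the first entry, and the `summarize` helper is illustrative only, not part of any dataset API.

```python
import json

# One record from the listing below, abridged to a few schema fields.
record_json = '''
{
  "id": "1905.01326",
  "submitter": "Dominik Kulon",
  "title": "Single Image 3D Hand Reconstruction with Mesh Convolutions",
  "categories": "cs.CV",
  "versions": [
    {"created": "Sat, 4 May 2019 20:41:47 GMT", "version": "v1"},
    {"created": "Mon, 5 Aug 2019 12:07:34 GMT", "version": "v3"}
  ],
  "authors_parsed": [["Kulon", "Dominik", ""], ["Wang", "Haoyang", ""]]
}
'''

def summarize(record: dict) -> dict:
    """Flatten a record into a small summary: split the space-separated
    category string, count versions, and rebuild 'Last, First' names
    from authors_parsed (dropping empty suffix slots)."""
    return {
        "id": record["id"],
        "categories": record["categories"].split(),
        "num_versions": len(record["versions"]),
        "authors": [", ".join(filter(None, a)) for a in record["authors_parsed"]],
    }

summary = summarize(json.loads(record_json))
print(summary["categories"])    # ['cs.CV']
print(summary["num_versions"])  # 2
```

Note that `categories` is a single space-separated string (e.g. "cs.LG stat.ML"), so it must be split before filtering by category.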
id: 1905.01326
submitter: Dominik Kulon
authors: Dominik Kulon, Haoyang Wang, Riza Alp G\"uler, Michael Bronstein, Stefanos Zafeiriou
title: Single Image 3D Hand Reconstruction with Mesh Convolutions
comments: Proceedings of the British Machine Vision Conference (BMVC 2019)
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Monocular 3D reconstruction of deformable objects, such as human body parts, has typically been approached by predicting parameters of heavyweight linear models. In this paper, we demonstrate an alternative solution that is based on the idea of encoding images into a latent non-linear representation of meshes. The prior on 3D hand shapes is learned by training an autoencoder with intrinsic graph convolutions performed in the spectral domain. The pre-trained decoder acts as a non-linear statistical deformable model. The latent parameters that reconstruct the shape and articulated pose of hands in the image are predicted using an image encoder. We show that our system reconstructs plausible meshes and operates in real-time. We evaluate the quality of the mesh reconstructions produced by the decoder on a new dataset and show latent space interpolation results. Our code, data, and models will be made publicly available.
versions: [ { "created": "Sat, 4 May 2019 20:41:47 GMT", "version": "v1" }, { "created": "Mon, 13 May 2019 23:37:19 GMT", "version": "v2" }, { "created": "Mon, 5 Aug 2019 12:07:34 GMT", "version": "v3" } ]
update_date: 2019-08-06
authors_parsed: [ [ "Kulon", "Dominik", "" ], [ "Wang", "Haoyang", "" ], [ "Güler", "Riza Alp", "" ], [ "Bronstein", "Michael", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
id: 2305.17489
submitter: Zhongping Zhang
authors: Zhongping Zhang, Jian Zheng, Jacob Zhiyuan Fang, Bryan A. Plummer
title: Text-to-image Editing by Image Information Removal
comments: Full paper is accepted by WACV2024; Best paper runner-up of AI4CC@CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Diffusion models have demonstrated impressive performance in text-guided image generation. Current methods that leverage the knowledge of these models for image editing either fine-tune them using the input image (e.g., Imagic) or incorporate structure information as additional constraints (e.g., ControlNet). However, fine-tuning large-scale diffusion models on a single image can lead to severe overfitting issues and lengthy inference time. Information leakage from pretrained models also makes it challenging to preserve image content not related to the text input. Additionally, methods that incorporate structural guidance (e.g., edge maps, semantic maps, keypoints) find it difficult to retain attributes like colors and textures. Using the input image as a control could mitigate these issues, but since these models are trained via reconstruction, a model can simply hide information about the original image when encoding it to perfectly reconstruct the image without learning the editing task. To address these challenges, we propose a text-to-image editing model with an Image Information Removal module (IIR) that selectively erases color-related and texture-related information from the original image, allowing us to better preserve the text-irrelevant content and avoid issues arising from information hiding. Our experiments on CUB, Outdoor Scenes, and COCO show that our approach achieves the best editability-fidelity trade-off results. In addition, a user study on COCO shows that our edited images are preferred 35% more often than prior work.
versions: [ { "created": "Sat, 27 May 2023 14:48:05 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 19:22:36 GMT", "version": "v2" } ]
update_date: 2023-11-09
authors_parsed: [ [ "Zhang", "Zhongping", "" ], [ "Zheng", "Jian", "" ], [ "Fang", "Jacob Zhiyuan", "" ], [ "Plummer", "Bryan A.", "" ] ]
id: 1911.09427
submitter: Guy Shalev
authors: Guy Shalev, Ran El-Yaniv, Daniel Klotz, Frederik Kratzert, Asher Metzger, Sella Nevo
title: Accurate Hydrologic Modeling Using Less Information
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Joint models are a common and important tool in the intersection of machine learning and the physical sciences, particularly in contexts where real-world measurements are scarce. Recent developments in rainfall-runoff modeling, one of the prime challenges in hydrology, show the value of a joint model with shared representation in this important context. However, current state-of-the-art models depend on detailed and reliable attributes characterizing each site to help the model differentiate correctly between the behavior of different sites. This dependency can present a challenge in data-poor regions. In this paper, we show that we can replace the need for such location-specific attributes with a completely data-driven learned embedding, and match previous state-of-the-art results with less information.
versions: [ { "created": "Thu, 21 Nov 2019 12:01:19 GMT", "version": "v1" } ]
update_date: 2019-11-22
authors_parsed: [ [ "Shalev", "Guy", "" ], [ "El-Yaniv", "Ran", "" ], [ "Klotz", "Daniel", "" ], [ "Kratzert", "Frederik", "" ], [ "Metzger", "Asher", "" ], [ "Nevo", "Sella", "" ] ]
id: 2306.09389
submitter: Junjun Yan
authors: Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhoui and Jie Liu
title: ST-PINN: A Self-Training Physics-Informed Neural Network for Partial Differential Equations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI physics.comp-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Partial differential equations (PDEs) are an essential computational kernel in physics and engineering. With the advance of deep learning, physics-informed neural networks (PINNs), as a mesh-free method, have shown great potential for fast PDE solving in various applications. To address the low accuracy and convergence problems of existing PINNs, we propose a self-training physics-informed neural network, ST-PINN. Specifically, ST-PINN introduces a pseudo-label-based self-learning algorithm during training: it employs the governing equation as the pseudo-labeled evaluation index and selects the highest-confidence examples from the sample points to attach the pseudo labels. To the best of our knowledge, we are the first to incorporate a self-training mechanism into physics-informed learning. We conduct experiments on five PDE problems in different fields and scenarios. The results demonstrate that the proposed method allows the network to learn more physical information and aids convergence. ST-PINN outperforms existing physics-informed neural network methods, improving accuracy by a factor of 1.33x-2.54x. The code of ST-PINN is available at GitHub: https://github.com/junjun-yan/ST-PINN.
versions: [ { "created": "Thu, 15 Jun 2023 15:49:13 GMT", "version": "v1" } ]
update_date: 2023-06-19
authors_parsed: [ [ "Yan", "Junjun", "" ], [ "Chen", "Xinhai", "" ], [ "Wang", "Zhichao", "" ], [ "Zhoui", "Enqiang", "" ], [ "Liu", "Jie", "" ] ]
id: 1906.03380
submitter: Sarah Wiegreffe
authors: Sarah Wiegreffe, Edward Choi, Sherry Yan, Jimeng Sun, Jacob Eisenstein
title: Clinical Concept Extraction for Document-Level Coding
comments: ACL BioNLP workshop (2019)
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The text of clinical notes can be a valuable source of patient information and clinical assessments. Historically, the primary approach for exploiting clinical notes has been information extraction: linking spans of text to concepts in a detailed domain ontology. However, recent work has demonstrated the potential of supervised machine learning to extract document-level codes directly from the raw text of clinical notes. We propose to bridge the gap between the two approaches with two novel syntheses: (1) treating extracted concepts as features, which are used to supplement or replace the text of the note; (2) treating extracted concepts as labels, which are used to learn a better representation of the text. Unfortunately, the resulting concepts do not yield performance gains on the document-level clinical coding task. We explore possible explanations and future research directions.
versions: [ { "created": "Sat, 8 Jun 2019 03:32:00 GMT", "version": "v1" } ]
update_date: 2019-06-11
authors_parsed: [ [ "Wiegreffe", "Sarah", "" ], [ "Choi", "Edward", "" ], [ "Yan", "Sherry", "" ], [ "Sun", "Jimeng", "" ], [ "Eisenstein", "Jacob", "" ] ]
id: 1908.11366
submitter: Karim Banawan
authors: Karim Banawan and Batuhan Arasli and Sennur Ulukus
title: Improved Storage for Efficient Private Information Retrieval
comments: ITW 2019
journal-ref: null
doi: null
report-no: null
categories: cs.IT cs.CR cs.DB math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We consider the problem of private information retrieval from $N$ \emph{storage-constrained} databases. In this problem, a user wishes to retrieve a single message out of $M$ messages (of size $L$) without revealing any information about the identity of the message to individual databases. Each database stores $\mu ML$ symbols, i.e., a $\mu$ fraction of the entire library, where $\frac{1}{N} \leq \mu \leq 1$. Our goal is to characterize the optimal tradeoff curve for the storage cost (captured by $\mu$) and the normalized download cost ($D/L$). We show that the download cost can be reduced by employing a hybrid storage scheme that combines \emph{MDS coding} ideas with \emph{uncoded partial replication} ideas. When there is no coding, our scheme reduces to the Attia-Kumar-Tandon storage scheme, which was initially introduced by Maddah-Ali-Niesen in the context of the caching problem, and when there is no uncoded partial replication, our scheme reduces to the Banawan-Ulukus storage scheme; in general, our scheme outperforms both.
versions: [ { "created": "Thu, 29 Aug 2019 17:52:01 GMT", "version": "v1" } ]
update_date: 2019-08-30
authors_parsed: [ [ "Banawan", "Karim", "" ], [ "Arasli", "Batuhan", "" ], [ "Ulukus", "Sennur", "" ] ]
id: 0705.3360
submitter: Kyriakos Sgarbas
authors: Kyriakos N. Sgarbas
title: The Road to Quantum Artificial Intelligence
comments: 9 pages. Presented at PCI-2007: 11th Panhellenic Conference in Informatics, 18-20 May 2007, Patras, Greece
journal-ref: In: T.S.Papatheodorou, D.N.Christodoulakis and N.N.Karanikolas (eds), "Current Trends in Informatics", Vol.A, pp.469-477, New Technologies Publications, Athens, 2007 (SET 978-960-89784-0-9)
doi: null
report-no: null
categories: cs.AI
license: null
abstract: This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms and then it presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available.
versions: [ { "created": "Wed, 23 May 2007 12:31:47 GMT", "version": "v1" } ]
update_date: 2007-05-24
authors_parsed: [ [ "Sgarbas", "Kyriakos N.", "" ] ]
id: 2308.14272
submitter: Jennifer Hsia
authors: Jennifer Hsia, Danish Pruthi, Aarti Singh, Zachary C. Lipton
title: Goodhart's Law Applies to NLP's Explanation Benchmarks
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Despite the rising popularity of saliency-based explanations, the research community remains at an impasse, facing doubts concerning their purpose, efficacy, and tendency to contradict each other. Seeking to unite the community's efforts around common goals, several recent works have proposed evaluation metrics. In this paper, we critically examine two sets of metrics: the ERASER metrics (comprehensiveness and sufficiency) and the EVAL-X metrics, focusing our inquiry on natural language processing. First, we show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs. Our strategy exploits the tendency for extracted explanations and their complements to be "out-of-support" relative to each other and in-distribution inputs. Next, we demonstrate that the EVAL-X metrics can be inflated arbitrarily by a simple method that encodes the label, even though EVAL-X is precisely motivated to address such exploits. Our results raise doubts about the ability of current metrics to guide explainability research, underscoring the need for a broader reassessment of what precisely these metrics are intended to capture.
versions: [ { "created": "Mon, 28 Aug 2023 03:03:03 GMT", "version": "v1" } ]
update_date: 2023-08-29
authors_parsed: [ [ "Hsia", "Jennifer", "" ], [ "Pruthi", "Danish", "" ], [ "Singh", "Aarti", "" ], [ "Lipton", "Zachary C.", "" ] ]
id: 2405.18296
submitter: Anchit Jain
authors: Anchit Jain, Rozhin Nobahari, Aristide Baratin, Stefano Sarao Mannelli
title: Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cond-mat.dis-nn stat.ML
license: http://creativecommons.org/licenses/by/4.0/
abstract: Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy variably across different sub-populations. Current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup modeling different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setting, which we prove to be exact in high dimension. Notably, our analysis reveals how different properties of sub-populations influence bias at different timescales, showing a shifting preference of the classifier during training. Applying our findings to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real datasets, including CIFAR10, MNIST, and CelebA.
versions: [ { "created": "Tue, 28 May 2024 15:50:10 GMT", "version": "v1" } ]
update_date: 2024-05-29
authors_parsed: [ [ "Jain", "Anchit", "" ], [ "Nobahari", "Rozhin", "" ], [ "Baratin", "Aristide", "" ], [ "Mannelli", "Stefano Sarao", "" ] ]
id: 2012.00548
submitter: Xinyu Gao
authors: Xinyu Gao, Yuanwei Liu, Xiao Liu and Zhijin Qin
title: Resource Allocation in IRSs Aided MISO-NOMA Networks: A Machine Learning Approach
comments: 6 pages, 5 figures. It will be published in IEEE Global Communications Conference (GC) 2020
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: A novel framework of intelligent reflecting surface (IRS)-aided multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) networks is proposed, where a base station (BS) serves multiple clusters with an unfixed number of users in each cluster. The goal is to maximize the sum rate of all users by jointly optimizing the passive beamforming vector at the IRS, the decoding order and the power allocation coefficient vector, subject to the rate requirements of users. To tackle the formulated problem, a three-step approach is proposed. First, a long short-term memory (LSTM) based algorithm is adopted for predicting the mobility of users. Secondly, a K-means based Gaussian mixture model (K-GMM) algorithm is proposed for user clustering. Thirdly, a deep Q-network (DQN) based algorithm is invoked for jointly determining the phase shift matrix and power allocation policy. Simulation results demonstrate that the proposed algorithm outperforms the benchmarks, while the IRS-NOMA system performs better than the IRS-OMA system.
versions: [ { "created": "Wed, 18 Nov 2020 12:36:24 GMT", "version": "v1" } ]
update_date: 2020-12-02
authors_parsed: [ [ "Gao", "Xinyu", "" ], [ "Liu", "Yuanwei", "" ], [ "Liu", "Xiao", "" ], [ "Qin", "Zhijin", "" ] ]
id: 1911.07234
submitter: Ali \c{C}ivril
authors: Ali \c{C}ivril
title: Approximation of Steiner Forest via the Bidirected Cut Relaxation
comments: 15 pages, 5 figures
journal-ref: Journal of Combinatorial Optimization, 38(4):1196-1212, 2019
doi: 10.1007/s10878-019-00444-8
report-no: null
categories: cs.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The classical algorithm of Agrawal, Klein and Ravi [SIAM J. Comput., 24 (1995), pp. 440-456], stated in the setting of the primal-dual schema by Goemans and Williamson [SIAM J. Comput., 24 (1995), pp. 296-317] uses the undirected cut relaxation for the Steiner forest problem. Its approximation ratio is $2-\frac{1}{k}$, where $k$ is the number of terminal pairs. A variant of this algorithm more recently proposed by K\"onemann et al. [SIAM J. Comput., 37 (2008), pp. 1319-1341] is based on the lifted cut relaxation. In this paper, we continue this line of work and consider the bidirected cut relaxation for the Steiner forest problem, which lends itself to a novel algorithmic idea yielding the same approximation ratio as the classical algorithm. In doing so, we introduce an extension of the primal-dual schema in which we run two different phases to satisfy connectivity requirements in both directions. This reveals more about the combinatorial structure of the problem. In particular, there are examples on which the classical algorithm fails to give a good approximation, but the new algorithm finds a near-optimal solution.
versions: [ { "created": "Sun, 17 Nov 2019 13:34:00 GMT", "version": "v1" } ]
update_date: 2019-11-19
authors_parsed: [ [ "Çivril", "Ali", "" ] ]
id: 2405.00879
submitter: Xiao Li
authors: Xiao Li and Qian Gong and Jaemoon Lee and Scott Klasky and Anand Rangarajan and Sanjay Ranka
title: Machine Learning Techniques for Data Reduction of Climate Applications
comments: 7 pages. arXiv admin note: text overlap with arXiv:2404.18063
journal-ref: null
doi: null
report-no: null
categories: cs.LG physics.ao-ph
license: http://creativecommons.org/licenses/by/4.0/
abstract: Scientists conduct large-scale simulations to compute derived quantities-of-interest (QoI) from primary data. Often, QoI are linked to specific features, regions, or time intervals, such that data can be adaptively reduced without compromising the integrity of QoI. For many spatiotemporal applications, these QoI are binary in nature and represent the presence or absence of a physical phenomenon. We present a pipelined compression approach that first uses neural-network-based techniques to derive regions where QoI are highly likely to be present. Then, we employ a Guaranteed Autoencoder (GAE) to compress data with differential error bounds. GAE uses QoI information to apply low-error compression to only these regions. This results in overall high compression ratios while still achieving the downstream goals of simulation or data collection. Experimental results are presented for climate data generated from the E3SM simulation model for downstream quantities such as tropical cyclone and atmospheric river detection and tracking. These results show that our approach is superior to comparable methods in the literature.
versions: [ { "created": "Wed, 1 May 2024 21:44:47 GMT", "version": "v1" } ]
update_date: 2024-05-03
authors_parsed: [ [ "Li", "Xiao", "" ], [ "Gong", "Qian", "" ], [ "Lee", "Jaemoon", "" ], [ "Klasky", "Scott", "" ], [ "Rangarajan", "Anand", "" ], [ "Ranka", "Sanjay", "" ] ]
id: 1902.05026
submitter: Justice Amoh
authors: Justice Amoh and Kofi Odame
title: An Optimized Recurrent Unit for Ultra-Low-Power Keyword Spotting
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: There is growing interest in being able to run neural networks on sensors, wearables and internet-of-things (IoT) devices. However, the computational demands of neural networks make them difficult to deploy on resource-constrained edge devices. To meet this need, our work introduces a new recurrent unit architecture that is specifically adapted for on-device low-power acoustic event detection (AED). The proposed architecture is based on the gated recurrent unit (`GRU') but features optimizations that make it implementable on ultra-low-power micro-controllers such as the Arm Cortex M0+. Our new architecture, the Embedded Gated Recurrent Unit (eGRU), is demonstrated to be highly efficient and suitable for short-duration AED and keyword spotting tasks. A single eGRU cell is 60x faster and 10x smaller than a GRU cell. Despite its optimizations, eGRU compares well with GRU across tasks of varying complexities. The practicality of eGRU is investigated in a wearable acoustic event detection application: an eGRU model is implemented and tested on the Arm Cortex M0-based Atmel ATSAMD21E18 processor, and it compares favorably with a full-precision GRU running on a workstation. The embedded eGRU model achieves a classification accuracy of 95.3%, which is only 2% less than that of the full-precision GRU.
versions: [ { "created": "Wed, 13 Feb 2019 17:41:15 GMT", "version": "v1" } ]
update_date: 2019-02-14
authors_parsed: [ [ "Amoh", "Justice", "" ], [ "Odame", "Kofi", "" ] ]
There is growing interest in being able to run neural networks on sensors, wearables and internet-of-things (IoT) devices. However, the computational demands of neural networks make them difficult to deploy on resource-constrained edge devices. To meet this need, our work introduces a new recurrent unit architecture that is specifically adapted for on-device low power acoustic event detection (AED). The proposed architecture is based on the gated recurrent unit (`GRU') but features optimizations that make it implementable on ultra-low power micro-controllers such as the Arm Cortex M0+. Our new architecture, the Embedded Gated Recurrent Unit (eGRU), is demonstrated to be highly efficient and suitable for short-duration AED and keyword spotting tasks. A single eGRU cell is 60x faster and 10x smaller than a GRU cell. Despite its optimizations, eGRU compares well with GRU across tasks of varying complexities. The practicality of eGRU is investigated in a wearable acoustic event detection application. An eGRU model is implemented and tested on the Arm Cortex M0-based Atmel ATSAMD21E18 processor. The Arm M0+ implementation of the eGRU model compares favorably with a full precision GRU that is running on a workstation. The embedded eGRU model achieves a classification accuracy of 95.3%, which is only 2% less than the full precision GRU.
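The eGRU record above describes a GRU variant optimized for micro-controllers. As a loose illustration of the kind of simplification involved (softsign in place of sigmoid/tanh to avoid exp(), and the reset gate dropped), not the paper's exact eGRU design:

```python
import numpy as np

def softsign(x):
    # Cheap, division-based squashing function; no exp() call, so it is
    # attractive on MCUs without fast transcendental support.
    return x / (1.0 + np.abs(x))

def simplified_gru_step(x, h, Wz, Uz, Wh, Uh):
    """One step of a GRU-like cell with the reset gate removed and softsign
    used for both the gate and the candidate state. Illustrative only; the
    actual eGRU optimizations are specified in the paper."""
    z = 0.5 * (softsign(Wz @ x + Uz @ h) + 1.0)   # update gate in (0, 1)
    h_tilde = softsign(Wh @ x + Uh @ h)           # candidate state in (-1, 1)
    return (1.0 - z) * h + z * h_tilde            # convex combination

rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
Wz, Wh = rng.normal(size=(2, n_hid, n_in))
Uz, Uh = rng.normal(size=(2, n_hid, n_hid))
h = np.zeros(n_hid)
for _ in range(5):
    h = simplified_gru_step(rng.normal(size=n_in), h, Wz, Uz, Wh, Uh)
assert h.shape == (n_hid,)
assert np.all(np.abs(h) < 1.0)  # state stays bounded by construction
```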
2212.07114
Yuchao Chen
Yuchao Chen, Jintao Wang, Xiaoqing Wang, and Jian Song
Age of Information Optimization in Multi-Channel Network with Sided Information
null
null
null
null
cs.IT cs.SI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a discrete-time multi-channel network where the destination collects time-sensitive packets from multiple sources with sided channel information. The popular metric, Age of Information (AoI), is applied to measure the data freshness at the destination. Due to the interference constraint, only disjoint source-channel pairs can be chosen for transmission in each time slot, and the decision maker should choose the optimal scheduling pairs to minimize the average AoI at the destination. To learn the optimal channel selection, we apply the linear contextual bandit (LCB) framework by utilizing the sided information provided by pilots. Concretely, we establish the relationship between AoI regret and sub-optimal channel selection times and propose both age-independent and age-dependent algorithms. The former method is proven to achieve the sub-linear AoI regret but is outperformed by the latter algorithm both in the linear and non-linear contextual model in simulation.
[ { "created": "Wed, 14 Dec 2022 09:06:05 GMT", "version": "v1" } ]
2022-12-15
[ [ "Chen", "Yuchao", "" ], [ "Wang", "Jintao", "" ], [ "Wang", "Xiaoqing", "" ], [ "Song", "Jian", "" ] ]
We consider a discrete-time multi-channel network where the destination collects time-sensitive packets from multiple sources with sided channel information. The popular metric, Age of Information (AoI), is applied to measure the data freshness at the destination. Due to the interference constraint, only disjoint source-channel pairs can be chosen for transmission in each time slot, and the decision maker should choose the optimal scheduling pairs to minimize the average AoI at the destination. To learn the optimal channel selection, we apply the linear contextual bandit (LCB) framework by utilizing the sided information provided by pilots. Concretely, we establish the relationship between AoI regret and sub-optimal channel selection times and propose both age-independent and age-dependent algorithms. The former method is proven to achieve the sub-linear AoI regret but is outperformed by the latter algorithm both in the linear and non-linear contextual model in simulation.
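The Age of Information metric used in the record above has a simple discrete-time recursion: the age grows by one each slot and resets when a fresh packet is delivered. A minimal sketch under one common convention (reset to 1 on delivery; the paper's exact model may differ):

```python
def average_aoi(deliveries):
    """Average Age of Information over a slotted horizon.
    deliveries[t] is True iff a fresh packet is delivered in slot t;
    the age resets to 1 on delivery and grows by 1 otherwise."""
    age, total = 1, 0
    for delivered in deliveries:
        age = 1 if delivered else age + 1
        total += age
    return total / len(deliveries)

# Deliver every other slot: ages alternate 1, 2, ... -> average 1.5
assert average_aoi([True, False] * 10) == 1.5
# Never deliver over 4 slots: ages are 2, 3, 4, 5 -> average 3.5
assert average_aoi([False] * 4) == 3.5
```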
2307.06845
Vincent Schorp
Vincent Schorp, Will Panitch, Kaushik Shivakumar, Vainavi Viswanath, Justin Kerr, Yahav Avigal, Danyal M Fer, Lionel Ott, Ken Goldberg
Self-Supervised Learning for Interactive Perception of Surgical Thread for Autonomous Suture Tail-Shortening
International Conference on Automation Science and Engineering (CASE) 2023, 7 pages
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate 3D sensing of suturing thread is a challenging problem in automated surgical suturing because of the high state-space complexity, thinness and deformability of the thread, and possibility of occlusion by the grippers and tissue. In this work we present a method for tracking surgical thread in 3D which is robust to occlusions and complex thread configurations, and apply it to autonomously perform the surgical suture "tail-shortening" task: pulling thread through tissue until a desired "tail" length remains exposed. The method utilizes a learned 2D surgical thread detection network to segment suturing thread in RGB images. It then identifies the thread path in 2D and reconstructs the thread in 3D as a NURBS spline by triangulating the detections from two stereo cameras. Once a 3D thread model is initialized, the method tracks the thread across subsequent frames. Experiments suggest the method achieves a 1.33 pixel average reprojection error on challenging single-frame 3D thread reconstructions, and a 0.84 pixel average reprojection error on two tracking sequences. On the tail-shortening task, it accomplishes a 90% success rate across 20 trials. Supplemental materials are available at https://sites.google.com/berkeley.edu/autolab-surgical-thread/ .
[ { "created": "Thu, 13 Jul 2023 16:08:03 GMT", "version": "v1" } ]
2023-07-14
[ [ "Schorp", "Vincent", "" ], [ "Panitch", "Will", "" ], [ "Shivakumar", "Kaushik", "" ], [ "Viswanath", "Vainavi", "" ], [ "Kerr", "Justin", "" ], [ "Avigal", "Yahav", "" ], [ "Fer", "Danyal M", "" ], [ "Ott", "Lionel", "" ], [ "Goldberg", "Ken", "" ] ]
Accurate 3D sensing of suturing thread is a challenging problem in automated surgical suturing because of the high state-space complexity, thinness and deformability of the thread, and possibility of occlusion by the grippers and tissue. In this work we present a method for tracking surgical thread in 3D which is robust to occlusions and complex thread configurations, and apply it to autonomously perform the surgical suture "tail-shortening" task: pulling thread through tissue until a desired "tail" length remains exposed. The method utilizes a learned 2D surgical thread detection network to segment suturing thread in RGB images. It then identifies the thread path in 2D and reconstructs the thread in 3D as a NURBS spline by triangulating the detections from two stereo cameras. Once a 3D thread model is initialized, the method tracks the thread across subsequent frames. Experiments suggest the method achieves a 1.33 pixel average reprojection error on challenging single-frame 3D thread reconstructions, and a 0.84 pixel average reprojection error on two tracking sequences. On the tail-shortening task, it accomplishes a 90% success rate across 20 trials. Supplemental materials are available at https://sites.google.com/berkeley.edu/autolab-surgical-thread/ .
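The record above triangulates thread detections from two stereo cameras. The standard building block is linear (DLT) triangulation of a point from two views; a self-contained sketch with hypothetical rectified cameras (the paper's full pipeline fits a NURBS spline on top of such points):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two camera views.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are pixel coordinates."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space direction = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

# Two hypothetical rectified cameras separated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_hat, X_true, atol=1e-8)  # exact recovery with noise-free pixels
```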
2007.05549
Alexander Irpan
Janarthanan Rajendran, Alex Irpan, Eric Jang
Meta-Learning Requires Meta-Augmentation
14 pages, 8 figures. NeurIPS 2020 camera ready. Code at https://github.com/google-research/google-research/tree/master/meta_augmentation
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task. This additional level of learning can be powerful, but it also creates another potential source for overfitting, since we can now overfit in either the model or the base learner. We describe both of these forms of meta-learning overfitting, and demonstrate that they appear experimentally in common meta-learning benchmarks. We then use an information-theoretic framework to discuss meta-augmentation, a way to add randomness that discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques.
[ { "created": "Fri, 10 Jul 2020 18:04:04 GMT", "version": "v1" }, { "created": "Wed, 4 Nov 2020 00:03:33 GMT", "version": "v2" } ]
2020-11-05
[ [ "Rajendran", "Janarthanan", "" ], [ "Irpan", "Alex", "" ], [ "Jang", "Eric", "" ] ]
Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task. This additional level of learning can be powerful, but it also creates another potential source for overfitting, since we can now overfit in either the model or the base learner. We describe both of these forms of meta-learning overfitting, and demonstrate that they appear experimentally in common meta-learning benchmarks. We then use an information-theoretic framework to discuss meta-augmentation, a way to add randomness that discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques.
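One simple instance of the "add randomness" idea in the record above is to randomly permute class indices within each few-shot episode, so the learner cannot memorize a fixed class-to-label mapping and must use the support examples. A minimal sketch (the paper's precise augmentation scheme may differ):

```python
import numpy as np

def permute_episode_labels(y, num_classes, rng):
    """Re-assign class indices within one episode via a random permutation.
    The partition of examples into classes is preserved; only the names change."""
    perm = rng.permutation(num_classes)
    return perm[y]

rng = np.random.default_rng(0)
y = np.array([0, 0, 1, 2, 2, 1])
y_aug = permute_episode_labels(y, num_classes=3, rng=rng)

# Same grouping of examples, possibly different label names:
assert (y == y[0]).tolist() == (y_aug == y_aug[0]).tolist()
assert set(y_aug.tolist()) <= {0, 1, 2}
```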
1904.13080
Qi Wang
Yuan Yuan and Dong Wang and Qi Wang
Memory-Augmented Temporal Dynamic Learning for Action Recognition
The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human actions captured in video sequences contain two crucial factors for action recognition, i.e., visual appearance and motion dynamics. To model these two aspects, Convolutional and Recurrent Neural Networks (CNNs and RNNs) are adopted in most existing successful methods for recognizing actions. However, CNN based methods are limited in modeling long-term motion dynamics. RNNs are able to learn temporal motion dynamics but lack effective ways to tackle unsteady dynamics in long-duration motion. In this work, we propose a memory-augmented temporal dynamic learning network, which learns to write the most evident information into an external memory module and ignore irrelevant ones. In particular, we present a differential memory controller to make a discrete decision on whether the external memory module should be updated with the current feature. The discrete memory controller takes in the memory history, context embedding and current feature as inputs and controls information flow into the external memory module. Additionally, we train this discrete memory controller using the straight-through estimator. We evaluate this end-to-end system on benchmark datasets (UCF101 and HMDB51) of human action recognition. The experimental results show consistent improvements on both datasets over prior works and our baselines.
[ { "created": "Tue, 30 Apr 2019 07:19:50 GMT", "version": "v1" } ]
2019-05-01
[ [ "Yuan", "Yuan", "" ], [ "Wang", "Dong", "" ], [ "Wang", "Qi", "" ] ]
Human actions captured in video sequences contain two crucial factors for action recognition, i.e., visual appearance and motion dynamics. To model these two aspects, Convolutional and Recurrent Neural Networks (CNNs and RNNs) are adopted in most existing successful methods for recognizing actions. However, CNN based methods are limited in modeling long-term motion dynamics. RNNs are able to learn temporal motion dynamics but lack effective ways to tackle unsteady dynamics in long-duration motion. In this work, we propose a memory-augmented temporal dynamic learning network, which learns to write the most evident information into an external memory module and ignore irrelevant ones. In particular, we present a differential memory controller to make a discrete decision on whether the external memory module should be updated with the current feature. The discrete memory controller takes in the memory history, context embedding and current feature as inputs and controls information flow into the external memory module. Additionally, we train this discrete memory controller using the straight-through estimator. We evaluate this end-to-end system on benchmark datasets (UCF101 and HMDB51) of human action recognition. The experimental results show consistent improvements on both datasets over prior works and our baselines.
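The straight-through estimator mentioned in the record above makes a hard, non-differentiable decision on the forward pass but pretends it was the identity on the backward pass. A minimal numpy sketch of the two passes (framework code such as the paper's would wire this into autograd):

```python
import numpy as np

def ste_forward(logits):
    """Hard binary decision on the forward pass (e.g. 'write to memory?')."""
    return (logits > 0.0).astype(float)

def ste_backward(grad_output):
    """Straight-through backward pass: treat the threshold as the identity
    and pass the incoming gradient through unchanged."""
    return grad_output

logits = np.array([-1.5, 0.2, 3.0])
assert ste_forward(logits).tolist() == [0.0, 1.0, 1.0]

g = np.array([0.1, -0.4, 0.7])
assert np.array_equal(ste_backward(g), g)  # gradient flows despite the hard step
```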
2404.18191
Yihao Zhang
Chen Cheng, Xinzhi Yu, Haodong Wen, Jingsong Sun, Guanzhang Yue, Yihao Zhang, Zeming Wei
Exploring the Robustness of In-Context Learning with Noisy Labels
ICLR 2024 Workshop on Reliable and Responsible Foundation Models
null
null
null
cs.CL cs.AI cs.CR cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the mysterious In-Context Learning (ICL) ability exhibited by Transformer architectures, especially in large language models (LLMs), has sparked significant research interest. However, the resilience of Transformers' in-context learning capabilities in the presence of noisy samples, prevalent in both training corpora and prompt demonstrations, remains underexplored. In this paper, inspired by prior research that studies ICL ability using simple function classes, we take a closer look at this problem by investigating the robustness of Transformers against noisy labels. Specifically, we first conduct a thorough evaluation and analysis of the robustness of Transformers against noisy labels during in-context learning and show that they exhibit notable resilience against diverse types of noise in demonstration labels. Furthermore, we delve deeper into this problem by exploring whether introducing noise into the training set, akin to a form of data augmentation, enhances such robustness during inference, and find that such noise can indeed improve the robustness of ICL. Overall, our fruitful analysis and findings provide a comprehensive understanding of the resilience of Transformer models against label noises during ICL and provide valuable insights into the research on Transformers in natural language processing. Our code is available at https://github.com/InezYu0928/in-context-learning.
[ { "created": "Sun, 28 Apr 2024 14:05:23 GMT", "version": "v1" }, { "created": "Wed, 1 May 2024 09:15:16 GMT", "version": "v2" } ]
2024-05-02
[ [ "Cheng", "Chen", "" ], [ "Yu", "Xinzhi", "" ], [ "Wen", "Haodong", "" ], [ "Sun", "Jingsong", "" ], [ "Yue", "Guanzhang", "" ], [ "Zhang", "Yihao", "" ], [ "Wei", "Zeming", "" ] ]
Recently, the mysterious In-Context Learning (ICL) ability exhibited by Transformer architectures, especially in large language models (LLMs), has sparked significant research interest. However, the resilience of Transformers' in-context learning capabilities in the presence of noisy samples, prevalent in both training corpora and prompt demonstrations, remains underexplored. In this paper, inspired by prior research that studies ICL ability using simple function classes, we take a closer look at this problem by investigating the robustness of Transformers against noisy labels. Specifically, we first conduct a thorough evaluation and analysis of the robustness of Transformers against noisy labels during in-context learning and show that they exhibit notable resilience against diverse types of noise in demonstration labels. Furthermore, we delve deeper into this problem by exploring whether introducing noise into the training set, akin to a form of data augmentation, enhances such robustness during inference, and find that such noise can indeed improve the robustness of ICL. Overall, our fruitful analysis and findings provide a comprehensive understanding of the resilience of Transformer models against label noises during ICL and provide valuable insights into the research on Transformers in natural language processing. Our code is available at https://github.com/InezYu0928/in-context-learning.
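The record above studies ICL under noisy demonstration labels. The standard way to generate symmetric label noise for such experiments is to flip a fixed fraction of labels to a different uniformly chosen class; a minimal sketch (function name and rates are illustrative, not from the paper):

```python
import numpy as np

def flip_labels(y, noise_rate, num_classes, rng):
    """Replace a noise_rate fraction of labels with a different,
    uniformly chosen class (symmetric label noise)."""
    y = y.copy()
    n_flip = int(round(noise_rate * len(y)))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # A nonzero offset mod num_classes guarantees the new label differs.
    offsets = rng.integers(1, num_classes, size=n_flip)
    y[idx] = (y[idx] + offsets) % num_classes
    return y, idx

rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=100)
y_noisy, idx = flip_labels(y, noise_rate=0.2, num_classes=5, rng=rng)

assert len(idx) == 20
assert np.all(y_noisy[idx] != y[idx])          # flipped labels really changed
mask = np.ones(100, dtype=bool); mask[idx] = False
assert np.array_equal(y_noisy[mask], y[mask])  # the rest are untouched
```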
1708.04352
Peter Henderson
Peter Henderson, Wei-Di Chang, Florian Shkurti, Johanna Hansen, David Meger, Gregory Dudek
Benchmark Environments for Multitask Learning in Continuous Domains
Accepted at Lifelong Learning: A Reinforcement Learning Approach Workshop @ ICML, Sydney, Australia, 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As demand drives systems to generalize to various domains and problems, the study of multitask, transfer and lifelong learning has become an increasingly important pursuit. In discrete domains, performance on the Atari game suite has emerged as the de facto benchmark for assessing multitask learning. However, in continuous domains there is a lack of agreement on standard multitask evaluation environments which makes it difficult to compare different approaches fairly. In this work, we describe a benchmark set of tasks that we have developed in an extendable framework based on OpenAI Gym. We run a simple baseline using Trust Region Policy Optimization and release the framework publicly to be expanded and used for the systematic comparison of multitask, transfer, and lifelong learning in continuous domains.
[ { "created": "Mon, 14 Aug 2017 22:55:03 GMT", "version": "v1" } ]
2017-08-16
[ [ "Henderson", "Peter", "" ], [ "Chang", "Wei-Di", "" ], [ "Shkurti", "Florian", "" ], [ "Hansen", "Johanna", "" ], [ "Meger", "David", "" ], [ "Dudek", "Gregory", "" ] ]
As demand drives systems to generalize to various domains and problems, the study of multitask, transfer and lifelong learning has become an increasingly important pursuit. In discrete domains, performance on the Atari game suite has emerged as the de facto benchmark for assessing multitask learning. However, in continuous domains there is a lack of agreement on standard multitask evaluation environments which makes it difficult to compare different approaches fairly. In this work, we describe a benchmark set of tasks that we have developed in an extendable framework based on OpenAI Gym. We run a simple baseline using Trust Region Policy Optimization and release the framework publicly to be expanded and used for the systematic comparison of multitask, transfer, and lifelong learning in continuous domains.
2309.03084
Qi Ju
Ju Qi, Ting Feng, Falun Hei, Zhemei Fang, Yunfeng Luo
Pure Monte Carlo Counterfactual Regret Minimization
null
null
null
null
cs.AI cs.GT cs.LG
http://creativecommons.org/licenses/by/4.0/
Counterfactual Regret Minimization (CFR) and its variants are the best algorithms so far for solving large-scale incomplete information games. However, we believe that there are two problems with CFR: First, matrix multiplication is required in CFR iteration, and the time complexity of one iteration is too high; Secondly, the game characteristics in the real world are different. Just using one CFR algorithm will not be perfectly suitable for all game problems. For these two problems, this paper proposes a new algorithm called Pure CFR (PCFR) based on CFR. PCFR can be seen as a combination of CFR and Fictitious Play (FP), inheriting the concept of counterfactual regret (value) from CFR, and using the best response strategy instead of the regret matching strategy for the next iteration. This algorithm has three advantages. First, PCFR can be combined with any CFR variant. The resulting Pure MCCFR (PMCCFR) can significantly reduce the time and space complexity of one iteration. Secondly, our experiments show that the convergence speed of the PMCCFR is 2$\sim$3 times that of the MCCFR. Finally, there is a type of game that is very suitable for PCFR. We call this type of game clear-game, which is characterized by a high proportion of dominated strategies. Experiments show that in clear-game, the convergence rate of PMCCFR is two orders of magnitude higher than that of MCCFR.
[ { "created": "Mon, 4 Sep 2023 09:16:49 GMT", "version": "v1" }, { "created": "Thu, 12 Oct 2023 06:24:33 GMT", "version": "v2" }, { "created": "Fri, 13 Oct 2023 06:01:17 GMT", "version": "v3" } ]
2023-10-16
[ [ "Qi", "Ju", "" ], [ "Feng", "Ting", "" ], [ "Hei", "Falun", "" ], [ "Fang", "Zhemei", "" ], [ "Luo", "Yunfeng", "" ] ]
Counterfactual Regret Minimization (CFR) and its variants are the best algorithms so far for solving large-scale incomplete information games. However, we believe that there are two problems with CFR: First, matrix multiplication is required in CFR iteration, and the time complexity of one iteration is too high; Secondly, the game characteristics in the real world are different. Just using one CFR algorithm will not be perfectly suitable for all game problems. For these two problems, this paper proposes a new algorithm called Pure CFR (PCFR) based on CFR. PCFR can be seen as a combination of CFR and Fictitious Play (FP), inheriting the concept of counterfactual regret (value) from CFR, and using the best response strategy instead of the regret matching strategy for the next iteration. This algorithm has three advantages. First, PCFR can be combined with any CFR variant. The resulting Pure MCCFR (PMCCFR) can significantly reduce the time and space complexity of one iteration. Secondly, our experiments show that the convergence speed of the PMCCFR is 2$\sim$3 times that of the MCCFR. Finally, there is a type of game that is very suitable for PCFR. We call this type of game clear-game, which is characterized by a high proportion of dominated strategies. Experiments show that in clear-game, the convergence rate of PMCCFR is two orders of magnitude higher than that of MCCFR.
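The PCFR record above swaps CFR's regret-matching update for a best-response-style update. A schematic contrast of the two next-iteration strategies over one information set's regret vector (the paper's precise definition of the best-response update may differ):

```python
import numpy as np

def regret_matching(regrets):
    """Standard CFR next-iteration strategy: normalize the positive regrets
    (uniform when no action has positive regret)."""
    pos = np.maximum(regrets, 0.0)
    s = pos.sum()
    return pos / s if s > 0 else np.full(len(regrets), 1.0 / len(regrets))

def best_response_to_regret(regrets):
    """PCFR-style update as sketched in the abstract: put all mass on the
    single action with the highest counterfactual regret."""
    strategy = np.zeros(len(regrets))
    strategy[int(np.argmax(regrets))] = 1.0
    return strategy

r = np.array([2.0, -1.0, 1.0])
assert np.allclose(regret_matching(r), [2/3, 0.0, 1/3])   # mixed strategy
assert best_response_to_regret(r).tolist() == [1.0, 0.0, 0.0]  # pure strategy
```

The pure strategy avoids the normalization over all actions and keeps the iterate sparse, which is one source of the per-iteration savings the abstract claims.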
2004.13891
Jakub Tarnawski
Janardhan Kulkarni, Shi Li, Jakub Tarnawski, Minwei Ye
Hierarchy-Based Algorithms for Minimizing Makespan under Precedence and Communication Constraints
null
Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA), 2020, pages 2770-2789
10.1137/1.9781611975994.169
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the classic problem of scheduling jobs with precedence constraints on a set of identical machines to minimize the makespan objective function. Understanding the exact approximability of the problem when the number of machines is a constant is a well-known question in scheduling theory. Indeed, an outstanding open problem from the classic book of Garey and Johnson asks whether this problem is NP-hard even in the case of 3 machines and unit-length jobs. In a recent breakthrough, Levey and Rothvoss gave a $(1+\epsilon)$-approximation algorithm, which runs in nearly quasi-polynomial time, for the case when jobs have unit lengths. However, a substantially more difficult case where jobs have arbitrary processing lengths has remained open. We make progress on this more general problem. We show that there exists a $(1+\epsilon)$-approximation algorithm (with similar running time as that of Levey and Rothvoss) for the non-migratory setting: when every job has to be scheduled entirely on a single machine, but within a machine the job need not be scheduled during consecutive time steps. Further, we also show that our algorithmic framework generalizes to another classic scenario where, along with the precedence constraints, the jobs also have communication delay constraints. Both of these fundamental problems are highly relevant to the practice of datacenter scheduling.
[ { "created": "Tue, 28 Apr 2020 23:28:59 GMT", "version": "v1" } ]
2020-04-30
[ [ "Kulkarni", "Janardhan", "" ], [ "Li", "Shi", "" ], [ "Tarnawski", "Jakub", "" ], [ "Ye", "Minwei", "" ] ]
We consider the classic problem of scheduling jobs with precedence constraints on a set of identical machines to minimize the makespan objective function. Understanding the exact approximability of the problem when the number of machines is a constant is a well-known question in scheduling theory. Indeed, an outstanding open problem from the classic book of Garey and Johnson asks whether this problem is NP-hard even in the case of 3 machines and unit-length jobs. In a recent breakthrough, Levey and Rothvoss gave a $(1+\epsilon)$-approximation algorithm, which runs in nearly quasi-polynomial time, for the case when jobs have unit lengths. However, a substantially more difficult case where jobs have arbitrary processing lengths has remained open. We make progress on this more general problem. We show that there exists a $(1+\epsilon)$-approximation algorithm (with similar running time as that of Levey and Rothvoss) for the non-migratory setting: when every job has to be scheduled entirely on a single machine, but within a machine the job need not be scheduled during consecutive time steps. Further, we also show that our algorithmic framework generalizes to another classic scenario where, along with the precedence constraints, the jobs also have communication delay constraints. Both of these fundamental problems are highly relevant to the practice of datacenter scheduling.
2111.12217
Aida Sheshbolouki
Aida Sheshbolouki, M. Tamer \"Ozsu
Scale-Invariant Strength Assortativity of Streaming Butterflies
Submitted for publication
null
null
null
cs.DS cs.DB cs.DM cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bipartite graphs are rich data structures with prevalent applications and identifier structural features. However, less is known about their growth patterns, particularly in streaming settings. Current works study the patterns of static or aggregated temporal graphs optimized for certain down-stream analytics or ignoring multipartite/non-stationary data distributions, emergence patterns of subgraphs, and streaming paradigms. To address these, we perform statistical network analysis over web log streams and identify the governing patterns underlying the bursty emergence of mesoscopic building blocks, 2,2-bicliques known as butterflies, leading to a phenomenon that we call "scale-invariant strength assortativity of streaming butterflies". We provide the graph-theoretic explanation of this phenomenon. We further introduce a set of micro-mechanics in the body of a streaming growth algorithm, sGrow, to pinpoint the generative origins. sGrow supports streaming paradigms, emergence of 4-vertex graphlets, and provides user-specified configurations for the scale, burstiness, level of strength assortativity, probability of out-of-order records, generation time, and time-sensitive connections. Comprehensive evaluations on pattern reproducing and stress testing validate the effectiveness, efficiency, and robustness of sGrow in realization of the observed patterns independent of initial conditions, scale, temporal characteristics, and model configurations. Theoretical and experimental analysis verify the robust ability of sGrow in generating streaming graphs based on user-specified configurations that affect the scale and burstiness of the stream, level of strength assortativity, probability of out-of-order streaming records, generation time, and time-sensitive connections.
[ { "created": "Wed, 24 Nov 2021 01:24:39 GMT", "version": "v1" } ]
2021-11-25
[ [ "Sheshbolouki", "Aida", "" ], [ "Özsu", "M. Tamer", "" ] ]
Bipartite graphs are rich data structures with prevalent applications and identifier structural features. However, less is known about their growth patterns, particularly in streaming settings. Current works study the patterns of static or aggregated temporal graphs optimized for certain down-stream analytics or ignoring multipartite/non-stationary data distributions, emergence patterns of subgraphs, and streaming paradigms. To address these, we perform statistical network analysis over web log streams and identify the governing patterns underlying the bursty emergence of mesoscopic building blocks, 2,2-bicliques known as butterflies, leading to a phenomenon that we call "scale-invariant strength assortativity of streaming butterflies". We provide the graph-theoretic explanation of this phenomenon. We further introduce a set of micro-mechanics in the body of a streaming growth algorithm, sGrow, to pinpoint the generative origins. sGrow supports streaming paradigms, emergence of 4-vertex graphlets, and provides user-specified configurations for the scale, burstiness, level of strength assortativity, probability of out-of-order records, generation time, and time-sensitive connections. Comprehensive evaluations on pattern reproducing and stress testing validate the effectiveness, efficiency, and robustness of sGrow in realization of the observed patterns independent of initial conditions, scale, temporal characteristics, and model configurations. Theoretical and experimental analysis verify the robust ability of sGrow in generating streaming graphs based on user-specified configurations that affect the scale and burstiness of the stream, level of strength assortativity, probability of out-of-order streaming records, generation time, and time-sensitive connections.
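The butterflies in the record above are 2,2-bicliques in a bipartite graph. A straightforward (non-streaming) way to count them is to sum, over every pair of left vertices, the number of ways to choose 2 common right neighbours; a minimal sketch:

```python
from itertools import combinations
from math import comb

def count_butterflies(edges):
    """Count 2,2-bicliques (butterflies) in a bipartite graph given as
    (left, right) edge pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    total = 0
    for u1, u2 in combinations(sorted(adj), 2):
        # Each pair of shared right neighbours closes one butterfly.
        total += comb(len(adj[u1] & adj[u2]), 2)
    return total

# K_{2,2} is itself one butterfly; K_{2,3} contains C(3,2) = 3.
k22 = [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]
k23 = k22 + [(0, 'c'), (1, 'c')]
assert count_butterflies(k22) == 1
assert count_butterflies(k23) == 3
```

This batch counter is quadratic in the number of left vertices; the streaming setting studied in the paper is precisely about tracking such counts incrementally.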
2103.04854
Mohammadhossein Bahari
Mohammadhossein Bahari, Ismail Nejjar, Alexandre Alahi
Injecting Knowledge in Data-driven Vehicle Trajectory Predictors
Published in Transportation Research: Part C
Transportation Research Part C: Emerging Technologies, 2021
10.1016/j.trc.2021.103010
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle trajectory prediction tasks have been commonly tackled from two distinct perspectives: either with knowledge-driven methods or more recently with data-driven ones. On the one hand, we can explicitly implement domain-knowledge or physical priors such as anticipating that vehicles will follow the middle of the roads. While this perspective leads to feasible outputs, it has limited performance due to the difficulty to hand-craft complex interactions in urban environments. On the other hand, recent works use data-driven approaches which can learn complex interactions from the data leading to superior performance. However, generalization, \textit{i.e.}, having accurate predictions on unseen data, is an issue leading to unrealistic outputs. In this paper, we propose to learn a "Realistic Residual Block" (RRB), which effectively connects these two perspectives. Our RRB takes any off-the-shelf knowledge-driven model and finds the required residuals to add to the knowledge-aware trajectory. Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty. We also constrain our output with Model Predictive Control (MPC) to satisfy kinematic constraints. Using a publicly available dataset, we show that our method outperforms previous works in terms of accuracy and generalization to new scenes. We will release our code and data split here: https://github.com/vita-epfl/RRB.
[ { "created": "Mon, 8 Mar 2021 16:03:09 GMT", "version": "v1" }, { "created": "Fri, 4 Mar 2022 11:22:45 GMT", "version": "v2" } ]
2022-03-07
[ [ "Bahari", "Mohammadhossein", "" ], [ "Nejjar", "Ismail", "" ], [ "Alahi", "Alexandre", "" ] ]
Vehicle trajectory prediction tasks have been commonly tackled from two distinct perspectives: either with knowledge-driven methods or more recently with data-driven ones. On the one hand, we can explicitly implement domain-knowledge or physical priors such as anticipating that vehicles will follow the middle of the roads. While this perspective leads to feasible outputs, it has limited performance due to the difficulty to hand-craft complex interactions in urban environments. On the other hand, recent works use data-driven approaches which can learn complex interactions from the data leading to superior performance. However, generalization, \textit{i.e.}, having accurate predictions on unseen data, is an issue leading to unrealistic outputs. In this paper, we propose to learn a "Realistic Residual Block" (RRB), which effectively connects these two perspectives. Our RRB takes any off-the-shelf knowledge-driven model and finds the required residuals to add to the knowledge-aware trajectory. Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty. We also constrain our output with Model Predictive Control (MPC) to satisfy kinematic constraints. Using a publicly available dataset, we show that our method outperforms previous works in terms of accuracy and generalization to new scenes. We will release our code and data split here: https://github.com/vita-epfl/RRB.
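The RRB record above confines a learned residual added to a knowledge-driven trajectory. A minimal sketch of just the range-confinement step (illustrative only: the paper additionally weights the residual by its predicted uncertainty and enforces kinematic constraints via MPC; all values below are hypothetical):

```python
import numpy as np

def apply_confined_residual(knowledge_traj, residual, max_residual):
    """Add a learned residual to a knowledge-driven trajectory while
    clipping it to a bounded range, so the output cannot stray far
    from the feasible prior."""
    return knowledge_traj + np.clip(residual, -max_residual, max_residual)

centerline = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # e.g. road middle
residual = np.array([[0.1, 2.0], [-0.1, 0.3], [0.0, -0.2]])  # hypothetical net output
out = apply_confined_residual(centerline, residual, max_residual=0.5)

assert np.all(np.abs(out - centerline) <= 0.5)  # prediction stays near the prior
assert np.allclose(out[1], [0.9, 0.3])          # small residuals pass unchanged
```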
2406.18311
Xingyuan Bu
Yixin Jin, Wenjing Zhou, Meiqi Wang, Meng Li, Xintao Li, Tianyu Hu
Online Learning of Multiple Tasks and Their Relationships : Testing on Spam Email Data and EEG Signals Recorded in Construction Fields
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines an online multi-task learning (OMTL) method, which processes data sequentially to predict labels across related tasks. The framework learns task weights and their relatedness concurrently. Unlike previous models that assumed static task relatedness, our approach treats tasks as initially independent, updating their relatedness iteratively using newly calculated weight vectors. We introduce three rules to update the task relatedness matrix: OMTLCOV, OMTLLOG, and OMTLVON, and compare them against a conventional method (CMTL) that uses a fixed relatedness value. Performance evaluations on three datasets (a spam dataset and two EEG datasets from construction workers under varying conditions) demonstrate that our OMTL methods outperform CMTL, improving accuracy by 1% to 3% on EEG data and maintaining low error rates of around 12% on the spam dataset.
[ { "created": "Wed, 26 Jun 2024 12:50:13 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2024 03:49:06 GMT", "version": "v2" } ]
2024-07-10
[ [ "Jin", "Yixin", "" ], [ "Zhou", "Wenjing", "" ], [ "Wang", "Meiqi", "" ], [ "Li", "Meng", "" ], [ "Li", "Xintao", "" ], [ "Hu", "Tianyu", "" ] ]
This paper examines an online multi-task learning (OMTL) method, which processes data sequentially to predict labels across related tasks. The framework learns task weights and their relatedness concurrently. Unlike previous models that assumed static task relatedness, our approach treats tasks as initially independent, updating their relatedness iteratively using newly calculated weight vectors. We introduce three rules to update the task relatedness matrix: OMTLCOV, OMTLLOG, and OMTLVON, and compare them against a conventional method (CMTL) that uses a fixed relatedness value. Performance evaluations on three datasets (a spam dataset and two EEG datasets from construction workers under varying conditions) demonstrate that our OMTL methods outperform CMTL, improving accuracy by 1% to 3% on EEG data and maintaining low error rates of around 12% on the spam dataset.
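A covariance-style relatedness update of the kind the OMTLCOV rule name suggests could look like the sketch below; the exact update in the paper may differ, so treat the normalised inner-product formula as an assumption:

```python
def relatedness_from_weights(W):
    """Derive a task-relatedness matrix from the inner products of the
    current per-task weight vectors, normalised so the diagonal is 1
    (an illustrative, covariance-style stand-in for OMTLCOV)."""
    K = len(W)
    A = [[sum(wi * wj for wi, wj in zip(W[i], W[j])) for j in range(K)]
         for i in range(K)]
    return [[A[i][j] / (((A[i][i] * A[j][j]) ** 0.5) or 1.0)
             for j in range(K)] for i in range(K)]

# three tasks: two with orthogonal weights, one related to both
R = relatedness_from_weights([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Recomputing this matrix after each weight update is what lets task relatedness evolve instead of staying fixed as in CMTL.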
2305.15383
Emmanuel Esposito
Khaled Eldowa, Emmanuel Esposito, Tommaso Cesari, Nicol\`o Cesa-Bianchi
On the Minimax Regret for Online Learning with Feedback Graphs
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is $\mathcal{O}\bigl(\sqrt{\alpha T\ln K}\bigr)$, where $K$ is the number of actions, $\alpha$ is the independence number of the graph, and $T$ is the time horizon. The $\sqrt{\ln K}$ factor is known to be necessary when $\alpha = 1$ (the experts case). On the other hand, when $\alpha = K$ (the bandits case), the minimax rate is known to be $\Theta\bigl(\sqrt{KT}\bigr)$, and a lower bound $\Omega\bigl(\sqrt{\alpha T}\bigr)$ is known to hold for any $\alpha$. Our improved upper bound $\mathcal{O}\bigl(\sqrt{\alpha T(1+\ln(K/\alpha))}\bigr)$ holds for any $\alpha$ and matches the lower bounds for bandits and experts, while interpolating intermediate cases. To prove this result, we use FTRL with $q$-Tsallis entropy for a carefully chosen value of $q \in [1/2, 1)$ that varies with $\alpha$. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved $\Omega\bigl(\sqrt{\alpha T(\ln K)/(\ln\alpha)}\bigr)$ lower bound for all $\alpha > 1$, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as $\alpha < K$.
[ { "created": "Wed, 24 May 2023 17:40:57 GMT", "version": "v1" }, { "created": "Sat, 28 Oct 2023 14:11:51 GMT", "version": "v2" } ]
2023-10-31
[ [ "Eldowa", "Khaled", "" ], [ "Esposito", "Emmanuel", "" ], [ "Cesari", "Tommaso", "" ], [ "Cesa-Bianchi", "Nicolò", "" ] ]
In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is $\mathcal{O}\bigl(\sqrt{\alpha T\ln K}\bigr)$, where $K$ is the number of actions, $\alpha$ is the independence number of the graph, and $T$ is the time horizon. The $\sqrt{\ln K}$ factor is known to be necessary when $\alpha = 1$ (the experts case). On the other hand, when $\alpha = K$ (the bandits case), the minimax rate is known to be $\Theta\bigl(\sqrt{KT}\bigr)$, and a lower bound $\Omega\bigl(\sqrt{\alpha T}\bigr)$ is known to hold for any $\alpha$. Our improved upper bound $\mathcal{O}\bigl(\sqrt{\alpha T(1+\ln(K/\alpha))}\bigr)$ holds for any $\alpha$ and matches the lower bounds for bandits and experts, while interpolating intermediate cases. To prove this result, we use FTRL with $q$-Tsallis entropy for a carefully chosen value of $q \in [1/2, 1)$ that varies with $\alpha$. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved $\Omega\bigl(\sqrt{\alpha T(\ln K)/(\ln\alpha)}\bigr)$ lower bound for all $\alpha > 1$, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as $\alpha < K$.
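For the $q = 1/2$ case, the FTRL step with Tsallis entropy has a well-known closed form, $p_i = 4/(\eta (L_i - x))^2$ with $x$ a normalisation constant; the sketch below solves for $x$ by bisection. The learning rate and solver settings are illustrative choices, not the paper's tuned values:

```python
def tsallis_probs(losses, eta=1.0, iters=200):
    """Played distribution of FTRL with 1/2-Tsallis entropy given
    cumulative (estimated) losses: p_i = 4 / (eta * (L_i - x))^2,
    with the normaliser x < min(losses) found by bisection."""
    lo = min(losses) - 4.0 * len(losses) / eta - 1.0  # probabilities sum to < 1 here
    hi = min(losses) - 1e-12                          # sum blows up as x -> min(L)
    for _ in range(iters):
        x = (lo + hi) / 2.0
        s = sum(4.0 / (eta * (L - x)) ** 2 for L in losses)
        if s > 1.0:
            hi = x
        else:
            lo = x
    return [4.0 / (eta * (L - x)) ** 2 for L in losses]

probs = tsallis_probs([0.0, 0.0, 3.0])
```

As expected, the two actions with equal losses get equal probability, and the lossier action gets less.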
2401.03142
Liangtao Shi
Liangtao Shi, Bineng Zhong, Qihua Liang, Ning Li, Shengping Zhang, Xianxian Li
Explicit Visual Prompts for Visual Object Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How to effectively exploit spatio-temporal information is crucial to capture target appearance changes in visual tracking. However, most deep learning-based trackers mainly focus on designing a complicated appearance model or template updating strategy, while lacking the exploitation of context between consecutive frames and thus entailing the \textit{when-and-how-to-update} dilemma. To address these issues, we propose a novel explicit visual prompts framework for visual tracking, dubbed \textbf{EVPTrack}. Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates. As a result, we can not only alleviate the challenge of \textit{when-to-update}, but also avoid the hyper-parameters associated with updating strategies. Then, we utilize the spatio-temporal tokens to generate explicit visual prompts that facilitate inference in the current frame. The prompts are fed into a transformer encoder together with the image tokens without additional processing. Consequently, the efficiency of our model is improved by avoiding \textit{how-to-update}. In addition, we consider multi-scale information as explicit visual prompts, providing multi-scale template features to enhance EVPTrack's ability to handle target scale changes. Extensive experimental results on six benchmarks (i.e., LaSOT, LaSOT$_{ext}$, GOT-10k, UAV123, TrackingNet, and TNL2K) validate that our EVPTrack can achieve competitive performance at real-time speed by effectively exploiting both spatio-temporal and multi-scale information. Code and models are available at https://github.com/GXNU-ZhongLab/EVPTrack.
[ { "created": "Sat, 6 Jan 2024 07:12:07 GMT", "version": "v1" } ]
2024-01-09
[ [ "Shi", "Liangtao", "" ], [ "Zhong", "Bineng", "" ], [ "Liang", "Qihua", "" ], [ "Li", "Ning", "" ], [ "Zhang", "Shengping", "" ], [ "Li", "Xianxian", "" ] ]
How to effectively exploit spatio-temporal information is crucial to capture target appearance changes in visual tracking. However, most deep learning-based trackers mainly focus on designing a complicated appearance model or template updating strategy, while lacking the exploitation of context between consecutive frames and thus entailing the \textit{when-and-how-to-update} dilemma. To address these issues, we propose a novel explicit visual prompts framework for visual tracking, dubbed \textbf{EVPTrack}. Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates. As a result, we can not only alleviate the challenge of \textit{when-to-update}, but also avoid the hyper-parameters associated with updating strategies. Then, we utilize the spatio-temporal tokens to generate explicit visual prompts that facilitate inference in the current frame. The prompts are fed into a transformer encoder together with the image tokens without additional processing. Consequently, the efficiency of our model is improved by avoiding \textit{how-to-update}. In addition, we consider multi-scale information as explicit visual prompts, providing multi-scale template features to enhance EVPTrack's ability to handle target scale changes. Extensive experimental results on six benchmarks (i.e., LaSOT, LaSOT$_{ext}$, GOT-10k, UAV123, TrackingNet, and TNL2K) validate that our EVPTrack can achieve competitive performance at real-time speed by effectively exploiting both spatio-temporal and multi-scale information. Code and models are available at https://github.com/GXNU-ZhongLab/EVPTrack.
1806.01976
YangQuan Chen Prof.
Sina Dehghan, Tiebiao Zhao, Yang Zhao, Jie Yuan, Abdullah Ates, YangQuan Chen
PID2018 Benchmark Challenge: Model Predictive Control With Conditional Integral Control Using A General Purpose Optimal Control Problem Solver - RIOTS
6 pages, 7 figures, 3rd IFAC Conference on Advances in Proportional-Integral-Derivative Control
null
null
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a multi-variable Model Predictive Control (MPC) based controller for the one-stage refrigeration cycle model described in the PID2018 Benchmark Challenge. This model represents a two-input, two-output system with strong nonlinearities and high coupling between its variables. A general-purpose optimal control problem (OCP) solver Matlab toolbox called RIOTS is used as the OCP solver for the proposed MPC scheme, which allows for straightforward implementation of the method and for solving a wide range of constrained linear and nonlinear optimal control problems. A conditional integral (CI) compensator is embedded in the controller to compensate for the small steady-state errors. This method shows significant improvements in performance compared to both the discrete decentralized control (C1) and the multi-variable PID controller (C2) originally given in the PID2018 Benchmark Challenge as baselines. Our solution is introduced in detail in this paper, and our final results using the overall relative index, $J$, are 0.2 over C1 and 0.3 over C2; in other words, we achieved an 80% improvement over C1 and a 70% improvement over C2. We expect to achieve further improvements when optimized search strategies are used for MPC and CI parameter tuning.
[ { "created": "Wed, 6 Jun 2018 01:55:02 GMT", "version": "v1" } ]
2018-06-07
[ [ "Dehghan", "Sina", "" ], [ "Zhao", "Tiebiao", "" ], [ "Zhao", "Yang", "" ], [ "Yuan", "Jie", "" ], [ "Ates", "Abdullah", "" ], [ "Chen", "YangQuan", "" ] ]
This paper presents a multi-variable Model Predictive Control (MPC) based controller for the one-stage refrigeration cycle model described in the PID2018 Benchmark Challenge. This model represents a two-input, two-output system with strong nonlinearities and high coupling between its variables. A general-purpose optimal control problem (OCP) solver Matlab toolbox called RIOTS is used as the OCP solver for the proposed MPC scheme, which allows for straightforward implementation of the method and for solving a wide range of constrained linear and nonlinear optimal control problems. A conditional integral (CI) compensator is embedded in the controller to compensate for the small steady-state errors. This method shows significant improvements in performance compared to both the discrete decentralized control (C1) and the multi-variable PID controller (C2) originally given in the PID2018 Benchmark Challenge as baselines. Our solution is introduced in detail in this paper, and our final results using the overall relative index, $J$, are 0.2 over C1 and 0.3 over C2; in other words, we achieved an 80% improvement over C1 and a 70% improvement over C2. We expect to achieve further improvements when optimized search strategies are used for MPC and CI parameter tuning.
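The conditional-integral idea, integrating the tracking error only inside a small band around the setpoint so it trims steady-state error without fighting the MPC during large transients, can be sketched as follows; the gain, band, and timestep are made-up values rather than the paper's tuning:

```python
class ConditionalIntegral:
    """Minimal conditional-integral (CI) compensator sketch."""
    def __init__(self, ki=0.1, band=0.5, dt=1.0):
        self.ki, self.band, self.dt = ki, band, dt
        self.acc = 0.0

    def update(self, error):
        if abs(error) < self.band:      # integrate only near the setpoint
            self.acc += error * self.dt
        return self.ki * self.acc       # correction added to the MPC output

ci = ConditionalIntegral()
u1 = ci.update(2.0)   # large transient error: integral stays frozen
u2 = ci.update(0.2)   # small steady-state error: integral engages
```

Freezing the integrator outside the band also acts as a simple anti-windup guard.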
2406.03820
Thai-Hoc Vu
Ons Aouedi, Thai-Hoc Vu, Alessio Sacco, Dinh C. Nguyen, Kandaraj Piamrat, Guido Marchetto, Quoc-Viet Pham
A Survey on Intelligent Internet of Things: Applications, Security, Privacy, and Future Directions
This work has been accepted by IEEE Communications Surveys & Tutorials
null
null
null
cs.NI cs.AI cs.CR cs.ET cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid advances in the Internet of Things (IoT) have promoted a revolution in communication technology and offered various customer services. Artificial intelligence (AI) techniques have been exploited to facilitate IoT operations and maximize their potential in modern application scenarios. In particular, the convergence of IoT and AI has led to a new networking paradigm called Intelligent IoT (IIoT), which has the potential to significantly transform businesses and industrial domains. This paper presents a comprehensive survey of IIoT by investigating its significant applications in mobile networks, as well as its associated security and privacy issues. Specifically, we explore and discuss the roles of IIoT in a wide range of key application domains, from smart healthcare and smart cities to smart transportation and smart industries. Through such extensive discussions, we investigate important security issues in IIoT networks, where network attacks, confidentiality, integrity, and intrusion are analyzed, along with a discussion of potential countermeasures. Privacy issues in IIoT networks are also surveyed and discussed, including data, location, and model privacy leakage. Finally, we outline several key challenges and highlight potential research directions in this important area.
[ { "created": "Thu, 6 Jun 2024 07:55:30 GMT", "version": "v1" }, { "created": "Fri, 21 Jun 2024 14:43:41 GMT", "version": "v2" } ]
2024-06-24
[ [ "Aouedi", "Ons", "" ], [ "Vu", "Thai-Hoc", "" ], [ "Sacco", "Alessio", "" ], [ "Nguyen", "Dinh C.", "" ], [ "Piamrat", "Kandaraj", "" ], [ "Marchetto", "Guido", "" ], [ "Pham", "Quoc-Viet", "" ] ]
The rapid advances in the Internet of Things (IoT) have promoted a revolution in communication technology and offered various customer services. Artificial intelligence (AI) techniques have been exploited to facilitate IoT operations and maximize their potential in modern application scenarios. In particular, the convergence of IoT and AI has led to a new networking paradigm called Intelligent IoT (IIoT), which has the potential to significantly transform businesses and industrial domains. This paper presents a comprehensive survey of IIoT by investigating its significant applications in mobile networks, as well as its associated security and privacy issues. Specifically, we explore and discuss the roles of IIoT in a wide range of key application domains, from smart healthcare and smart cities to smart transportation and smart industries. Through such extensive discussions, we investigate important security issues in IIoT networks, where network attacks, confidentiality, integrity, and intrusion are analyzed, along with a discussion of potential countermeasures. Privacy issues in IIoT networks are also surveyed and discussed, including data, location, and model privacy leakage. Finally, we outline several key challenges and highlight potential research directions in this important area.
2111.13307
Zijian Wang
Zijian Wang, Xingqun Qi, Kun Yuan, Muyi Sun
Self-supervised Correlation Mining Network for Person Image Generation
null
A modified version compared with CVPR2022 version
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Person image generation aims to perform non-rigid deformation on source images, which generally requires unaligned data pairs for training. Recently, self-supervised methods have shown great promise in this task by merging the disentangled representations for self-reconstruction. However, such methods fail to exploit the spatial correlation between the disentangled features. In this paper, we propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange the source images in the feature space, in which two collaborative modules are integrated, a Decomposed Style Encoder (DSE) and a Correlation Mining Module (CMM). Specifically, the DSE first creates unaligned pairs at the feature level. Then, the CMM establishes the spatial correlation field for feature rearrangement. Eventually, a translation module transforms the rearranged features into realistic results. Meanwhile, to improve the fidelity of cross-scale pose transformation, we propose a graph-based Body Structure Retaining Loss (BSR Loss) to preserve reasonable body structures in half-body to full-body generation. Extensive experiments conducted on the DeepFashion dataset demonstrate the superiority of our method compared with other supervised and unsupervised approaches. Furthermore, satisfactory results on face generation show the versatility of our method in other deformation tasks.
[ { "created": "Fri, 26 Nov 2021 03:57:46 GMT", "version": "v1" }, { "created": "Mon, 29 Nov 2021 08:25:03 GMT", "version": "v2" }, { "created": "Wed, 14 Dec 2022 07:27:54 GMT", "version": "v3" } ]
2022-12-15
[ [ "Wang", "Zijian", "" ], [ "Qi", "Xingqun", "" ], [ "Yuan", "Kun", "" ], [ "Sun", "Muyi", "" ] ]
Person image generation aims to perform non-rigid deformation on source images, which generally requires unaligned data pairs for training. Recently, self-supervised methods have shown great promise in this task by merging the disentangled representations for self-reconstruction. However, such methods fail to exploit the spatial correlation between the disentangled features. In this paper, we propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange the source images in the feature space, in which two collaborative modules are integrated, a Decomposed Style Encoder (DSE) and a Correlation Mining Module (CMM). Specifically, the DSE first creates unaligned pairs at the feature level. Then, the CMM establishes the spatial correlation field for feature rearrangement. Eventually, a translation module transforms the rearranged features into realistic results. Meanwhile, to improve the fidelity of cross-scale pose transformation, we propose a graph-based Body Structure Retaining Loss (BSR Loss) to preserve reasonable body structures in half-body to full-body generation. Extensive experiments conducted on the DeepFashion dataset demonstrate the superiority of our method compared with other supervised and unsupervised approaches. Furthermore, satisfactory results on face generation show the versatility of our method in other deformation tasks.
2201.08659
Mads Lindskou
Mads Lindskou, Torben Tvedebrink, Poul Svante Eriksen, S{\o}ren H{\o}jsgaard and Niels Morling
Unity Smoothing for Handling Inconsistent Evidence in Bayesian Networks and Unity Propagation for Faster Inference
null
null
null
null
cs.LG stat.CO
http://creativecommons.org/licenses/by/4.0/
We propose Unity Smoothing (US) for handling inconsistencies between a Bayesian network model and new unseen observations. We show that prediction accuracy using the junction tree algorithm with US is comparable to that of Laplace smoothing. Moreover, in applications where sparsity of the data structures is utilized, US outperforms Laplace smoothing in terms of memory usage. Furthermore, we detail how to avoid redundant calculations that must otherwise be performed during the message passing scheme in the junction tree algorithm, which we refer to as Unity Propagation (UP). Experimental results show that it is always faster to exploit UP on top of the Lauritzen-Spiegelhalter message passing scheme for the junction tree algorithm.
[ { "created": "Fri, 21 Jan 2022 12:03:45 GMT", "version": "v1" } ]
2022-01-24
[ [ "Lindskou", "Mads", "" ], [ "Tvedebrink", "Torben", "" ], [ "Eriksen", "Poul Svante", "" ], [ "Højsgaard", "Søren", "" ], [ "Morling", "Niels", "" ] ]
We propose Unity Smoothing (US) for handling inconsistencies between a Bayesian network model and new unseen observations. We show that prediction accuracy using the junction tree algorithm with US is comparable to that of Laplace smoothing. Moreover, in applications where sparsity of the data structures is utilized, US outperforms Laplace smoothing in terms of memory usage. Furthermore, we detail how to avoid redundant calculations that must otherwise be performed during the message passing scheme in the junction tree algorithm, which we refer to as Unity Propagation (UP). Experimental results show that it is always faster to exploit UP on top of the Lauritzen-Spiegelhalter message passing scheme for the junction tree algorithm.
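As a rough illustration of the contrast drawn in this abstract (the paper's actual rules are more involved, so take this purely as a guess at the flavour): Laplace smoothing perturbs every cell of a probability table, whereas a unity-style fix touches only the zero cells that would otherwise annihilate a whole message when conditioned on unseen evidence:

```python
def laplace_smooth(counts, alpha=1.0):
    """Classic Laplace smoothing: add alpha pseudo-counts everywhere."""
    total = sum(counts) + alpha * len(counts)
    return [(c + alpha) / total for c in counts]

def unity_smooth(potential):
    """Illustrative unity-style smoothing: replace zero cells by unity
    so propagation never collapses to an all-zero message."""
    return [p if p > 0.0 else 1.0 for p in potential]

lap = laplace_smooth([0, 3])
uni = unity_smooth([0.0, 0.5])
```

Note how the unity version leaves the nonzero entries (and hence any sparse representation of them) untouched, which is consistent with the memory-usage advantage claimed above.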
2401.09885
Jorge Martinez Gil Ph.D.
Jorge Martinez-Gil
Source Code Clone Detection Using Unsupervised Similarity Measures
Accepted for publication as Full Paper in the Software Quality Days 2024, Vienna, Austria
null
10.1007/978-3-031-56281-5_2
null
cs.SE cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assessing similarity in source code has gained significant attention in recent years due to its importance in software engineering tasks such as clone detection and code search and recommendation. This work presents a comparative analysis of unsupervised similarity measures for source code clone detection. The goal is to provide an overview of the current state-of-the-art techniques, their strengths, and their weaknesses. To do that, we compile the existing unsupervised strategies and evaluate their performance on a benchmark dataset to guide software engineers in selecting appropriate methods for their specific use cases. The source code of this study is available at https://github.com/jorge-martinez-gil/codesim
[ { "created": "Thu, 18 Jan 2024 10:56:27 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 07:23:04 GMT", "version": "v2" }, { "created": "Tue, 6 Feb 2024 15:09:13 GMT", "version": "v3" } ]
2024-08-13
[ [ "Martinez-Gil", "Jorge", "" ] ]
Assessing similarity in source code has gained significant attention in recent years due to its importance in software engineering tasks such as clone detection and code search and recommendation. This work presents a comparative analysis of unsupervised similarity measures for source code clone detection. The goal is to provide an overview of the current state-of-the-art techniques, their strengths, and their weaknesses. To do that, we compile the existing unsupervised strategies and evaluate their performance on a benchmark dataset to guide software engineers in selecting appropriate methods for their specific use cases. The source code of this study is available at https://github.com/jorge-martinez-gil/codesim
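A minimal example of the kind of unsupervised measure such a survey covers is Jaccard overlap of token sets; the whitespace tokeniser and the snippets below are purely illustrative, not taken from the study:

```python
def token_jaccard(code_a, code_b):
    """Jaccard similarity of the token sets of two code snippets."""
    ta, tb = set(code_a.split()), set(code_b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

# two clones that differ only in identifier names
sim = token_jaccard("int add ( int a , int b )",
                    "int add ( int x , int y )")
```

Such lexical measures are cheap but blind to renamed identifiers, which is exactly the kind of trade-off a comparative analysis has to weigh against embedding-based measures.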
2305.13235
Oana-Maria Camburu
Jesus Solano, Mardhiyah Sanni, Oana-Maria Camburu, Pasquale Minervini
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations
null
ACL 2024
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models that generate natural language explanations (NLEs) for their predictions have recently gained increasing interest. However, this approach usually demands large datasets of human-written NLEs for the ground-truth answers at training time, which can be expensive and potentially infeasible for some applications. When only a few NLEs are available (a few-shot setup), fine-tuning pre-trained language models (PLMs) in conjunction with prompt-based learning has recently shown promising results. However, PLMs typically have billions of parameters, making full fine-tuning expensive. We propose SparseFit, a sparse few-shot fine-tuning strategy that leverages discrete prompts to jointly generate predictions and NLEs. We experiment with SparseFit on three sizes of the T5 language model and four datasets and compare it against existing state-of-the-art Parameter-Efficient Fine-Tuning (PEFT) techniques. We find that fine-tuning only 6.8% of the model parameters leads to competitive results for both the task performance and the quality of the generated NLEs compared to full fine-tuning of the model and produces better results on average than other PEFT methods in terms of predictive accuracy and NLE quality.
[ { "created": "Mon, 22 May 2023 17:06:41 GMT", "version": "v1" }, { "created": "Tue, 23 May 2023 09:26:37 GMT", "version": "v2" }, { "created": "Sun, 11 Aug 2024 11:43:23 GMT", "version": "v3" } ]
2024-08-13
[ [ "Solano", "Jesus", "" ], [ "Sanni", "Mardhiyah", "" ], [ "Camburu", "Oana-Maria", "" ], [ "Minervini", "Pasquale", "" ] ]
Models that generate natural language explanations (NLEs) for their predictions have recently gained increasing interest. However, this approach usually demands large datasets of human-written NLEs for the ground-truth answers at training time, which can be expensive and potentially infeasible for some applications. When only a few NLEs are available (a few-shot setup), fine-tuning pre-trained language models (PLMs) in conjunction with prompt-based learning has recently shown promising results. However, PLMs typically have billions of parameters, making full fine-tuning expensive. We propose SparseFit, a sparse few-shot fine-tuning strategy that leverages discrete prompts to jointly generate predictions and NLEs. We experiment with SparseFit on three sizes of the T5 language model and four datasets and compare it against existing state-of-the-art Parameter-Efficient Fine-Tuning (PEFT) techniques. We find that fine-tuning only 6.8% of the model parameters leads to competitive results for both the task performance and the quality of the generated NLEs compared to full fine-tuning of the model and produces better results on average than other PEFT methods in terms of predictive accuracy and NLE quality.
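The core mechanism, updating only a small flagged subset of parameters and freezing the rest, can be sketched as follows; the mask here is an arbitrary illustration, not SparseFit's actual parameter selection:

```python
def sparse_update(params, grads, trainable, lr=0.1):
    """One gradient step that touches only the parameters whose mask
    entry is True, leaving the frozen majority unchanged."""
    return [p - lr * g if m else p
            for p, g, m in zip(params, grads, trainable)]

# only the first of three parameters is trainable (roughly 7% of a
# real model's parameters in the setting described above)
new = sparse_update([1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [True, False, False])
```

In a real PLM the mask lives at the tensor level (e.g. whole bias or normalisation tensors), but the update rule is the same idea.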
2104.04916
Xutan Peng
Xutan Peng, Chenghua Lin, Mark Stevenson
Cross-Lingual Word Embedding Refinement by $\ell_{1}$ Norm Optimisation
To appear at NAACL 2021
NAACL-HLT 2021
10.18653/v1/2021.naacl-main.214
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high-quality CLWEs learn mappings that minimise the $\ell_{2}$ norm loss function. However, this optimisation objective has been demonstrated to be sensitive to outliers. Based on the more robust Manhattan norm (aka. $\ell_{1}$ norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs. An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely. Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora. Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the $\ell_{1}$ refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings. It is therefore recommended that this strategy be adopted as a standard for CLWE methods.
[ { "created": "Sun, 11 Apr 2021 04:37:54 GMT", "version": "v1" } ]
2022-01-25
[ [ "Peng", "Xutan", "" ], [ "Lin", "Chenghua", "" ], [ "Stevenson", "Mark", "" ] ]
Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high-quality CLWEs learn mappings that minimise the $\ell_{2}$ norm loss function. However, this optimisation objective has been demonstrated to be sensitive to outliers. Based on the more robust Manhattan norm (aka. $\ell_{1}$ norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs. An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely. Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora. Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the $\ell_{1}$ refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings. It is therefore recommended that this strategy be adopted as a standard for CLWE methods.
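A toy one-dimensional version of the post-processing step shows why the Manhattan norm is more outlier-robust: subgradient descent on $\sum_i |w x_i - y_i|$ lands near the mapping supported by the majority of pairs even when one pair is wildly off. The scalar mapping, data, and step size are toy assumptions standing in for the full matrix refinement:

```python
def l1_refine(pairs, w=1.0, lr=0.01, steps=500):
    """Refine a scalar mapping w by subgradient descent on the
    Manhattan-norm loss sum |w*x - y| over translation pairs (x, y)."""
    for _ in range(steps):
        g = sum(x * (1 if w * x > y else -1 if w * x < y else 0)
                for x, y in pairs)
        w -= lr * g
    return w

# three consistent pairs (y = 2x) plus one gross outlier
w_ref = l1_refine([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (1.0, 100.0)])
```

A least-squares fit of the same data would be dragged far above 2 by the outlier; the L1 objective caps the outlier's influence at a constant subgradient.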
2302.08510
Ting-Hsuan Liao
Ting-Hsuan Liao, Songwei Ge, Yiran Xu, Yao-Chih Lee, Badour AlBahar and Jia-Bin Huang
Text-driven Visual Synthesis with Latent Diffusion Prior
Project website: https://latent-diffusion-prior.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
There has been tremendous progress in large-scale text-to-image synthesis driven by diffusion models enabling versatile downstream applications such as 3D object synthesis from texts, image editing, and customized generation. We present a generic approach using latent diffusion models as powerful image priors for various visual synthesis tasks. Existing methods that utilize such priors fail to use these models' full capabilities. To improve this, our core ideas are 1) a feature matching loss between features from different layers of the decoder to provide detailed guidance and 2) a KL divergence loss to regularize the predicted latent features and stabilize the training. We demonstrate the efficacy of our approach on three different applications, text-to-3D, StyleGAN adaptation, and layered image editing. Extensive results show our method compares favorably against baselines.
[ { "created": "Thu, 16 Feb 2023 18:59:58 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2023 18:15:48 GMT", "version": "v2" } ]
2023-04-05
[ [ "Liao", "Ting-Hsuan", "" ], [ "Ge", "Songwei", "" ], [ "Xu", "Yiran", "" ], [ "Lee", "Yao-Chih", "" ], [ "AlBahar", "Badour", "" ], [ "Huang", "Jia-Bin", "" ] ]
There has been tremendous progress in large-scale text-to-image synthesis driven by diffusion models enabling versatile downstream applications such as 3D object synthesis from texts, image editing, and customized generation. We present a generic approach using latent diffusion models as powerful image priors for various visual synthesis tasks. Existing methods that utilize such priors fail to use these models' full capabilities. To improve this, our core ideas are 1) a feature matching loss between features from different layers of the decoder to provide detailed guidance and 2) a KL divergence loss to regularize the predicted latent features and stabilize the training. We demonstrate the efficacy of our approach on three different applications, text-to-3D, StyleGAN adaptation, and layered image editing. Extensive results show our method compares favorably against baselines.
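The two losses named above can be written down generically; the flattened feature layout and the diagonal-Gaussian form of the KL term are assumptions for illustration, not the paper's exact formulation:

```python
import math

def feature_matching_loss(feats_a, feats_b):
    """Mean absolute difference between corresponding decoder-layer
    features, summed over layers, to provide detailed guidance."""
    return sum(sum(abs(a - b) for a, b in zip(fa, fb)) / len(fa)
               for fa, fb in zip(feats_a, feats_b))

def kl_to_standard_normal(mu, log_var):
    """KL divergence of a diagonal Gaussian N(mu, exp(log_var)) from
    the standard normal prior, regularising the predicted latents."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

fm = feature_matching_loss([[1.0, 2.0]], [[1.0, 3.0]])
kl = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
```

A predicted latent that already matches the prior contributes zero KL, so the regulariser only activates when the latents drift.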
2307.16650
Yong Zheng
Yong Zheng
ChatGPT for Teaching and Learning: An Experience from Data Science Education
null
null
10.1145/3585059.3611431
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
ChatGPT, an implementation and application of large language models, has gained significant popularity since its initial release. Researchers have been exploring ways to harness the practical benefits of ChatGPT in real-world scenarios. Educational researchers have investigated its potential in various subjects, e.g., programming, mathematics, finance, clinical decision support, etc. However, there has been limited attention given to its application in data science education. This paper aims to bridge that gap by utilizing ChatGPT in a data science course, gathering perspectives from students, and presenting our experiences and feedback on using ChatGPT for teaching and learning in data science education. The findings not only distinguish data science education from other disciplines but also uncover new opportunities and challenges associated with incorporating ChatGPT into the data science curriculum.
[ { "created": "Mon, 31 Jul 2023 13:31:19 GMT", "version": "v1" } ]
2023-08-01
[ [ "Zheng", "Yong", "" ] ]
ChatGPT, an implementation and application of large language models, has gained significant popularity since its initial release. Researchers have been exploring ways to harness the practical benefits of ChatGPT in real-world scenarios. Educational researchers have investigated its potential in various subjects, e.g., programming, mathematics, finance, clinical decision support, etc. However, there has been limited attention given to its application in data science education. This paper aims to bridge that gap by utilizing ChatGPT in a data science course, gathering perspectives from students, and presenting our experiences and feedback on using ChatGPT for teaching and learning in data science education. The findings not only distinguish data science education from other disciplines but also uncover new opportunities and challenges associated with incorporating ChatGPT into the data science curriculum.
1902.06656
Xavier Coiteux-Roy
Xavier Coiteux-Roy and Stefan Wolf
Proving Erasure
5 pages, 3 figures
null
10.1109/ISIT.2019.8849661
null
cs.CR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It seems impossible to certify that a remote hosting service does not leak its users' data --- or does quantum mechanics make it possible? We investigate if a server hosting data can information-theoretically prove its definite deletion using a "BB84-like" protocol. To do so, we first rigorously introduce an alternative to privacy by encryption: privacy delegation. We then apply this novel concept to provable deletion and remote data storage. For both tasks, we present a protocol, sketch its partial security, and display its vulnerability to eavesdropping attacks targeting only a few bits.
[ { "created": "Mon, 18 Feb 2019 17:23:52 GMT", "version": "v1" }, { "created": "Fri, 3 May 2019 08:55:15 GMT", "version": "v2" } ]
2020-01-15
[ [ "Coiteux-Roy", "Xavier", "" ], [ "Wolf", "Stefan", "" ] ]
It seems impossible to certify that a remote hosting service does not leak its users' data --- or does quantum mechanics make it possible? We investigate if a server hosting data can information-theoretically prove its definite deletion using a "BB84-like" protocol. To do so, we first rigorously introduce an alternative to privacy by encryption: privacy delegation. We then apply this novel concept to provable deletion and remote data storage. For both tasks, we present a protocol, sketch its partial security, and display its vulnerability to eavesdropping attacks targeting only a few bits.
2210.06242
Sang-Hyun Je
Sang-Hyun Je
Entity Aware Negative Sampling with Auxiliary Loss of False Negative Prediction for Knowledge Graph Embedding
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph (KG) embedding is widely used in many downstream applications using KGs. Generally, since KGs contain only ground truth triples, it is necessary to construct arbitrary negative samples for representation learning of KGs. Recently, various methods for sampling high-quality negatives have been studied because the quality of negative triples has great effect on KG embedding. In this paper, we propose a novel method called Entity Aware Negative Sampling (EANS), which is able to sample negative entities resemble to positive one by adopting Gaussian distribution to the aligned entity index space. Additionally, we introduce auxiliary loss for false negative prediction that can alleviate the impact of the sampled false negative triples. The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigate the influence of false negative samples. The experimental results on standard benchmarks show that our EANS outperforms existing the state-of-the-art methods of negative sampling on several knowledge graph embedding models. Moreover, the proposed method achieves competitive performance even when the number of negative samples is limited to only one.
[ { "created": "Wed, 12 Oct 2022 14:27:51 GMT", "version": "v1" } ]
2022-10-13
[ [ "Je", "Sang-Hyun", "" ] ]
Knowledge graph (KG) embedding is widely used in many downstream applications using KGs. Generally, since KGs contain only ground truth triples, it is necessary to construct arbitrary negative samples for representation learning of KGs. Recently, various methods for sampling high-quality negatives have been studied because the quality of negative triples has a great effect on KG embedding. In this paper, we propose a novel method called Entity Aware Negative Sampling (EANS), which is able to sample negative entities that resemble the positive one by adopting a Gaussian distribution on the aligned entity index space. Additionally, we introduce an auxiliary loss for false negative prediction that can alleviate the impact of the sampled false negative triples. The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigate the influence of false negative samples. The experimental results on standard benchmarks show that our EANS outperforms existing state-of-the-art methods of negative sampling on several knowledge graph embedding models. Moreover, the proposed method achieves competitive performance even when the number of negative samples is limited to only one.
2104.14679
Harshayu Girase
Harshayu Girase, Jerrick Hoang, Sai Yalamanchi, and Micol Marchetti-Bowick
Physically Feasible Vehicle Trajectory Prediction
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Predicting the future motion of actors in a traffic scene is a crucial part of any autonomous driving system. Recent research in this area has focused on trajectory prediction approaches that optimize standard trajectory error metrics. In this work, we describe three important properties -- physical realism guarantees, system maintainability, and sample efficiency -- which we believe are equally important for developing a self-driving system that can operate safely and practically in the real world. Furthermore, we introduce PTNet (PathTrackingNet), a novel approach for vehicle trajectory prediction that is a hybrid of the classical pure pursuit path tracking algorithm and modern graph-based neural networks. By combining a structured robotics technique with a flexible learning approach, we are able to produce a system that not only achieves the same level of performance as other state-of-the-art methods on traditional trajectory error metrics, but also provides strong guarantees about the physical realism of the predicted trajectories while requiring half the amount of data. We believe focusing on this new class of hybrid approaches is an useful direction for developing and maintaining a safety-critical autonomous driving system.
[ { "created": "Thu, 29 Apr 2021 22:13:41 GMT", "version": "v1" } ]
2021-05-03
[ [ "Girase", "Harshayu", "" ], [ "Hoang", "Jerrick", "" ], [ "Yalamanchi", "Sai", "" ], [ "Marchetti-Bowick", "Micol", "" ] ]
Predicting the future motion of actors in a traffic scene is a crucial part of any autonomous driving system. Recent research in this area has focused on trajectory prediction approaches that optimize standard trajectory error metrics. In this work, we describe three important properties -- physical realism guarantees, system maintainability, and sample efficiency -- which we believe are equally important for developing a self-driving system that can operate safely and practically in the real world. Furthermore, we introduce PTNet (PathTrackingNet), a novel approach for vehicle trajectory prediction that is a hybrid of the classical pure pursuit path tracking algorithm and modern graph-based neural networks. By combining a structured robotics technique with a flexible learning approach, we are able to produce a system that not only achieves the same level of performance as other state-of-the-art methods on traditional trajectory error metrics, but also provides strong guarantees about the physical realism of the predicted trajectories while requiring half the amount of data. We believe focusing on this new class of hybrid approaches is a useful direction for developing and maintaining a safety-critical autonomous driving system.
2103.14475
Jianyuan Guo
Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu and Chang Xu
Distilling Object Detectors via Decoupled Features
Accepted in CVPR 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge distillation is a widely used paradigm for inheriting information from a complicated teacher network to a compact student network and maintaining the strong performance. Different from image classification, object detectors are much more sophisticated with multiple loss functions in which features that semantic information rely on are tangled. In this paper, we point out that the information of features derived from regions excluding objects are also essential for distilling the student detector, which is usually ignored in existing approaches. In addition, we elucidate that features from different regions should be assigned with different importance during distillation. To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector. Specifically, two levels of decoupled features will be processed for embedding useful information into the student, i.e., decoupled features from neck and decoupled proposals from classification head. Extensive experiments on various detectors with different backbones show that the proposed DeFeat is able to surpass the state-of-the-art distillation methods for object detection. For example, DeFeat improves ResNet50 based Faster R-CNN from 37.4% to 40.9% mAP, and improves ResNet50 based RetinaNet from 36.5% to 39.7% mAP on COCO benchmark. Our implementation is available at https://github.com/ggjy/DeFeat.pytorch.
[ { "created": "Fri, 26 Mar 2021 13:58:49 GMT", "version": "v1" } ]
2021-03-29
[ [ "Guo", "Jianyuan", "" ], [ "Han", "Kai", "" ], [ "Wang", "Yunhe", "" ], [ "Wu", "Han", "" ], [ "Chen", "Xinghao", "" ], [ "Xu", "Chunjing", "" ], [ "Xu", "Chang", "" ] ]
Knowledge distillation is a widely used paradigm for inheriting information from a complicated teacher network to a compact student network and maintaining the strong performance. Different from image classification, object detectors are much more sophisticated, with multiple loss functions in which the features that semantic information relies on are entangled. In this paper, we point out that the information of features derived from regions excluding objects is also essential for distilling the student detector, which is usually ignored in existing approaches. In addition, we elucidate that features from different regions should be assigned with different importance during distillation. To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector. Specifically, two levels of decoupled features will be processed for embedding useful information into the student, i.e., decoupled features from the neck and decoupled proposals from the classification head. Extensive experiments on various detectors with different backbones show that the proposed DeFeat is able to surpass the state-of-the-art distillation methods for object detection. For example, DeFeat improves ResNet50 based Faster R-CNN from 37.4% to 40.9% mAP, and improves ResNet50 based RetinaNet from 36.5% to 39.7% mAP on the COCO benchmark. Our implementation is available at https://github.com/ggjy/DeFeat.pytorch.
1803.05849
Renzo Andri
Andrawes Al Bahou, Geethan Karunaratne, Renzo Andri, Lukas Cavigelli, Luca Benini
XNORBIN: A 95 TOp/s/W Hardware Accelerator for Binary Convolutional Neural Networks
null
null
null
null
cs.CV cs.AI cs.AR cs.NE eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deploying state-of-the-art CNNs requires power-hungry processors and off-chip memory. This precludes the implementation of CNNs in low-power embedded systems. Recent research shows CNNs sustain extreme quantization, binarizing their weights and intermediate feature maps, thereby saving 8-32\x memory and collapsing energy-intensive sum-of-products into XNOR-and-popcount operations. We present XNORBIN, an accelerator for binary CNNs with computation tightly coupled to memory for aggressive data reuse. Implemented in UMC 65nm technology XNORBIN achieves an energy efficiency of 95 TOp/s/W and an area efficiency of 2.0 TOp/s/MGE at 0.8 V.
[ { "created": "Mon, 5 Mar 2018 15:41:28 GMT", "version": "v1" } ]
2018-09-12
[ [ "Bahou", "Andrawes Al", "" ], [ "Karunaratne", "Geethan", "" ], [ "Andri", "Renzo", "" ], [ "Cavigelli", "Lukas", "" ], [ "Benini", "Luca", "" ] ]
Deploying state-of-the-art CNNs requires power-hungry processors and off-chip memory. This precludes the implementation of CNNs in low-power embedded systems. Recent research shows CNNs sustain extreme quantization, binarizing their weights and intermediate feature maps, thereby saving 8-32x memory and collapsing energy-intensive sum-of-products into XNOR-and-popcount operations. We present XNORBIN, an accelerator for binary CNNs with computation tightly coupled to memory for aggressive data reuse. Implemented in UMC 65nm technology, XNORBIN achieves an energy efficiency of 95 TOp/s/W and an area efficiency of 2.0 TOp/s/MGE at 0.8 V.
1905.00134
Maximilian Haas-Heger
Maximilian Haas-Heger and Matei Ciocarlie
Accurate Energetic Constraints for Passive Grasp Stability Analysis
18 pages, 13 figures, 2 tables, 1 algorithm
null
10.1109/TRO.2020.2974108
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Passive reaction effects in grasp stability analysis occur when the contact forces and joint torques applied by a grasp change in response to external disturbances applied to the grasped object. For example, nonbackdrivable actuators (e.g. highly geared servos) will passively resist external disturbances without an actively applied command; for numerous robot hands using such motors, these effects can be highly beneficial as they increase grasp resistance without requiring active control. We introduce a grasp stability analysis method that can model these effects, and, for a given grasp, distinguish between disturbances that will be passively resisted and those that will not. We find that, in order to achieve this, the grasp model must include accurate energetic constraints. One way to achieve this is to consider the Maximum Dissipation Principle (MDP), a part of the Coulomb friction model that is rarely used in grasp stability analysis. However, the MDP constraints are non-convex, and difficult to solve efficiently. We thus introduce a convex relaxation method, along with an algorithm that successively refines this relaxation locally in order to obtain solutions to arbitrary accuracy efficiently. Our resulting algorithm can determine if a grasp is passively stable, solve for equilibrium contact forces and compute optimal actuator commands for stability. Its implementation is publicly available as part of the open-source GraspIt! simulator.
[ { "created": "Tue, 30 Apr 2019 23:27:04 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2019 22:04:58 GMT", "version": "v2" }, { "created": "Mon, 18 Nov 2019 21:08:42 GMT", "version": "v3" }, { "created": "Thu, 9 Jul 2020 03:02:38 GMT", "version": "v4" } ]
2020-07-10
[ [ "Haas-Heger", "Maximilian", "" ], [ "Ciocarlie", "Matei", "" ] ]
Passive reaction effects in grasp stability analysis occur when the contact forces and joint torques applied by a grasp change in response to external disturbances applied to the grasped object. For example, nonbackdrivable actuators (e.g. highly geared servos) will passively resist external disturbances without an actively applied command; for numerous robot hands using such motors, these effects can be highly beneficial as they increase grasp resistance without requiring active control. We introduce a grasp stability analysis method that can model these effects, and, for a given grasp, distinguish between disturbances that will be passively resisted and those that will not. We find that, in order to achieve this, the grasp model must include accurate energetic constraints. One way to achieve this is to consider the Maximum Dissipation Principle (MDP), a part of the Coulomb friction model that is rarely used in grasp stability analysis. However, the MDP constraints are non-convex, and difficult to solve efficiently. We thus introduce a convex relaxation method, along with an algorithm that successively refines this relaxation locally in order to obtain solutions to arbitrary accuracy efficiently. Our resulting algorithm can determine if a grasp is passively stable, solve for equilibrium contact forces and compute optimal actuator commands for stability. Its implementation is publicly available as part of the open-source GraspIt! simulator.
2307.08187
Hiroki Naganuma
Hiroki Naganuma, Ryuichiro Hataya, Ioannis Mitliagkas
An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In out-of-distribution (OOD) generalization tasks, fine-tuning pre-trained models has become a prevalent strategy. Different from most prior work that has focused on advancing learning algorithms, we systematically examined how pre-trained model size, pre-training dataset size, and training strategies impact generalization and uncertainty calibration on downstream tasks. We evaluated 100 models across diverse pre-trained model sizes, \update{five} pre-training datasets, and five data augmentations through extensive experiments on four distribution shift datasets totaling over 120,000 GPU hours. Our results demonstrate the significant impact of pre-trained model selection, with optimal choices substantially improving OOD accuracy over algorithm improvement alone. We find larger models and bigger pre-training data improve OOD performance and calibration, in contrast to some prior studies that found modern deep networks to calibrate worse than classical shallow models. Our work underscores the overlooked importance of pre-trained model selection for out-of-distribution generalization and calibration.
[ { "created": "Mon, 17 Jul 2023 01:27:10 GMT", "version": "v1" }, { "created": "Mon, 20 Nov 2023 02:22:22 GMT", "version": "v2" }, { "created": "Thu, 30 May 2024 23:30:02 GMT", "version": "v3" } ]
2024-06-03
[ [ "Naganuma", "Hiroki", "" ], [ "Hataya", "Ryuichiro", "" ], [ "Mitliagkas", "Ioannis", "" ] ]
In out-of-distribution (OOD) generalization tasks, fine-tuning pre-trained models has become a prevalent strategy. Different from most prior work that has focused on advancing learning algorithms, we systematically examined how pre-trained model size, pre-training dataset size, and training strategies impact generalization and uncertainty calibration on downstream tasks. We evaluated 100 models across diverse pre-trained model sizes, five pre-training datasets, and five data augmentations through extensive experiments on four distribution shift datasets totaling over 120,000 GPU hours. Our results demonstrate the significant impact of pre-trained model selection, with optimal choices substantially improving OOD accuracy over algorithm improvement alone. We find larger models and bigger pre-training data improve OOD performance and calibration, in contrast to some prior studies that found modern deep networks to calibrate worse than classical shallow models. Our work underscores the overlooked importance of pre-trained model selection for out-of-distribution generalization and calibration.
2004.03954
Jian-Jia Weng
Jian-Jia Weng, Fady Alajaji, and Tam\'as Linder
A Simple Capacity Outer Bound for Two-Way Channels and Capacity Approximation Results
an error in Eq. (2) corrected
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Channel symmetry properties that imply the tightness of Shannon's random coding inner bound have recently been used to determine the capacity region of discrete-memoryless two-way channels (DM-TWCs). For channels without such symmetry properties, outer bounds are often needed to estimate the capacity region. However, validating symmetry conditions and/or evaluating non-trivial outer bounds are computationally demanding, especially for channels with large input and output alphabets. In this paper, three easy-to-check conditions that identify DM-TWCs with no such symmetry properties as well as an easy-to-compute outer bound are derived. The bound is obtained from Shannon's inner bound computation but is non-trivial. Using this outer bound, approximate capacity results can be established for certain DM-TWCs. The results are illustrated by two examples.
[ { "created": "Wed, 8 Apr 2020 12:01:09 GMT", "version": "v1" }, { "created": "Wed, 12 Aug 2020 13:26:56 GMT", "version": "v2" }, { "created": "Thu, 24 Sep 2020 21:26:05 GMT", "version": "v3" } ]
2020-09-28
[ [ "Weng", "Jian-Jia", "" ], [ "Alajaji", "Fady", "" ], [ "Linder", "Tamás", "" ] ]
Channel symmetry properties that imply the tightness of Shannon's random coding inner bound have recently been used to determine the capacity region of discrete-memoryless two-way channels (DM-TWCs). For channels without such symmetry properties, outer bounds are often needed to estimate the capacity region. However, validating symmetry conditions and/or evaluating non-trivial outer bounds are computationally demanding, especially for channels with large input and output alphabets. In this paper, three easy-to-check conditions that identify DM-TWCs with no such symmetry properties as well as an easy-to-compute outer bound are derived. The bound is obtained from Shannon's inner bound computation but is non-trivial. Using this outer bound, approximate capacity results can be established for certain DM-TWCs. The results are illustrated by two examples.
2312.09058
Yixuan Even Xu
Yixuan Even Xu, Chun Kai Ling, Fei Fang
Learning Coalition Structures with Games
13 pages, 4 figures, 3 tables, AAAI 2024
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coalitions naturally exist in many real-world systems involving multiple decision makers such as ridesharing, security, and online ad auctions, but the coalition structure among the agents is often unknown. We propose and study an important yet previously overseen problem -- Coalition Structure Learning (CSL), where we aim to carefully design a series of games for the agents and infer the underlying coalition structure by observing their interactions in those games. We establish a lower bound on the sample complexity -- defined as the number of games needed to learn the structure -- of any algorithms for CSL and propose the Iterative Grouping (IG) algorithm for designing normal-form games to achieve the lower bound. We show that IG can be extended to other succinct games such as congestion games and graphical games. Moreover, we solve CSL in a more restrictive and practical setting: auctions. We show a variant of IG to solve CSL in the auction setting even if we cannot design the bidder valuations. Finally, we conduct experiments to evaluate IG in the auction setting and the results align with our theoretical analysis.
[ { "created": "Thu, 14 Dec 2023 15:54:55 GMT", "version": "v1" }, { "created": "Tue, 19 Dec 2023 02:54:23 GMT", "version": "v2" } ]
2023-12-20
[ [ "Xu", "Yixuan Even", "" ], [ "Ling", "Chun Kai", "" ], [ "Fang", "Fei", "" ] ]
Coalitions naturally exist in many real-world systems involving multiple decision makers such as ridesharing, security, and online ad auctions, but the coalition structure among the agents is often unknown. We propose and study an important yet previously overlooked problem -- Coalition Structure Learning (CSL), where we aim to carefully design a series of games for the agents and infer the underlying coalition structure by observing their interactions in those games. We establish a lower bound on the sample complexity -- defined as the number of games needed to learn the structure -- of any algorithm for CSL and propose the Iterative Grouping (IG) algorithm for designing normal-form games to achieve the lower bound. We show that IG can be extended to other succinct games such as congestion games and graphical games. Moreover, we solve CSL in a more restrictive and practical setting: auctions. We show a variant of IG to solve CSL in the auction setting even if we cannot design the bidder valuations. Finally, we conduct experiments to evaluate IG in the auction setting and the results align with our theoretical analysis.
2002.06987
Wei Deng
Wei Deng and Junwei Pan and Tian Zhou and Deguang Kong and Aaron Flores and Guang Lin
DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving
Accepted by WSDM 2021; Source code: https://github.com/WayneDW/DeepLight_Deep-Lightweight-Feature-Interactions
null
null
null
cs.LG cs.IR stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Click-through rate (CTR) prediction is a crucial task in online display advertising. The embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions using a deep neural network (DNN) component. These sophisticated models, however, slow down the prediction inference by at least hundreds of times. To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at intra-layer and inter-layer level in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals. By combining the above efforts, the proposed approach accelerates the model inference by 46X on Criteo dataset and 27X on Avazu dataset without any loss on the prediction accuracy. This paves the way for successfully deploying complicated embedding-based neural networks in production for ad serving.
[ { "created": "Mon, 17 Feb 2020 14:51:31 GMT", "version": "v1" }, { "created": "Tue, 18 Aug 2020 01:46:08 GMT", "version": "v2" }, { "created": "Wed, 6 Jan 2021 22:13:51 GMT", "version": "v3" } ]
2021-01-08
[ [ "Deng", "Wei", "" ], [ "Pan", "Junwei", "" ], [ "Zhou", "Tian", "" ], [ "Kong", "Deguang", "" ], [ "Flores", "Aaron", "" ], [ "Lin", "Guang", "" ] ]
Click-through rate (CTR) prediction is a crucial task in online display advertising. Embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions using a deep neural network (DNN) component. These sophisticated models, however, slow down the prediction inference by at least hundreds of times. To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at intra-layer and inter-layer level in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals. By combining the above efforts, the proposed approach accelerates the model inference by 46X on the Criteo dataset and 27X on the Avazu dataset without any loss in prediction accuracy. This paves the way for successfully deploying complicated embedding-based neural networks in production for ad serving.
1806.01830
Adam Santoro
Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia
Relational Deep Reinforcement Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.
[ { "created": "Tue, 5 Jun 2018 17:39:12 GMT", "version": "v1" }, { "created": "Thu, 28 Jun 2018 14:59:32 GMT", "version": "v2" } ]
2018-06-29
[ [ "Zambaldi", "Vinicius", "" ], [ "Raposo", "David", "" ], [ "Santoro", "Adam", "" ], [ "Bapst", "Victor", "" ], [ "Li", "Yujia", "" ], [ "Babuschkin", "Igor", "" ], [ "Tuyls", "Karl", "" ], [ "Reichert", "David", "" ], [ "Lillicrap", "Timothy", "" ], [ "Lockhart", "Edward", "" ], [ "Shanahan", "Murray", "" ], [ "Langston", "Victoria", "" ], [ "Pascanu", "Razvan", "" ], [ "Botvinick", "Matthew", "" ], [ "Vinyals", "Oriol", "" ], [ "Battaglia", "Peter", "" ] ]
We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.
2207.12764
Anahita Farhang Ghahfarokhi
Anahita Farhang Ghahfarokhi, Fatemeh Akoochekian, Fareed Zandkarimi, Wil M.P. van der Aalst
Clustering Object-Centric Event Logs
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Process mining provides various algorithms to analyze process executions based on event data. Process discovery, the most prominent category of process mining techniques, aims to discover process models from event logs; however, it leads to spaghetti models when working with real-life data. Therefore, several clustering techniques have been proposed on top of traditional event logs (i.e., event logs with a single case notion) to reduce the complexity of process models and discover homogeneous subsets of cases. Nevertheless, in real-life processes, particularly in the context of Business-to-Business (B2B) processes, multiple objects are involved in a process. Recently, Object-Centric Event Logs (OCELs) have been introduced to capture the information of such processes, and several process discovery techniques have been developed on top of OCELs. Yet, the output of the proposed discovery techniques on real OCELs leads to more informative but also more complex models. In this paper, we propose a clustering-based approach to cluster similar objects in OCELs to simplify the obtained process models. Using a case study of a real B2B process, we demonstrate that our approach reduces the complexity of the process models and generates coherent subsets of objects which help the end-users gain insights into the process.
[ { "created": "Tue, 26 Jul 2022 09:16:39 GMT", "version": "v1" } ]
2022-07-27
[ [ "Ghahfarokhi", "Anahita Farhang", "" ], [ "Akoochekian", "Fatemeh", "" ], [ "Zandkarimi", "Fareed", "" ], [ "van der Aalst", "Wil M. P.", "" ] ]
Process mining provides various algorithms to analyze process executions based on event data. Process discovery, the most prominent category of process mining techniques, aims to discover process models from event logs; however, it leads to spaghetti models when working with real-life data. Therefore, several clustering techniques have been proposed on top of traditional event logs (i.e., event logs with a single case notion) to reduce the complexity of process models and discover homogeneous subsets of cases. Nevertheless, in real-life processes, particularly in the context of Business-to-Business (B2B) processes, multiple objects are involved in a process. Recently, Object-Centric Event Logs (OCELs) have been introduced to capture the information of such processes, and several process discovery techniques have been developed on top of OCELs. Yet, the output of the proposed discovery techniques on real OCELs leads to more informative but also more complex models. In this paper, we propose a clustering-based approach to cluster similar objects in OCELs to simplify the obtained process models. Using a case study of a real B2B process, we demonstrate that our approach reduces the complexity of the process models and generates coherent subsets of objects which help the end-users gain insights into the process.
2011.06825
Md Saif Hassan Onim
Md. Saif Hassan Onim, Aiman Rafeed Ehtesham, Amreen Anbar, A. K. M. Nazrul Islam, A. K. M. Mahbubur Rahman
LULC classification by semantic segmentation of satellite images using FastFCN
null
null
10.1109/ICAICT51780.2020.9333522
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper analyses how well a Fast Fully Convolutional Network (FastFCN) semantically segments satellite images and thus classifies Land Use/Land Cover (LULC) classes. FastFCN was used on the Gaofen-2 Image Dataset (GID-2) to segment images into five different classes: BuiltUp, Meadow, Farmland, Water and Forest. The results showed better accuracy (0.93), precision (0.99), recall (0.98) and mean Intersection over Union (mIoU) (0.97) than other approaches like using FCN-8 or eCognition, a readily available software package. We presented a comparison between the results. We propose FastFCN to be both a faster and more accurate automated method than other existing methods for LULC classification.
[ { "created": "Fri, 13 Nov 2020 09:33:03 GMT", "version": "v1" }, { "created": "Thu, 3 Dec 2020 19:50:31 GMT", "version": "v2" } ]
2022-02-25
[ [ "Onim", "Md. Saif Hassan", "" ], [ "Ehtesham", "Aiman Rafeed", "" ], [ "Anbar", "Amreen", "" ], [ "Islam", "A. K. M. Nazrul", "" ], [ "Rahman", "A. K. M. Mahbubur", "" ] ]
This paper analyses how well a Fast Fully Convolutional Network (FastFCN) semantically segments satellite images and thus classifies Land Use/Land Cover (LULC) classes. FastFCN was used on the Gaofen-2 Image Dataset (GID-2) to segment images into five different classes: BuiltUp, Meadow, Farmland, Water and Forest. The results showed better accuracy (0.93), precision (0.99), recall (0.98) and mean Intersection over Union (mIoU) (0.97) than other approaches like using FCN-8 or eCognition, a readily available software package. We presented a comparison between the results. We propose FastFCN to be both a faster and more accurate automated method than other existing methods for LULC classification.
2103.06352
Kevin Lybarger
Kevin Lybarger, Linzee Mabrey, Matthew Thau, Pavan K. Bhatraju, Mark Wurfel, Meliha Yetisgen
Identifying ARDS using the Hierarchical Attention Network with Sentence Objectives Framework
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acute respiratory distress syndrome (ARDS) is a life-threatening condition that is often undiagnosed or diagnosed late. ARDS is especially prominent in those infected with COVID-19. We explore the automatic identification of ARDS indicators and confounding factors in free-text chest radiograph reports. We present a new annotated corpus of chest radiograph reports and introduce the Hierarchical Attention Network with Sentence Objectives (HANSO) text classification framework. HANSO utilizes fine-grained annotations to improve document classification performance. HANSO can extract ARDS-related information with high performance by leveraging relation annotations, even if the annotated spans are noisy. Using annotated chest radiograph images as a gold standard, HANSO identifies bilateral infiltrates, an indicator of ARDS, in chest radiograph reports with performance (0.87 F1) comparable to human annotations (0.84 F1). This algorithm could facilitate more efficient and expeditious identification of ARDS by clinicians and researchers and contribute to the development of new therapies to improve patient care.
[ { "created": "Wed, 10 Mar 2021 21:50:11 GMT", "version": "v1" } ]
2021-03-12
[ [ "Lybarger", "Kevin", "" ], [ "Mabrey", "Linzee", "" ], [ "Thau", "Matthew", "" ], [ "Bhatraju", "Pavan K.", "" ], [ "Wurfel", "Mark", "" ], [ "Yetisgen", "Meliha", "" ] ]
Acute respiratory distress syndrome (ARDS) is a life-threatening condition that is often undiagnosed or diagnosed late. ARDS is especially prominent in those infected with COVID-19. We explore the automatic identification of ARDS indicators and confounding factors in free-text chest radiograph reports. We present a new annotated corpus of chest radiograph reports and introduce the Hierarchical Attention Network with Sentence Objectives (HANSO) text classification framework. HANSO utilizes fine-grained annotations to improve document classification performance. HANSO can extract ARDS-related information with high performance by leveraging relation annotations, even if the annotated spans are noisy. Using annotated chest radiograph images as a gold standard, HANSO identifies bilateral infiltrates, an indicator of ARDS, in chest radiograph reports with performance (0.87 F1) comparable to human annotations (0.84 F1). This algorithm could facilitate more efficient and expeditious identification of ARDS by clinicians and researchers and contribute to the development of new therapies to improve patient care.
1909.06872
Gilad Cohen
Gilad Cohen, Guillermo Sapiro, Raja Giryes
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
Paper accepted to CVPR 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their prediction. Detection of adversarial examples is, therefore, a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks, which is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN's activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of the normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, getting state-of-the-art results on six attack methods with three datasets. Code is available at https://github.com/giladcohen/NNIF_adv_defense.
[ { "created": "Sun, 15 Sep 2019 20:07:48 GMT", "version": "v1" }, { "created": "Thu, 19 Mar 2020 10:41:34 GMT", "version": "v2" } ]
2020-03-20
[ [ "Cohen", "Gilad", "" ], [ "Sapiro", "Guillermo", "" ], [ "Giryes", "Raja", "" ] ]
Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their prediction. Detection of adversarial examples is, therefore, a fundamental requirement for robust classification frameworks. In this work, we present a method for detecting such adversarial attacks, which is suitable for any pre-trained neural network classifier. We use influence functions to measure the impact of every training sample on the validation set data. From the influence scores, we find the most supportive training samples for any given validation example. A k-nearest neighbor (k-NN) model fitted on the DNN's activation layers is employed to search for the ranking of these supporting training samples. We observe that these samples are highly correlated with the nearest neighbors of the normal inputs, while this correlation is much weaker for adversarial inputs. We train an adversarial detector using the k-NN ranks and distances and show that it successfully distinguishes adversarial examples, getting state-of-the-art results on six attack methods with three datasets. Code is available at https://github.com/giladcohen/NNIF_adv_defense.
2406.11880
Jason Martin
Jason Martin, Kenneth Yeung
Knowledge Return Oriented Prompting (KROP)
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Many Large Language Models (LLMs) and LLM-powered apps deployed today use some form of prompt filter or alignment to protect their integrity. However, these measures aren't foolproof. This paper introduces KROP, a prompt injection technique capable of obfuscating prompt injection attacks, rendering them virtually undetectable to most of these security measures.
[ { "created": "Tue, 11 Jun 2024 23:58:37 GMT", "version": "v1" } ]
2024-06-19
[ [ "Martin", "Jason", "" ], [ "Yeung", "Kenneth", "" ] ]
Many Large Language Models (LLMs) and LLM-powered apps deployed today use some form of prompt filter or alignment to protect their integrity. However, these measures aren't foolproof. This paper introduces KROP, a prompt injection technique capable of obfuscating prompt injection attacks, rendering them virtually undetectable to most of these security measures.
2312.10616
Sijie Wang
Sijie Wang, Rui She, Qiyu Kang, Xingchao Jian, Kai Zhao, Yang Song, Wee Peng Tay
DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition
Accepted by AAAI 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The utilization of multi-modal sensor data in visual place recognition (VPR) has demonstrated enhanced performance compared to single-modal counterparts. Nonetheless, integrating additional sensors comes with elevated costs and may not be feasible for systems that demand lightweight operation, thereby impacting the practical deployment of VPR. To address this issue, we resort to knowledge distillation, which empowers single-modal students to learn from cross-modal teachers without introducing additional sensors during inference. Despite the notable advancements achieved by current distillation approaches, the exploration of feature relationships remains an under-explored area. In order to tackle the challenge of cross-modal distillation in VPR, we present DistilVPR, a novel distillation pipeline for VPR. We propose leveraging feature relationships from multiple agents, including self-agents and cross-agents for teacher and student neural networks. Furthermore, we integrate various manifolds, characterized by different space curvatures for exploring feature relationships. This approach enhances the diversity of feature relationships, including Euclidean, spherical, and hyperbolic relationship modules, thereby enhancing the overall representational capacity. The experiments demonstrate that our proposed pipeline achieves state-of-the-art performance compared to other distillation baselines. We also conduct necessary ablation studies to show design effectiveness. The code is released at: https://github.com/sijieaaa/DistilVPR
[ { "created": "Sun, 17 Dec 2023 05:59:06 GMT", "version": "v1" } ]
2023-12-19
[ [ "Wang", "Sijie", "" ], [ "She", "Rui", "" ], [ "Kang", "Qiyu", "" ], [ "Jian", "Xingchao", "" ], [ "Zhao", "Kai", "" ], [ "Song", "Yang", "" ], [ "Tay", "Wee Peng", "" ] ]
The utilization of multi-modal sensor data in visual place recognition (VPR) has demonstrated enhanced performance compared to single-modal counterparts. Nonetheless, integrating additional sensors comes with elevated costs and may not be feasible for systems that demand lightweight operation, thereby impacting the practical deployment of VPR. To address this issue, we resort to knowledge distillation, which empowers single-modal students to learn from cross-modal teachers without introducing additional sensors during inference. Despite the notable advancements achieved by current distillation approaches, the exploration of feature relationships remains an under-explored area. In order to tackle the challenge of cross-modal distillation in VPR, we present DistilVPR, a novel distillation pipeline for VPR. We propose leveraging feature relationships from multiple agents, including self-agents and cross-agents for teacher and student neural networks. Furthermore, we integrate various manifolds, characterized by different space curvatures for exploring feature relationships. This approach enhances the diversity of feature relationships, including Euclidean, spherical, and hyperbolic relationship modules, thereby enhancing the overall representational capacity. The experiments demonstrate that our proposed pipeline achieves state-of-the-art performance compared to other distillation baselines. We also conduct necessary ablation studies to show design effectiveness. The code is released at: https://github.com/sijieaaa/DistilVPR
2011.07542
Ina Kodrasi
I. Kodrasi and M. Pernon and M. Laganaro and H. Bourlard
Automatic and perceptual discrimination between dysarthria, apraxia of speech, and neurotypical speech
ICASSP 2021
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic techniques in the context of motor speech disorders (MSDs) are typically two-class techniques aiming to discriminate between dysarthria and neurotypical speech or between dysarthria and apraxia of speech (AoS). Further, although such techniques are proposed to support the perceptual assessment of clinicians, the automatic and perceptual classification accuracy has never been compared. In this paper, we investigate a three-class automatic technique and a set of handcrafted features for the discrimination of dysarthria, AoS and neurotypical speech. Instead of following the commonly used One-versus-One or One-versus-Rest approaches for multi-class classification, a hierarchical approach is proposed. Further, a perceptual study is conducted where speech and language pathologists are asked to listen to recordings of dysarthria, AoS, and neurotypical speech and decide which class the recordings belong to. The proposed automatic technique is evaluated on the same recordings and the automatic and perceptual classification performance are compared. The presented results show that the hierarchical classification approach yields a higher classification accuracy than baseline One-versus-One and One-versus-Rest approaches. Further, the presented results show that the automatic approach yields a higher classification accuracy than the perceptual assessment of speech and language pathologists, demonstrating the potential advantages of integrating automatic tools in clinical practice.
[ { "created": "Sun, 15 Nov 2020 14:48:28 GMT", "version": "v1" }, { "created": "Mon, 8 Feb 2021 12:16:22 GMT", "version": "v2" }, { "created": "Wed, 2 Jun 2021 08:36:58 GMT", "version": "v3" } ]
2021-06-03
[ [ "Kodrasi", "I.", "" ], [ "Pernon", "M.", "" ], [ "Laganaro", "M.", "" ], [ "Bourlard", "H.", "" ] ]
Automatic techniques in the context of motor speech disorders (MSDs) are typically two-class techniques aiming to discriminate between dysarthria and neurotypical speech or between dysarthria and apraxia of speech (AoS). Further, although such techniques are proposed to support the perceptual assessment of clinicians, the automatic and perceptual classification accuracy has never been compared. In this paper, we investigate a three-class automatic technique and a set of handcrafted features for the discrimination of dysarthria, AoS and neurotypical speech. Instead of following the commonly used One-versus-One or One-versus-Rest approaches for multi-class classification, a hierarchical approach is proposed. Further, a perceptual study is conducted where speech and language pathologists are asked to listen to recordings of dysarthria, AoS, and neurotypical speech and decide which class the recordings belong to. The proposed automatic technique is evaluated on the same recordings and the automatic and perceptual classification performance are compared. The presented results show that the hierarchical classification approach yields a higher classification accuracy than baseline One-versus-One and One-versus-Rest approaches. Further, the presented results show that the automatic approach yields a higher classification accuracy than the perceptual assessment of speech and language pathologists, demonstrating the potential advantages of integrating automatic tools in clinical practice.
1407.6989
Benjamin Schweinhart
Benjamin Schweinhart, Jeremy Mason, and Robert MacPherson
Topological Similarity of Random Cell Complexes and Applications
null
Phys. Rev. E 93, 062111 (2016)
10.1103/PhysRevE.93.062111
null
cs.CG cond-mat.mtrl-sci
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although random cell complexes occur throughout the physical sciences, there does not appear to be a standard way to quantify their statistical similarities and differences. The various proposals in the literature are usually motivated by the analysis of particular physical systems and do not necessarily apply to general situations. The central concepts in this paper---the swatch and the cloth---provide a description of the local topology of a cell complex that is general (any physical system that may be represented as a cell complex is admissible) and complete (any statistical question about the local topology may be answered from the cloth). Furthermore, this approach allows a distance to be defined that measures the similarity of the local topology of two cell complexes. The distance is used to identify a steady state of a model dislocation network evolving by energy minimization, and then to rigorously quantify the approach of the simulation to this steady state.
[ { "created": "Fri, 25 Jul 2014 17:52:41 GMT", "version": "v1" }, { "created": "Mon, 4 Jan 2016 22:23:32 GMT", "version": "v2" } ]
2016-06-15
[ [ "Schweinhart", "Benjamin", "" ], [ "Mason", "Jeremy", "" ], [ "MacPherson", "Robert", "" ] ]
Although random cell complexes occur throughout the physical sciences, there does not appear to be a standard way to quantify their statistical similarities and differences. The various proposals in the literature are usually motivated by the analysis of particular physical systems and do not necessarily apply to general situations. The central concepts in this paper---the swatch and the cloth---provide a description of the local topology of a cell complex that is general (any physical system that may be represented as a cell complex is admissible) and complete (any statistical question about the local topology may be answered from the cloth). Furthermore, this approach allows a distance to be defined that measures the similarity of the local topology of two cell complexes. The distance is used to identify a steady state of a model dislocation network evolving by energy minimization, and then to rigorously quantify the approach of the simulation to this steady state.
2307.01994
Xusheng Zhu
Xusheng Zhu, Wen Chen, Qingqing Wu, Liwei Wang
Performance Analysis of RIS-Aided Space Shift Keying With Channel Estimation Errors
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the reconfigurable intelligent surface (RIS) assisted space shift keying (SSK) downlink communication systems under imperfect channel state information (CSI), where the channel from the base station to the RIS follows the Rayleigh fading, while the channel from the RIS to the user equipment obeys the Rician fading. Based on the maximum likelihood detector, the conditional pairwise error probability of the composite channel is derived. Then, the probability density function for a non-central chi-square distribution with one degree of freedom is derived. Based on this, the closed-form analytical expression of the RIS-SSK scheme with imperfect CSI is derived. To gain more valuable insights, the asymptotic average bit error probability (ABEP) expression is also given. Finally, we validate the derived closed-form and asymptotic expressions by Monte Carlo simulations.
[ { "created": "Wed, 5 Jul 2023 02:53:19 GMT", "version": "v1" } ]
2023-07-06
[ [ "Zhu", "Xusheng", "" ], [ "Chen", "Wen", "" ], [ "Wu", "Qingqing", "" ], [ "Wang", "Liwei", "" ] ]
In this paper, we investigate the reconfigurable intelligent surface (RIS) assisted space shift keying (SSK) downlink communication systems under imperfect channel state information (CSI), where the channel from the base station to the RIS follows the Rayleigh fading, while the channel from the RIS to the user equipment obeys the Rician fading. Based on the maximum likelihood detector, the conditional pairwise error probability of the composite channel is derived. Then, the probability density function for a non-central chi-square distribution with one degree of freedom is derived. Based on this, the closed-form analytical expression of the RIS-SSK scheme with imperfect CSI is derived. To gain more valuable insights, the asymptotic average bit error probability (ABEP) expression is also given. Finally, we validate the derived closed-form and asymptotic expressions by Monte Carlo simulations.
2201.12848
Axel Brando Guillaumes
Axel Brando, Joan Gimeno, Jose A. Rodr\'iguez-Serrano, Jordi Vitri\`a
Deep Non-Crossing Quantiles through the Partial Derivative
In the Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS)
null
null
null
cs.LG cs.AI math.PR stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Quantile Regression (QR) provides a way to approximate a single conditional quantile. To have a more informative description of the conditional distribution, QR can be merged with deep learning techniques to simultaneously estimate multiple quantiles. However, the minimisation of the QR-loss function does not guarantee non-crossing quantiles, which affects the validity of such predictions and introduces a critical issue in certain scenarios. In this article, we propose a generic deep learning algorithm for predicting an arbitrary number of quantiles that ensures the quantile monotonicity constraint up to the machine precision and maintains its modelling performance with respect to alternative models. The presented method is evaluated over several real-world datasets obtaining state-of-the-art results as well as showing that it scales to large-size data sets.
[ { "created": "Sun, 30 Jan 2022 15:35:21 GMT", "version": "v1" } ]
2022-02-01
[ [ "Brando", "Axel", "" ], [ "Gimeno", "Joan", "" ], [ "Rodríguez-Serrano", "Jose A.", "" ], [ "Vitrià", "Jordi", "" ] ]
Quantile Regression (QR) provides a way to approximate a single conditional quantile. To have a more informative description of the conditional distribution, QR can be merged with deep learning techniques to simultaneously estimate multiple quantiles. However, the minimisation of the QR-loss function does not guarantee non-crossing quantiles, which affects the validity of such predictions and introduces a critical issue in certain scenarios. In this article, we propose a generic deep learning algorithm for predicting an arbitrary number of quantiles that ensures the quantile monotonicity constraint up to the machine precision and maintains its modelling performance with respect to alternative models. The presented method is evaluated over several real-world datasets obtaining state-of-the-art results as well as showing that it scales to large-size data sets.
1709.02556
EPTCS
L\'aszl\'o Z. Varga (ELTE E\"otv\"os Lor\'and University)
Game Theory Models for the Verification of the Collective Behaviour of Autonomous Cars
In Proceedings FVAV 2017, arXiv:1709.02126
EPTCS 257, 2017, pp. 27-34
10.4204/EPTCS.257.4
null
cs.MA cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.
[ { "created": "Fri, 8 Sep 2017 06:35:10 GMT", "version": "v1" } ]
2017-09-11
[ [ "Varga", "László Z.", "", "ELTE Eötvös Loránd University" ] ]
The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.
2210.08883
Lena Mulansky
Lena Mulansky and R\"udiger Pryss and Caroline Cohrdes and Harald Baumeister and Felix Beierle
Social Media App Usage in Relation with PHQ-9 Depression Scores during the COVID-19 Pandemic
Accepted for the UbiComp/ISWC 2022 conference
null
10.1145/3544793.3563411
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With about 300 million affected people, major depressive disorder (MDD) is one of the most common diseases worldwide. During the COVID-19 pandemic, the number of cases increased even further, by 28%. Many factors may be correlated with MDD, including the excessive use of social media apps. In this paper, we investigated the relationship between the use of social media and communication apps and depressive symptoms during the COVID-19 pandemic. The pandemic and social distancing measures like lockdowns probably changed smartphone usage times and usage patterns. While previous studies have shown an association between depression and social media usage, we report about the situation during these special circumstances. We employed a log-linear regression to examine the association of social media and communication app usage and depression. To quantify the usage, we applied the total usage time in hours of social media apps (e.g., WhatsApp, Facebook) as well as communication apps (Phone and Messaging) within one week. To measure depressive symptoms, we used the PHQ-9 score. We discovered a significant association between the usage time and the PHQ-9 score (beta=0.0084, p-value=0.010). We conclude that social media usage is a robust marker for depression severity and future research should focus on a better understanding of the underlying causality and potential counter-measures.
[ { "created": "Mon, 17 Oct 2022 09:27:24 GMT", "version": "v1" } ]
2022-10-18
[ [ "Mulansky", "Lena", "" ], [ "Pryss", "Rüdiger", "" ], [ "Cohrdes", "Caroline", "" ], [ "Baumeister", "Harald", "" ], [ "Beierle", "Felix", "" ] ]
With about 300 million affected people, major depressive disorder (MDD) is one of the most common diseases worldwide. During the COVID-19 pandemic, the number of cases increased even further, by 28%. Many factors may be correlated with MDD, including the excessive use of social media apps. In this paper, we investigated the relationship between the use of social media and communication apps and depressive symptoms during the COVID-19 pandemic. The pandemic and social distancing measures like lockdowns probably changed smartphone usage times and usage patterns. While previous studies have shown an association between depression and social media usage, we report about the situation during these special circumstances. We employed a log-linear regression to examine the association of social media and communication app usage and depression. To quantify the usage, we applied the total usage time in hours of social media apps (e.g., WhatsApp, Facebook) as well as communication apps (Phone and Messaging) within one week. To measure depressive symptoms, we used the PHQ-9 score. We discovered a significant association between the usage time and the PHQ-9 score (beta=0.0084, p-value=0.010). We conclude that social media usage is a robust marker for depression severity and future research should focus on a better understanding of the underlying causality and potential counter-measures.
2402.06576
Abhijin Adiga
Abhijin Adiga, Yohai Trabelsi, Tanvir Ferdousi, Madhav Marathe, S. S. Ravi, Samarth Swarup, Anil Kumar Vullikanti, Mandy L. Wilson, Sarit Kraus, Reetwika Basu, Supriya Savalkar, Matthew Yourek, Michael Brady, Kirti Rajagopalan, Jonathan Yoder
Value-based Resource Matching with Fairness Criteria: Application to Agricultural Water Trading
null
null
null
null
cs.DS cs.MA
http://creativecommons.org/licenses/by/4.0/
Optimal allocation of agricultural water in the event of droughts is an important global problem. In addressing this problem, many aspects, including the welfare of farmers, the economy, and the environment, must be considered. Against this backdrop, our work focuses on several resource-matching problems accounting for agents with multi-crop portfolios, geographic constraints, and fairness. First, we address a matching problem where the goal is to maximize a welfare function in two-sided markets where buyers' requirements and sellers' supplies are represented by value functions that assign prices (or costs) to specified volumes of water. For the setting where the value functions satisfy certain monotonicity properties, we present an efficient algorithm that maximizes a social welfare function. When there are minimum water requirement constraints, we present a randomized algorithm which ensures that the constraints are satisfied in expectation. For a single seller--multiple buyers setting with fairness constraints, we design an efficient algorithm that maximizes the minimum level of satisfaction of any buyer. We also present computational complexity results that highlight the limits on the generalizability of our results. We evaluate the algorithms developed in our work with experiments on both real-world and synthetic data sets with respect to drought severity, value functions, and seniority of agents.
[ { "created": "Fri, 9 Feb 2024 17:50:40 GMT", "version": "v1" }, { "created": "Mon, 12 Feb 2024 02:17:04 GMT", "version": "v2" } ]
2024-02-13
[ [ "Adiga", "Abhijin", "" ], [ "Trabelsi", "Yohai", "" ], [ "Ferdousi", "Tanvir", "" ], [ "Marathe", "Madhav", "" ], [ "Ravi", "S. S.", "" ], [ "Swarup", "Samarth", "" ], [ "Vullikanti", "Anil Kumar", "" ], [ "Wilson", "Mandy L.", "" ], [ "Kraus", "Sarit", "" ], [ "Basu", "Reetwika", "" ], [ "Savalkar", "Supriya", "" ], [ "Yourek", "Matthew", "" ], [ "Brady", "Michael", "" ], [ "Rajagopalan", "Kirti", "" ], [ "Yoder", "Jonathan", "" ] ]
Optimal allocation of agricultural water in the event of droughts is an important global problem. In addressing this problem, many aspects, including the welfare of farmers, the economy, and the environment, must be considered. Against this backdrop, our work focuses on several resource-matching problems accounting for agents with multi-crop portfolios, geographic constraints, and fairness. First, we address a matching problem where the goal is to maximize a welfare function in two-sided markets where buyers' requirements and sellers' supplies are represented by value functions that assign prices (or costs) to specified volumes of water. For the setting where the value functions satisfy certain monotonicity properties, we present an efficient algorithm that maximizes a social welfare function. When there are minimum water requirement constraints, we present a randomized algorithm which ensures that the constraints are satisfied in expectation. For a single seller--multiple buyers setting with fairness constraints, we design an efficient algorithm that maximizes the minimum level of satisfaction of any buyer. We also present computational complexity results that highlight the limits on the generalizability of our results. We evaluate the algorithms developed in our work with experiments on both real-world and synthetic data sets with respect to drought severity, value functions, and seniority of agents.
2010.12408
Hande Dong
Hande Dong, Jiawei Chen, Fuli Feng, Xiangnan He, Shuxian Bi, Zhaolin Ding, Peng Cui
On the Equivalence of Decoupled Graph Convolution Network and Label Propagation
Accepted by WWW 2021
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The original design of Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning. Recently, some work shows that coupling is inferior to decoupling, which supports deep graph propagation better and has become the latest paradigm of GCN (e.g., APPNP and SGCN). Despite their effectiveness, the working mechanisms of the decoupled GCN are not well understood. In this paper, we explore the decoupled GCN for semi-supervised node classification from a novel and fundamental perspective -- label propagation. We conduct thorough theoretical analyses, proving that the decoupled GCN is essentially the same as the two-step label propagation: first, propagating the known labels along the graph to generate pseudo-labels for the unlabeled nodes, and second, training normal neural network classifiers on the augmented pseudo-labeled data. More interestingly, we reveal the effectiveness of decoupled GCN: going beyond the conventional label propagation, it could automatically assign structure- and model-aware weights to the pseudo-label data. This explains why the decoupled GCN is relatively robust to the structure noise and over-smoothing, but sensitive to the label noise and model initialization. Based on this insight, we propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN with a dynamic and adaptive weighting strategy. Our PTA is simple yet more effective and robust than decoupled GCN. We empirically validate our findings on four benchmark datasets, demonstrating the advantages of our method. The code is available at https://github.com/DongHande/PT_propagation_then_training.
[ { "created": "Fri, 23 Oct 2020 13:57:39 GMT", "version": "v1" }, { "created": "Mon, 15 Feb 2021 12:23:39 GMT", "version": "v2" } ]
2021-02-16
[ [ "Dong", "Hande", "" ], [ "Chen", "Jiawei", "" ], [ "Feng", "Fuli", "" ], [ "He", "Xiangnan", "" ], [ "Bi", "Shuxian", "" ], [ "Ding", "Zhaolin", "" ], [ "Cui", "Peng", "" ] ]
The original design of Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning. Recently, some work shows that coupling is inferior to decoupling, which supports deep graph propagation better and has become the latest paradigm of GCN (e.g., APPNP and SGCN). Despite their effectiveness, the working mechanisms of the decoupled GCN are not well understood. In this paper, we explore the decoupled GCN for semi-supervised node classification from a novel and fundamental perspective -- label propagation. We conduct thorough theoretical analyses, proving that the decoupled GCN is essentially the same as the two-step label propagation: first, propagating the known labels along the graph to generate pseudo-labels for the unlabeled nodes, and second, training normal neural network classifiers on the augmented pseudo-labeled data. More interestingly, we reveal the effectiveness of decoupled GCN: going beyond the conventional label propagation, it could automatically assign structure- and model-aware weights to the pseudo-label data. This explains why the decoupled GCN is relatively robust to the structure noise and over-smoothing, but sensitive to the label noise and model initialization. Based on this insight, we propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN with a dynamic and adaptive weighting strategy. Our PTA is simple yet more effective and robust than decoupled GCN. We empirically validate our findings on four benchmark datasets, demonstrating the advantages of our method. The code is available at https://github.com/DongHande/PT_propagation_then_training.
2402.08023
Yijun Tian
Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang, Nitesh V. Chawla
UGMAE: A Unified Framework for Graph Masked Autoencoders
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative self-supervised learning on graphs, particularly graph masked autoencoders, has emerged as a popular learning paradigm and demonstrated its efficacy in handling non-Euclidean data. However, several remaining issues limit the capability of existing methods: 1) the disregard of uneven node significance in masking, 2) the underutilization of holistic graph information, 3) the ignorance of semantic knowledge in the representation space due to the exclusive use of reconstruction loss in the output space, and 4) the unstable reconstructions caused by the large volume of masked contents. In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency. Specifically, we first develop an adaptive feature mask generator to account for the unique significance of nodes and sample informative masks (adaptivity). We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information and emphasize the topological proximity between neighbors (integrity). After that, we present a bootstrapping-based similarity module to encode the high-level semantic knowledge in the representation space, complementary to the low-level reconstruction in the output space (complementarity). Finally, we build a consistency assurance module to provide reconstruction objectives with extra stabilized consistency targets (consistency). Extensive experiments demonstrate that UGMAE outperforms both contrastive and generative state-of-the-art baselines on several tasks across multiple datasets.
[ { "created": "Mon, 12 Feb 2024 19:39:26 GMT", "version": "v1" } ]
2024-02-14
[ [ "Tian", "Yijun", "" ], [ "Zhang", "Chuxu", "" ], [ "Kou", "Ziyi", "" ], [ "Liu", "Zheyuan", "" ], [ "Zhang", "Xiangliang", "" ], [ "Chawla", "Nitesh V.", "" ] ]
Generative self-supervised learning on graphs, particularly graph masked autoencoders, has emerged as a popular learning paradigm and demonstrated its efficacy in handling non-Euclidean data. However, several remaining issues limit the capability of existing methods: 1) the disregard of uneven node significance in masking, 2) the underutilization of holistic graph information, 3) the ignorance of semantic knowledge in the representation space due to the exclusive use of reconstruction loss in the output space, and 4) the unstable reconstructions caused by the large volume of masked contents. In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency. Specifically, we first develop an adaptive feature mask generator to account for the unique significance of nodes and sample informative masks (adaptivity). We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information and emphasize the topological proximity between neighbors (integrity). After that, we present a bootstrapping-based similarity module to encode the high-level semantic knowledge in the representation space, complementary to the low-level reconstruction in the output space (complementarity). Finally, we build a consistency assurance module to provide reconstruction objectives with extra stabilized consistency targets (consistency). Extensive experiments demonstrate that UGMAE outperforms both contrastive and generative state-of-the-art baselines on several tasks across multiple datasets.
2311.17135
Weilin Wan
Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu
TLControl: Trajectory and Language Control for Human Motion Synthesis
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Controllable human motion synthesis is essential for applications in AR/VR, gaming and embodied AI. Existing methods often focus solely on either language or full trajectory control, lacking precision in synthesizing motions aligned with user-specified trajectories, especially for multi-joint control. To address these issues, we present TLControl, a novel method for realistic human motion synthesis, incorporating both low-level Trajectory and high-level Language semantics controls, through the integration of neural-based and optimization-based techniques. Specifically, we begin with training a VQ-VAE for a compact and well-structured latent motion space organized by body parts. We then propose a Masked Trajectories Transformer (MTT) for predicting a motion distribution conditioned on language and trajectory. Once trained, we use MTT to sample initial motion predictions given user-specified partial trajectories and text descriptions as conditioning. Finally, we introduce a test-time optimization to refine these coarse predictions for precise trajectory control, which offers flexibility by allowing users to specify various optimization goals and ensures high runtime efficiency. Comprehensive experiments show that TLControl significantly outperforms the state-of-the-art in trajectory accuracy and time efficiency, making it practical for interactive and high-quality animation generation.
[ { "created": "Tue, 28 Nov 2023 18:54:16 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 20:36:16 GMT", "version": "v2" }, { "created": "Tue, 12 Dec 2023 22:18:18 GMT", "version": "v3" }, { "created": "Wed, 24 Jul 2024 13:55:48 GMT", "version": "v4" } ]
2024-07-25
[ [ "Wan", "Weilin", "" ], [ "Dou", "Zhiyang", "" ], [ "Komura", "Taku", "" ], [ "Wang", "Wenping", "" ], [ "Jayaraman", "Dinesh", "" ], [ "Liu", "Lingjie", "" ] ]
Controllable human motion synthesis is essential for applications in AR/VR, gaming and embodied AI. Existing methods often focus solely on either language or full trajectory control, lacking precision in synthesizing motions aligned with user-specified trajectories, especially for multi-joint control. To address these issues, we present TLControl, a novel method for realistic human motion synthesis, incorporating both low-level Trajectory and high-level Language semantics controls, through the integration of neural-based and optimization-based techniques. Specifically, we begin with training a VQ-VAE for a compact and well-structured latent motion space organized by body parts. We then propose a Masked Trajectories Transformer (MTT) for predicting a motion distribution conditioned on language and trajectory. Once trained, we use MTT to sample initial motion predictions given user-specified partial trajectories and text descriptions as conditioning. Finally, we introduce a test-time optimization to refine these coarse predictions for precise trajectory control, which offers flexibility by allowing users to specify various optimization goals and ensures high runtime efficiency. Comprehensive experiments show that TLControl significantly outperforms the state-of-the-art in trajectory accuracy and time efficiency, making it practical for interactive and high-quality animation generation.
2312.02509
Ero Balsa
Ero Balsa and Yan Shvartzshnaider
When PETs misbehave: A Contextual Integrity analysis
null
null
null
null
cs.CR cs.CY cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Privacy enhancing technologies, or PETs, have been hailed as a promising means to protect privacy without compromising on the functionality of digital services. At the same time, and partly because they may encode a narrow conceptualization of privacy as confidentiality that is popular among policymakers, engineers and the public, PETs risk being co-opted to promote privacy-invasive practices. In this paper, we resort to the theory of Contextual Integrity to explain how privacy technologies may be misused to erode privacy. To illustrate, we consider three PETs and scenarios: anonymous credentials for age verification, client-side scanning for illegal content detection, and homomorphic encryption for machine learning model training. Using the theory of Contextual Integrity, we reason about the notion of privacy that these PETs encode, and show that CI enables us to identify and reason about the limitations of PETs and their misuse, which may ultimately lead to privacy violations.
[ { "created": "Tue, 5 Dec 2023 05:27:43 GMT", "version": "v1" } ]
2023-12-06
[ [ "Balsa", "Ero", "" ], [ "Shvartzshnaider", "Yan", "" ] ]
Privacy enhancing technologies, or PETs, have been hailed as a promising means to protect privacy without compromising on the functionality of digital services. At the same time, and partly because they may encode a narrow conceptualization of privacy as confidentiality that is popular among policymakers, engineers and the public, PETs risk being co-opted to promote privacy-invasive practices. In this paper, we resort to the theory of Contextual Integrity to explain how privacy technologies may be misused to erode privacy. To illustrate, we consider three PETs and scenarios: anonymous credentials for age verification, client-side scanning for illegal content detection, and homomorphic encryption for machine learning model training. Using the theory of Contextual Integrity, we reason about the notion of privacy that these PETs encode, and show that CI enables us to identify and reason about the limitations of PETs and their misuse, which may ultimately lead to privacy violations.
1404.4888
Isadora Nun Ms
Isadora Nun, Karim Pichara, Pavlos Protopapas, Dae-Won Kim
Supervised detection of anomalous light-curves in massive astronomical catalogs
16 pages, 18 figures, published in The Astrophysical Journal
2014, ApJ, 793, 23
10.1088/0004-637X/793/1/23
null
cs.CE astro-ph.IM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. To process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new method to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar as it has a low joint probability. Our method is suitable for exploring massive datasets given that the training process is performed offline. We tested our algorithm on 20 million light-curves from the MACHO catalog and generated a list of anomalous candidates. We divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up for a deeper analysis.
[ { "created": "Fri, 18 Apr 2014 21:12:13 GMT", "version": "v1" }, { "created": "Wed, 3 Sep 2014 15:50:49 GMT", "version": "v2" }, { "created": "Wed, 27 May 2015 21:27:11 GMT", "version": "v3" } ]
2015-05-29
[ [ "Nun", "Isadora", "" ], [ "Pichara", "Karim", "" ], [ "Protopapas", "Pavlos", "" ], [ "Kim", "Dae-Won", "" ] ]
The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. To process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new method to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar as it has a low joint probability. Our method is suitable for exploring massive datasets given that the training process is performed offline. We tested our algorithm on 20 million light-curves from the MACHO catalog and generated a list of anomalous candidates. We divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up for a deeper analysis.
1509.04273
Konrad Dabrowski
Andreas Brandst\"adt, Konrad K. Dabrowski, Shenwei Huang, Dani\"el Paulusma
Bounding the Clique-Width of $H$-free Split Graphs
17 pages, 5 figures. An extended abstract of this paper appeared in the proceedings of EuroComb 2015
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A graph is $H$-free if it has no induced subgraph isomorphic to $H$. We continue a study into the boundedness of clique-width of subclasses of perfect graphs. We identify five new classes of $H$-free split graphs whose clique-width is bounded. Our main result, obtained by combining new and known results, provides a classification of all but two stubborn cases, that is, with two potential exceptions we determine all graphs $H$ for which the class of $H$-free split graphs has bounded clique-width.
[ { "created": "Mon, 14 Sep 2015 20:07:31 GMT", "version": "v1" } ]
2015-09-16
[ [ "Brandstädt", "Andreas", "" ], [ "Dabrowski", "Konrad K.", "" ], [ "Huang", "Shenwei", "" ], [ "Paulusma", "Daniël", "" ] ]
A graph is $H$-free if it has no induced subgraph isomorphic to $H$. We continue a study into the boundedness of clique-width of subclasses of perfect graphs. We identify five new classes of $H$-free split graphs whose clique-width is bounded. Our main result, obtained by combining new and known results, provides a classification of all but two stubborn cases, that is, with two potential exceptions we determine all graphs $H$ for which the class of $H$-free split graphs has bounded clique-width.
2206.01966
Sajjad Karimian
B Rahimi, S Karimian, A Ghaznavi, M Jafari Heydarlou
Development and Evaluation of Dental Image Exchange and Management System: A User-Centered Perspective
3 figures, 5 tables
null
null
null
cs.HC cs.IR cs.MM cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: Systems that exist in hospital or clinic settings are capable of providing services in the physical environment. These systems (e.g., Picture Archiving and Communication Systems) provide remote service for patients. To design such systems, we need methods such as the software development life cycle and techniques such as prototyping. Clinical setting: This study designs an image exchange system in the private dental sector of Urmia city using user-centered methods and prototyping. Methods: Information was collected at each stage of the software development life cycle. Interviews and observations were used to gather user-needs data, and object-oriented programming was used to develop a prototype. Results: The users' needs were determined at the beginning. Ease of use, security, and mobile apps were their most essential needs. Then, the prototype was designed and evaluated in the focus group session. These steps continued until users were satisfied in the focus group. Eventually, after the users' consent, the prototype became the final system. Discussion: Instant access to information, volunteering, user interface design, and usefulness were the most critical variables users considered. The advantages of this system also include less radiation to the patient, since the patient's image clips are not lost or missing. Conclusion: The success of such a system requires the consideration of end-users' needs and their application to the system. In addition to this system, having an electronic health record can improve the treatment process and the work of the medical staff.
[ { "created": "Sat, 4 Jun 2022 11:16:43 GMT", "version": "v1" } ]
2024-04-23
[ [ "Rahimi", "B", "" ], [ "Karimian", "S", "" ], [ "Ghaznavi", "A", "" ], [ "Heydarlou", "M Jafari", "" ] ]
Introduction: Systems that exist in hospital or clinic settings are capable of providing services in the physical environment. These systems (e.g., Picture Archiving and Communication Systems) provide remote service for patients. To design such systems, we need methods such as the software development life cycle and techniques such as prototyping. Clinical setting: This study designs an image exchange system in the private dental sector of Urmia city using user-centered methods and prototyping. Methods: Information was collected at each stage of the software development life cycle. Interviews and observations were used to gather user-needs data, and object-oriented programming was used to develop a prototype. Results: The users' needs were determined at the beginning. Ease of use, security, and mobile apps were their most essential needs. Then, the prototype was designed and evaluated in the focus group session. These steps continued until users were satisfied in the focus group. Eventually, after the users' consent, the prototype became the final system. Discussion: Instant access to information, volunteering, user interface design, and usefulness were the most critical variables users considered. The advantages of this system also include less radiation to the patient, since the patient's image clips are not lost or missing. Conclusion: The success of such a system requires the consideration of end-users' needs and their application to the system. In addition to this system, having an electronic health record can improve the treatment process and the work of the medical staff.
2405.13094
Kun Xie
Yusong Zhang, Kun Xie, Xingyi Zhang, Xiangyu Dong, Sibo Wang
KPG: Key Propagation Graph Generator for Rumor Detection based on Reinforcement Learning
null
null
null
null
cs.SI cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of rumors on social media platforms during significant events, such as the US elections and the COVID-19 pandemic, has a profound impact on social stability and public health. Existing approaches for rumor detection primarily rely on propagation graphs to enhance model effectiveness. However, the presence of noisy and irrelevant structures during the propagation process limits the efficacy of these approaches. To tackle this issue, techniques such as weight adjustment and data augmentation have been proposed. However, these techniques heavily depend on rich original propagation structures, thus hindering performance when dealing with rumors that lack sufficient propagation information in the early propagation stages. In this paper, we propose Key Propagation Graph Generator (KPG), a novel reinforcement learning-based rumor detection framework that generates contextually coherent and informative propagation patterns for events with insufficient topology information, while also identifying indicative substructures for events with redundant and noisy propagation structures. KPG consists of two key components: the Candidate Response Generator (CRG) and the Ending Node Selector (ENS). CRG learns the latent distribution from refined propagation patterns, filtering out noise and generating new candidates for ENS. Simultaneously, ENS identifies the most influential substructures within propagation graphs and generates training data for CRG. Moreover, we introduce an end-to-end framework that utilizes rewards to guide the entire training process via a pre-trained graph neural network. Extensive experiments conducted on four datasets demonstrate the superiority of our KPG compared to the state-of-the-art approaches.
[ { "created": "Tue, 21 May 2024 13:13:43 GMT", "version": "v1" } ]
2024-05-24
[ [ "Zhang", "Yusong", "" ], [ "Xie", "Kun", "" ], [ "Zhang", "Xingyi", "" ], [ "Dong", "Xiangyu", "" ], [ "Wang", "Sibo", "" ] ]
The proliferation of rumors on social media platforms during significant events, such as the US elections and the COVID-19 pandemic, has a profound impact on social stability and public health. Existing approaches for rumor detection primarily rely on propagation graphs to enhance model effectiveness. However, the presence of noisy and irrelevant structures during the propagation process limits the efficacy of these approaches. To tackle this issue, techniques such as weight adjustment and data augmentation have been proposed. However, these techniques heavily depend on rich original propagation structures, thus hindering performance when dealing with rumors that lack sufficient propagation information in the early propagation stages. In this paper, we propose Key Propagation Graph Generator (KPG), a novel reinforcement learning-based rumor detection framework that generates contextually coherent and informative propagation patterns for events with insufficient topology information, while also identifying indicative substructures for events with redundant and noisy propagation structures. KPG consists of two key components: the Candidate Response Generator (CRG) and the Ending Node Selector (ENS). CRG learns the latent distribution from refined propagation patterns, filtering out noise and generating new candidates for ENS. Simultaneously, ENS identifies the most influential substructures within propagation graphs and generates training data for CRG. Moreover, we introduce an end-to-end framework that utilizes rewards to guide the entire training process via a pre-trained graph neural network. Extensive experiments conducted on four datasets demonstrate the superiority of our KPG compared to the state-of-the-art approaches.
2402.08690
Dobromir Dotov
Dobromir Dotov, Dante Camarena, Zack Harris, Joanna Spyra, Pietro Gagliano, Laurel Trainor
If Turing played piano with an artificial partner
null
null
null
null
cs.SI cs.AI cs.LG cs.SD
http://creativecommons.org/licenses/by-nc-nd/4.0/
Music is an inherently social activity that allows people to share experiences and feel connected with one another. There has been little progress in designing artificial partners exhibiting a social experience similar to playing with another person. Neural network architectures that implement generative models, such as large language models, are suited for producing musical scores. Playing music socially, however, involves more than playing a score; it must complement the other musicians' ideas and keep time correctly. We addressed the question of whether a convincing social experience is made possible by a generative model trained to produce musical scores, not necessarily optimized for synchronization and continuation. The network, a variational autoencoder trained on a large corpus of digital scores, was adapted for a timed call-and-response task with a human partner. Participants played piano with a human or artificial partner, in various configurations, and rated the performance quality and first-person experience of self-other integration. Overall, the artificial partners held promise but were rated lower than human partners. The artificial partner with the simplest design and highest similarity parameter was not rated differently from the human partners on some measures, suggesting that interactive rather than generative sophistication is important in enabling social AI.
[ { "created": "Fri, 9 Feb 2024 18:43:48 GMT", "version": "v1" } ]
2024-02-15
[ [ "Dotov", "Dobromir", "" ], [ "Camarena", "Dante", "" ], [ "Harris", "Zack", "" ], [ "Spyra", "Joanna", "" ], [ "Gagliano", "Pietro", "" ], [ "Trainor", "Laurel", "" ] ]
Music is an inherently social activity that allows people to share experiences and feel connected with one another. There has been little progress in designing artificial partners exhibiting a social experience similar to playing with another person. Neural network architectures that implement generative models, such as large language models, are suited for producing musical scores. Playing music socially, however, involves more than playing a score; it must complement the other musicians' ideas and keep time correctly. We addressed the question of whether a convincing social experience is made possible by a generative model trained to produce musical scores, not necessarily optimized for synchronization and continuation. The network, a variational autoencoder trained on a large corpus of digital scores, was adapted for a timed call-and-response task with a human partner. Participants played piano with a human or artificial partner, in various configurations, and rated the performance quality and first-person experience of self-other integration. Overall, the artificial partners held promise but were rated lower than human partners. The artificial partner with the simplest design and highest similarity parameter was not rated differently from the human partners on some measures, suggesting that interactive rather than generative sophistication is important in enabling social AI.
2402.09216
Sankalan Pal Chowdhury
Sankalan Pal Chowdhury, Vil\'em Zouhar, Mrinmaya Sachan
AutoTutor meets Large Language Models: A Language Model Tutor with Rich Pedagogy and Guardrails
To be presented at Learning@Scale 2024
null
null
null
cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have found several use cases in education, ranging from automatic question generation to essay evaluation. In this paper, we explore the potential of using Large Language Models (LLMs) to author Intelligent Tutoring Systems. A common pitfall of LLMs is their straying from desired pedagogical strategies such as leaking the answer to the student, and in general, providing no guarantees. We posit that while LLMs with certain guardrails can take the place of subject experts, the overall pedagogical design still needs to be handcrafted for the best learning results. Based on this principle, we create a sample end-to-end tutoring system named MWPTutor, which uses LLMs to fill in the state space of a pre-defined finite state transducer. This approach retains the structure and the pedagogy of traditional tutoring systems that have been developed over the years by learning scientists but brings in the additional flexibility of LLM-based approaches. Through a human evaluation study on two datasets based on math word problems, we show that our hybrid approach achieves a better overall tutoring score than an instructed, but otherwise free-form, GPT-4. MWPTutor is completely modular and opens up the scope for the community to improve its performance by improving individual modules or using different teaching strategies that it can follow.
[ { "created": "Wed, 14 Feb 2024 14:53:56 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 11:27:27 GMT", "version": "v2" }, { "created": "Thu, 25 Apr 2024 13:15:55 GMT", "version": "v3" } ]
2024-04-26
[ [ "Chowdhury", "Sankalan Pal", "" ], [ "Zouhar", "Vilém", "" ], [ "Sachan", "Mrinmaya", "" ] ]
Large Language Models (LLMs) have found several use cases in education, ranging from automatic question generation to essay evaluation. In this paper, we explore the potential of using Large Language Models (LLMs) to author Intelligent Tutoring Systems. A common pitfall of LLMs is their straying from desired pedagogical strategies such as leaking the answer to the student, and in general, providing no guarantees. We posit that while LLMs with certain guardrails can take the place of subject experts, the overall pedagogical design still needs to be handcrafted for the best learning results. Based on this principle, we create a sample end-to-end tutoring system named MWPTutor, which uses LLMs to fill in the state space of a pre-defined finite state transducer. This approach retains the structure and the pedagogy of traditional tutoring systems that have been developed over the years by learning scientists but brings in the additional flexibility of LLM-based approaches. Through a human evaluation study on two datasets based on math word problems, we show that our hybrid approach achieves a better overall tutoring score than an instructed, but otherwise free-form, GPT-4. MWPTutor is completely modular and opens up the scope for the community to improve its performance by improving individual modules or using different teaching strategies that it can follow.
1905.04727
Milan Gritta
Milan Gritta
A Comparison of Techniques for Sentiment Classification of Film Reviews
A short paper from my MPhil in Advanced Computer Science (2014-15)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We undertake the task of comparing lexicon-based sentiment classification of film reviews with machine learning approaches. We look at existing methodologies and attempt to emulate and improve on them using a 'given' lexicon and a bag-of-words approach. We also utilise syntactical information such as part-of-speech and dependency relations. We will show that a simple lexicon-based classification achieves good results; however, machine learning techniques prove to be the superior tool. We also show that more features do not necessarily deliver better performance, and elaborate on three further enhancements not tested in this article.
[ { "created": "Sun, 12 May 2019 14:19:28 GMT", "version": "v1" } ]
2019-05-14
[ [ "Gritta", "Milan", "" ] ]
We undertake the task of comparing lexicon-based sentiment classification of film reviews with machine learning approaches. We look at existing methodologies and attempt to emulate and improve on them using a 'given' lexicon and a bag-of-words approach. We also utilise syntactical information such as part-of-speech and dependency relations. We will show that a simple lexicon-based classification achieves good results; however, machine learning techniques prove to be the superior tool. We also show that more features do not necessarily deliver better performance, and elaborate on three further enhancements not tested in this article.
2001.08680
Zijie Zhuang
Zijie Zhuang, Longhui Wei, Lingxi Xie, Tianyu Zhang, Hengheng Zhang, Haozhe Wu, Haizhou Ai, and Qi Tian
Rethinking the Distribution Gap of Person Re-identification with Camera-based Batch Normalization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fundamental difficulty in person re-identification (ReID) lies in learning the correspondence among individual cameras. It strongly demands costly inter-camera annotations, yet the trained models are not guaranteed to transfer well to previously unseen cameras. These problems significantly limit the application of ReID. This paper rethinks the working mechanism of conventional ReID approaches and puts forward a new solution. With an effective operator named Camera-based Batch Normalization (CBN), we force the image data of all cameras to fall onto the same subspace, so that the distribution gap between any camera pair is largely shrunk. This alignment brings two benefits. First, the trained model enjoys better abilities to generalize across scenarios with unseen cameras as well as transfer across multiple training sets. Second, we can rely on intra-camera annotations, which have been undervalued before due to the lack of cross-camera information, to achieve competitive ReID performance. Experiments on a wide range of ReID tasks demonstrate the effectiveness of our approach. The code is available at https://github.com/automan000/Camera-based-Person-ReID.
[ { "created": "Thu, 23 Jan 2020 17:22:34 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 14:42:17 GMT", "version": "v2" }, { "created": "Sat, 18 Jul 2020 15:37:06 GMT", "version": "v3" } ]
2020-07-21
[ [ "Zhuang", "Zijie", "" ], [ "Wei", "Longhui", "" ], [ "Xie", "Lingxi", "" ], [ "Zhang", "Tianyu", "" ], [ "Zhang", "Hengheng", "" ], [ "Wu", "Haozhe", "" ], [ "Ai", "Haizhou", "" ], [ "Tian", "Qi", "" ] ]
The fundamental difficulty in person re-identification (ReID) lies in learning the correspondence among individual cameras. It strongly demands costly inter-camera annotations, yet the trained models are not guaranteed to transfer well to previously unseen cameras. These problems significantly limit the application of ReID. This paper rethinks the working mechanism of conventional ReID approaches and puts forward a new solution. With an effective operator named Camera-based Batch Normalization (CBN), we force the image data of all cameras to fall onto the same subspace, so that the distribution gap between any camera pair is largely shrunk. This alignment brings two benefits. First, the trained model enjoys better abilities to generalize across scenarios with unseen cameras as well as transfer across multiple training sets. Second, we can rely on intra-camera annotations, which have been undervalued before due to the lack of cross-camera information, to achieve competitive ReID performance. Experiments on a wide range of ReID tasks demonstrate the effectiveness of our approach. The code is available at https://github.com/automan000/Camera-based-Person-ReID.
1903.10343
Yassir Jedra
Yassir Jedra and Alexandre Proutiere
Sample Complexity Lower Bounds for Linear System Identification
null
null
null
null
cs.SY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper establishes problem-specific sample complexity lower bounds for linear system identification problems. The sample complexity is defined in the PAC framework: it corresponds to the time it takes to identify the system parameters with prescribed accuracy and confidence levels. By problem-specific, we mean that the lower bound explicitly depends on the system to be identified (which contrasts with minimax lower bounds), and hence really captures the identification hardness specific to the system. We consider both uncontrolled and controlled systems. For uncontrolled systems, the lower bounds are valid for any linear system, stable or not, and only depend on the system's finite-time controllability Gramian. A simplified lower bound depending on the spectrum of the system only is also derived. In view of recent finite-time analyses of classical estimation methods (e.g. ordinary least squares), our sample complexity lower bounds are tight for many systems. For controlled systems, our lower bounds are not as explicit as in the case of uncontrolled systems, but could well provide interesting insights into the design of control policies with minimal sample complexity.
[ { "created": "Mon, 25 Mar 2019 14:06:27 GMT", "version": "v1" } ]
2019-03-26
[ [ "Jedra", "Yassir", "" ], [ "Proutiere", "Alexandre", "" ] ]
This paper establishes problem-specific sample complexity lower bounds for linear system identification problems. The sample complexity is defined in the PAC framework: it corresponds to the time it takes to identify the system parameters with prescribed accuracy and confidence levels. By problem-specific, we mean that the lower bound explicitly depends on the system to be identified (which contrasts with minimax lower bounds), and hence really captures the identification hardness specific to the system. We consider both uncontrolled and controlled systems. For uncontrolled systems, the lower bounds are valid for any linear system, stable or not, and only depend on the system's finite-time controllability Gramian. A simplified lower bound depending on the spectrum of the system only is also derived. In view of recent finite-time analyses of classical estimation methods (e.g. ordinary least squares), our sample complexity lower bounds are tight for many systems. For controlled systems, our lower bounds are not as explicit as in the case of uncontrolled systems, but could well provide interesting insights into the design of control policies with minimal sample complexity.
2312.13913
Xianfang Zeng
Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu
Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
Project Website: https://github.com/OpenTexture/Paint3D
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.
[ { "created": "Thu, 21 Dec 2023 15:01:47 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2023 06:27:43 GMT", "version": "v2" } ]
2023-12-25
[ [ "Zeng", "Xianfang", "" ], [ "Chen", "Xin", "" ], [ "Qi", "Zhongqi", "" ], [ "Liu", "Wen", "" ], [ "Zhao", "Zibo", "" ], [ "Wang", "Zhibin", "" ], [ "Fu", "Bin", "" ], [ "Liu", "Yong", "" ], [ "Yu", "Gang", "" ] ]
This paper presents Paint3D, a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, as 2D models cannot fully represent 3D shapes and disable lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.
2312.08822
Fengheng Li
Zhaochen Li, Fengheng Li, Wei Feng, Honghe Zhu, An Liu, Yaoyu Li, Zheng Zhang, Jingjing Lv, Xin Zhu, Junjie Shen, Zhangang Lin, Jingping Shao, Zhenglu Yang
Planning and Rendering: Towards End-to-End Product Poster Generation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end product poster generation significantly optimizes design efficiency and reduces production costs. Prevailing methods predominantly rely on image-inpainting methods to generate clean background images for given products. Subsequently, poster layout generation methods are employed to produce corresponding layout results. However, the background images may not be suitable for accommodating textual content due to their complexity, and the fixed location of products limits the diversity of layout results. To alleviate these issues, we propose a novel product poster generation framework named P\&R. The P\&R draws inspiration from the workflow of designers in creating posters, which consists of two stages: Planning and Rendering. At the planning stage, we propose a PlanNet to generate the layout of the product and other visual components considering both the appearance features of the product and semantic features of the text, which improves the diversity and rationality of the layouts. At the rendering stage, we propose a RenderNet to generate the background for the product while considering the generated layout, where a spatial fusion module is introduced to fuse the layout of different visual components. To foster the advancement of this field, we propose the first end-to-end product poster generation dataset PPG30k, comprising 30k exquisite product poster images along with comprehensive image and text annotations. Our method outperforms the state-of-the-art product poster generation methods on PPG30k. The PPG30k will be released soon.
[ { "created": "Thu, 14 Dec 2023 11:11:50 GMT", "version": "v1" } ]
2023-12-15
[ [ "Li", "Zhaochen", "" ], [ "Li", "Fengheng", "" ], [ "Feng", "Wei", "" ], [ "Zhu", "Honghe", "" ], [ "Liu", "An", "" ], [ "Li", "Yaoyu", "" ], [ "Zhang", "Zheng", "" ], [ "Lv", "Jingjing", "" ], [ "Zhu", "Xin", "" ], [ "Shen", "Junjie", "" ], [ "Lin", "Zhangang", "" ], [ "Shao", "Jingping", "" ], [ "Yang", "Zhenglu", "" ] ]
End-to-end product poster generation significantly optimizes design efficiency and reduces production costs. Prevailing methods predominantly rely on image-inpainting methods to generate clean background images for given products. Subsequently, poster layout generation methods are employed to produce corresponding layout results. However, the background images may not be suitable for accommodating textual content due to their complexity, and the fixed location of products limits the diversity of layout results. To alleviate these issues, we propose a novel product poster generation framework named P\&R. The P\&R draws inspiration from the workflow of designers in creating posters, which consists of two stages: Planning and Rendering. At the planning stage, we propose a PlanNet to generate the layout of the product and other visual components considering both the appearance features of the product and semantic features of the text, which improves the diversity and rationality of the layouts. At the rendering stage, we propose a RenderNet to generate the background for the product while considering the generated layout, where a spatial fusion module is introduced to fuse the layout of different visual components. To foster the advancement of this field, we propose the first end-to-end product poster generation dataset PPG30k, comprising 30k exquisite product poster images along with comprehensive image and text annotations. Our method outperforms the state-of-the-art product poster generation methods on PPG30k. The PPG30k will be released soon.
2112.00331
Ruiyang Liu
Ruiyang Liu, Predrag K. Nikolic
Mutltimodal AI Companion for Interactive Fairytale Co-creation
null
null
null
null
cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
AI fairy tale companions play an important role in early childhood education as an augmentation for parents' efforts to close the participation gap and boost kids' mental and language development. Existing systems are generally designed to provide vivid materials as unidirectional entertaining reading environments, e.g., visualizing input texts. However, due to the limited vocabulary of kids, these systems fail to afford effective interaction that motivates kids to write their own fairy tales. In this work, we propose AI.R Taletorium, an illustrative, immersive, and inclusive multimodal AI companion for interactive fairy tale co-creation that actively involves kids in creating fairy tales with both the AI agent and their normal peers. AI.R Taletorium consists of a neural story generator and a doodler-based fairy tale visualizer. We design a character-centric bidirectional connection mechanism between the story generator and the visualizer, equipped with Contrastive Language-Image Pretraining (CLIP), thus enabling kids to participate in the story generation process by simple sketching. Extensive experiments and user studies show that our system was able to generate and visualize meaningful and vivid fairy tales with limited training data and complete the full interaction cycle under various inputs (text, doodler) through the bidirectional connection.
[ { "created": "Wed, 1 Dec 2021 07:53:38 GMT", "version": "v1" } ]
2021-12-02
[ [ "Liu", "Ruiyang", "" ], [ "Nikolic", "Predrag K.", "" ] ]
AI fairy tale companions play an important role in early childhood education as an augmentation for parents' efforts to close the participation gap and boost kids' mental and language development. Existing systems are generally designed to provide vivid materials as unidirectional entertaining reading environments, e.g., visualizing input texts. However, due to the limited vocabulary of kids, these systems fail to afford effective interaction that motivates kids to write their own fairy tales. In this work, we propose AI.R Taletorium, an illustrative, immersive, and inclusive multimodal AI companion for interactive fairy tale co-creation that actively involves kids in creating fairy tales with both the AI agent and their normal peers. AI.R Taletorium consists of a neural story generator and a doodler-based fairy tale visualizer. We design a character-centric bidirectional connection mechanism between the story generator and the visualizer, equipped with Contrastive Language-Image Pretraining (CLIP), thus enabling kids to participate in the story generation process by simple sketching. Extensive experiments and user studies show that our system was able to generate and visualize meaningful and vivid fairy tales with limited training data and complete the full interaction cycle under various inputs (text, doodler) through the bidirectional connection.
2101.00443
Sourav Garg
Sourav Garg, Niko S\"underhauf, Feras Dayoub, Douglas Morrison, Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke, Michael Milford
Semantics for Robotic Mapping, Perception and Interaction: A Survey
81 pages, 1 figure, published in Foundations and Trends in Robotics, 2020
Foundations and Trends in Robotics: Vol. 8: No. 1-2, pp 1-224 (2020)
10.1561/2300000059
null
cs.RO cs.CV cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For robots to navigate and interact more richly with the world around them, they will likely require a deeper understanding of the world in which they operate. In robotics and related research fields, the study of understanding is often referred to as semantics, which dictates what the world "means" to a robot, and is strongly tied to the question of how to represent that meaning. With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics and ontology of natural language into the picture. Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics. The field has received significant attention in the research literature to date, but most reviews and surveys have focused on particular aspects of the topic: the technical research issues regarding its use in specific robotic topics like mapping or segmentation, or its relevance to one particular application domain like autonomous driving. A new treatment is therefore required, and is also timely because so much relevant research has occurred since many of the key surveys were published. This survey therefore provides an overarching snapshot of where semantics in robotics stands today. We establish a taxonomy for semantics research in or relevant to robotics, split into four broad categories of activity, in which semantics are extracted, used, or both. Within these broad categories we survey dozens of major topics including fundamentals from the computer vision field and key robotics research areas utilizing semantics, including mapping, navigation and interaction with the world. The survey also covers key practical considerations, including enablers like increased data availability and improved computational hardware, and major application areas where...
[ { "created": "Sat, 2 Jan 2021 12:34:39 GMT", "version": "v1" } ]
2021-01-05
[ [ "Garg", "Sourav", "" ], [ "Sünderhauf", "Niko", "" ], [ "Dayoub", "Feras", "" ], [ "Morrison", "Douglas", "" ], [ "Cosgun", "Akansel", "" ], [ "Carneiro", "Gustavo", "" ], [ "Wu", "Qi", "" ], [ "Chin", "Tat-Jun", "" ], [ "Reid", "Ian", "" ], [ "Gould", "Stephen", "" ], [ "Corke", "Peter", "" ], [ "Milford", "Michael", "" ] ]
For robots to navigate and interact more richly with the world around them, they will likely require a deeper understanding of the world in which they operate. In robotics and related research fields, the study of understanding is often referred to as semantics, which dictates what the world "means" to a robot, and is strongly tied to the question of how to represent that meaning. With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics and ontology of natural language into the picture. Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics. The field has received significant attention in the research literature to date, but most reviews and surveys have focused on particular aspects of the topic: the technical research issues regarding its use in specific robotic topics like mapping or segmentation, or its relevance to one particular application domain like autonomous driving. A new treatment is therefore required, and is also timely because so much relevant research has occurred since many of the key surveys were published. This survey therefore provides an overarching snapshot of where semantics in robotics stands today. We establish a taxonomy for semantics research in or relevant to robotics, split into four broad categories of activity, in which semantics are extracted, used, or both. Within these broad categories we survey dozens of major topics including fundamentals from the computer vision field and key robotics research areas utilizing semantics, including mapping, navigation and interaction with the world. The survey also covers key practical considerations, including enablers like increased data availability and improved computational hardware, and major application areas where...
2204.03538
David Puljiz
David Puljiz, Bj\"orn Hein
Updating Industrial Robots for Emerging Technologies
As accepted to the 2nd International Workshop on Designerly HRI; HRI 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Industrial arms need to evolve beyond their standard shape to embrace new and emerging technologies. In this paper, we shall first perform an analysis of four popular but different modern industrial robot arms. By seeing the common trends we will try to extrapolate and expand these trends for the future. Here, particular focus will be on interaction based on augmented reality (AR) through head-mounted displays (HMD), but also through smartphones. Long-term human-robot interaction and personalization of said interaction will also be considered. The use of AR in human-robot interaction has proven to enhance communication and information exchange. A basic addition to industrial arm design would be the integration of QR markers on the robot, both for accessing information and adding tracking capabilities to more easily display AR overlays. In a recent example of information access, Mercedes Benz added QR markers on their cars to help rescue workers estimate the best places to cut and evacuate people after car crashes. One has also to deal with safety in an environment that will be more and more about collaboration. The QR markers can therefore be combined with RF-based ranging modules, developed in the EU-project SafeLog, that can be used both for safety as well as for tracking of human positions while in close proximity interactions with the industrial arms. The industrial arms of the future should also be intuitive to program and interact with. This would be achieved through AR and head mounted displays as well as the already mentioned RF-based person tracking. Finally, a more personalized interaction between the robots and humans can be achieved through life-long learning AI and disembodied, personalized agents. We propose a design that not only exists in the physical world, but also partly in the digital world of mixed reality.
[ { "created": "Thu, 7 Apr 2022 16:08:02 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 15:36:57 GMT", "version": "v2" } ]
2023-03-29
[ [ "Puljiz", "David", "" ], [ "Hein", "Björn", "" ] ]
Industrial arms need to evolve beyond their standard shape to embrace new and emerging technologies. In this paper, we shall first perform an analysis of four popular but different modern industrial robot arms. By seeing the common trends we will try to extrapolate and expand these trends for the future. Here, particular focus will be on interaction based on augmented reality (AR) through head-mounted displays (HMD), but also through smartphones. Long-term human-robot interaction and personalization of said interaction will also be considered. The use of AR in human-robot interaction has proven to enhance communication and information exchange. A basic addition to industrial arm design would be the integration of QR markers on the robot, both for accessing information and adding tracking capabilities to more easily display AR overlays. In a recent example of information access, Mercedes Benz added QR markers on their cars to help rescue workers estimate the best places to cut and evacuate people after car crashes. One has also to deal with safety in an environment that will be more and more about collaboration. The QR markers can therefore be combined with RF-based ranging modules, developed in the EU-project SafeLog, that can be used both for safety as well as for tracking of human positions while in close proximity interactions with the industrial arms. The industrial arms of the future should also be intuitive to program and interact with. This would be achieved through AR and head mounted displays as well as the already mentioned RF-based person tracking. Finally, a more personalized interaction between the robots and humans can be achieved through life-long learning AI and disembodied, personalized agents. We propose a design that not only exists in the physical world, but also partly in the digital world of mixed reality.
2407.01905
Jiawei Zhan
Jiawei Zhan, Jinxiang Lai, Bin-Bin Gao, Jun Liu, Xiaochen Chen, Chengjie Wang
Enhancing Multi-Class Anomaly Detection via Diffusion Refinement with Dual Conditioning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anomaly detection, the technique of identifying abnormal samples using only normal samples, has attracted widespread interest in industry. Existing one-model-per-category methods often struggle with limited generalization capabilities due to their focus on a single category, and can fail when encountering variations in products. Recent feature reconstruction methods, as representatives in one-model-all-categories schemes, face challenges including reconstructing anomalous samples and blurry reconstructions. In this paper, we creatively combine a diffusion model and a transformer for multi-class anomaly detection. This approach leverages diffusion to obtain high-frequency information for refinement, greatly alleviating the blurry reconstruction problem while maintaining the sampling efficiency of the reverse diffusion process. The task is transformed into image inpainting to disconnect the input-output correlation, thereby mitigating the "identical shortcuts" problem and avoiding the model from reconstructing anomalous samples. Besides, we introduce category-awareness using dual conditioning to ensure the accuracy of prediction and reconstruction in the reverse diffusion process, preventing excessive deviation from the target category, thus effectively enabling multi-class anomaly detection. Furthermore, spatio-temporal fusion is also employed to fuse heatmaps predicted at different timesteps and scales, enhancing the performance of multi-class anomaly detection. Extensive experiments on benchmark datasets demonstrate the superior performance and exceptional multi-class anomaly detection capabilities of our proposed method compared to others.
[ { "created": "Tue, 2 Jul 2024 03:09:40 GMT", "version": "v1" } ]
2024-07-03
[ [ "Zhan", "Jiawei", "" ], [ "Lai", "Jinxiang", "" ], [ "Gao", "Bin-Bin", "" ], [ "Liu", "Jun", "" ], [ "Chen", "Xiaochen", "" ], [ "Wang", "Chengjie", "" ] ]
Anomaly detection, the technique of identifying abnormal samples using only normal samples, has attracted widespread interest in industry. Existing one-model-per-category methods often struggle with limited generalization capabilities due to their focus on a single category, and can fail when encountering variations in products. Recent feature reconstruction methods, as representatives in one-model-all-categories schemes, face challenges including reconstructing anomalous samples and blurry reconstructions. In this paper, we creatively combine a diffusion model and a transformer for multi-class anomaly detection. This approach leverages diffusion to obtain high-frequency information for refinement, greatly alleviating the blurry reconstruction problem while maintaining the sampling efficiency of the reverse diffusion process. The task is transformed into image inpainting to disconnect the input-output correlation, thereby mitigating the "identical shortcuts" problem and avoiding the model from reconstructing anomalous samples. Besides, we introduce category-awareness using dual conditioning to ensure the accuracy of prediction and reconstruction in the reverse diffusion process, preventing excessive deviation from the target category, thus effectively enabling multi-class anomaly detection. Furthermore, spatio-temporal fusion is also employed to fuse heatmaps predicted at different timesteps and scales, enhancing the performance of multi-class anomaly detection. Extensive experiments on benchmark datasets demonstrate the superior performance and exceptional multi-class anomaly detection capabilities of our proposed method compared to others.
1506.03378
Lihong Li
Che-Yu Liu and Lihong Li
On the Prior Sensitivity of Thompson Sampling
Appears in the 27th International Conference on Algorithmic Learning Theory (ALT), 2016
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The empirically successful Thompson Sampling algorithm for stochastic bandits has drawn much interest in understanding its theoretical properties. One important benefit of the algorithm is that it allows domain knowledge to be conveniently encoded as a prior distribution to balance exploration and exploitation more effectively. While it is generally believed that the algorithm's regret is low (high) when the prior is good (bad), little is known about the exact dependence. In this paper, we fully characterize the algorithm's worst-case dependence of regret on the choice of prior, focusing on a special yet representative case. These results also provide insights into the general sensitivity of the algorithm to the choice of priors. In particular, with $p$ being the prior probability mass of the true reward-generating model, we prove $O(\sqrt{T/p})$ and $O(\sqrt{(1-p)T})$ regret upper bounds for the bad- and good-prior cases, respectively, as well as \emph{matching} lower bounds. Our proofs rely on the discovery of a fundamental property of Thompson Sampling and make heavy use of martingale theory, both of which appear novel in the literature, to the best of our knowledge.
[ { "created": "Wed, 10 Jun 2015 16:22:26 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2016 01:43:09 GMT", "version": "v2" } ]
2016-07-22
[ [ "Liu", "Che-Yu", "" ], [ "Li", "Lihong", "" ] ]
The empirically successful Thompson Sampling algorithm for stochastic bandits has drawn much interest in understanding its theoretical properties. One important benefit of the algorithm is that it allows domain knowledge to be conveniently encoded as a prior distribution to balance exploration and exploitation more effectively. While it is generally believed that the algorithm's regret is low (high) when the prior is good (bad), little is known about the exact dependence. In this paper, we fully characterize the algorithm's worst-case dependence of regret on the choice of prior, focusing on a special yet representative case. These results also provide insights into the general sensitivity of the algorithm to the choice of priors. In particular, with $p$ being the prior probability mass of the true reward-generating model, we prove $O(\sqrt{T/p})$ and $O(\sqrt{(1-p)T})$ regret upper bounds for the bad- and good-prior cases, respectively, as well as \emph{matching} lower bounds. Our proofs rely on the discovery of a fundamental property of Thompson Sampling and make heavy use of martingale theory, both of which appear novel in the literature, to the best of our knowledge.
2005.12409
Petar Radanliev
Petar Radanliev, David De Roure, Max Van Kleek
Digitalization of COVID-19 pandemic management and cyber risk from connected systems
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What makes cyber risks arising from connected systems challenging during the management of a pandemic? Assuming that a variety of cyber-physical systems are already operational, collecting, analyzing, and acting on data autonomously, what risks might arise in their application to pandemic management? We already have these systems operational, collecting, and analyzing data autonomously, so how would a pandemic monitoring app be different or riskier? In this review article, we discuss the digitalization of COVID-19 pandemic management and cyber risk from connected systems.
[ { "created": "Mon, 25 May 2020 21:19:28 GMT", "version": "v1" } ]
2020-05-27
[ [ "Radanliev", "Petar", "" ], [ "De Roure", "David", "" ], [ "Van Kleek", "Max", "" ] ]
What makes cyber risks arising from connected systems challenging during the management of a pandemic? Assuming that a variety of cyber-physical systems are already operational, collecting, analyzing, and acting on data autonomously, what risks might arise in their application to pandemic management? We already have these systems operational, collecting, and analyzing data autonomously, so how would a pandemic monitoring app be different or riskier? In this review article, we discuss the digitalization of COVID-19 pandemic management and cyber risk from connected systems.
1507.04537
Nils Vortmeier
Thomas Schwentick, Nils Vortmeier, Thomas Zeume
Static Analysis for Logic-Based Dynamic Programs
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A dynamic program, as introduced by Patnaik and Immerman (1994), maintains the result of a fixed query for an input database which is subject to tuple insertions and deletions. It can use an auxiliary database whose relations are updated via first-order formulas upon modifications of the input database. This paper studies static analysis problems for dynamic programs and investigates, more specifically, the decidability of the following three questions. Is the answer relation of a given dynamic program always empty? Does a program actually maintain a query? Is the content of auxiliary relations independent of the modification sequence that led to an input database? In general, all these problems can easily be seen to be undecidable for full first-order programs. Therefore the paper aims at pinpointing the exact decidability borderline for programs with restricted arity (of the input and/or auxiliary database) and restricted quantification.
[ { "created": "Thu, 16 Jul 2015 12:07:18 GMT", "version": "v1" } ]
2015-07-17
[ [ "Schwentick", "Thomas", "" ], [ "Vortmeier", "Nils", "" ], [ "Zeume", "Thomas", "" ] ]
A dynamic program, as introduced by Patnaik and Immerman (1994), maintains the result of a fixed query for an input database which is subject to tuple insertions and deletions. It can use an auxiliary database whose relations are updated via first-order formulas upon modifications of the input database. This paper studies static analysis problems for dynamic programs and investigates, more specifically, the decidability of the following three questions. Is the answer relation of a given dynamic program always empty? Does a program actually maintain a query? Is the content of auxiliary relations independent of the modification sequence that led to an input database? In general, all these problems can easily be seen to be undecidable for full first-order programs. Therefore the paper aims at pinpointing the exact decidability borderline for programs with restricted arity (of the input and/or auxiliary database) and restricted quantification.
1905.03633
Jan Kotera
Jan Kotera, Denys Rozumnyi, Filip \v{S}roubek, Ji\v{r}\'i Matas
Intra-frame Object Tracking by Deblatting
null
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
10.1109/ICCVW.2019.00283
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a non-negligible distance during the exposure time of a single frame and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to the motion blur and cannot be reliably tracked by standard trackers. We propose a novel approach called Tracking by Deblatting based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. The trajectory is then estimated by fitting a piecewise quadratic curve, which models physically justifiable trajectories. As a result, tracked objects are precisely localized with higher temporal resolution than by conventional trackers. The proposed TbD tracker was evaluated on a newly created dataset of videos with ground truth obtained by a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baseline in both recall and trajectory accuracy.
[ { "created": "Thu, 9 May 2019 13:48:01 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2019 11:17:14 GMT", "version": "v2" } ]
2020-06-03
[ [ "Kotera", "Jan", "" ], [ "Rozumnyi", "Denys", "" ], [ "Šroubek", "Filip", "" ], [ "Matas", "Jiří", "" ] ]
Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a non-negligible distance during the exposure time of a single frame and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to the motion blur and cannot be reliably tracked by standard trackers. We propose a novel approach called Tracking by Deblatting based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. The trajectory is then estimated by fitting a piecewise quadratic curve, which models physically justifiable trajectories. As a result, tracked objects are precisely localized with higher temporal resolution than by conventional trackers. The proposed TbD tracker was evaluated on a newly created dataset of videos with ground truth obtained by a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baseline in both recall and trajectory accuracy.
2305.14243
Manuel Tran
Manuel Tran, Yashin Dicente Cid, Amal Lahiani, Fabian J. Theis, Tingying Peng, Eldad Klaiman
Training Transitive and Commutative Multimodal Transformers with LoReTTa
Accepted at NeurIPS 2023 (poster). Camera-ready version
null
null
null
cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Training multimodal foundation models is challenging due to the limited availability of multimodal datasets. While many public datasets pair images with text, few combine images with audio or text with audio. Even rarer are datasets that align all three modalities at once. Critical domains such as healthcare, infrastructure, or transportation are particularly affected by missing modalities. This makes it difficult to integrate all modalities into a large pre-trained neural network that can be used out-of-the-box or fine-tuned for different downstream tasks. We introduce LoReTTa (Linking mOdalities with a tRansitive and commutativE pre-Training sTrAtegy) to address this understudied problem. Our self-supervised framework unifies causal modeling and masked modeling with the rules of commutativity and transitivity. This allows us to transition within and between modalities. As a result, our pre-trained models are better at exploring the true underlying joint probability distribution. Given a dataset containing only the disjoint combinations (A, B) and (B, C), LoReTTa can model the relation A <-> C with A <-> B <-> C. In particular, we show that a transformer pre-trained with LoReTTa can handle any mixture of modalities at inference time, including the never-seen pair (A, C) and the triplet (A, B, C). We extensively evaluate our approach on a synthetic, medical, and reinforcement learning dataset. Across different domains, our universal multimodal transformer consistently outperforms strong baselines such as GPT, BERT, and CLIP on tasks involving the missing modality tuple.
[ { "created": "Tue, 23 May 2023 16:58:55 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 08:37:21 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 09:00:35 GMT", "version": "v3" }, { "created": "Sun, 24 Sep 2023 13:01:29 GMT", "version": "v4" }, { "created": "Tue, 16 Jan 2024 22:34:04 GMT", "version": "v5" } ]
2024-01-18
[ [ "Tran", "Manuel", "" ], [ "Cid", "Yashin Dicente", "" ], [ "Lahiani", "Amal", "" ], [ "Theis", "Fabian J.", "" ], [ "Peng", "Tingying", "" ], [ "Klaiman", "Eldad", "" ] ]
Training multimodal foundation models is challenging due to the limited availability of multimodal datasets. While many public datasets pair images with text, few combine images with audio or text with audio. Even rarer are datasets that align all three modalities at once. Critical domains such as healthcare, infrastructure, or transportation are particularly affected by missing modalities. This makes it difficult to integrate all modalities into a large pre-trained neural network that can be used out-of-the-box or fine-tuned for different downstream tasks. We introduce LoReTTa (Linking mOdalities with a tRansitive and commutativE pre-Training sTrAtegy) to address this understudied problem. Our self-supervised framework unifies causal modeling and masked modeling with the rules of commutativity and transitivity. This allows us to transition within and between modalities. As a result, our pre-trained models are better at exploring the true underlying joint probability distribution. Given a dataset containing only the disjoint combinations (A, B) and (B, C), LoReTTa can model the relation A <-> C with A <-> B <-> C. In particular, we show that a transformer pre-trained with LoReTTa can handle any mixture of modalities at inference time, including the never-seen pair (A, C) and the triplet (A, B, C). We extensively evaluate our approach on a synthetic, medical, and reinforcement learning dataset. Across different domains, our universal multimodal transformer consistently outperforms strong baselines such as GPT, BERT, and CLIP on tasks involving the missing modality tuple.
2107.06744
Reshma Rastogi
Reshma Rastogi (nee. Khemchandani) and Aman Pal
Efficient Learning of Pinball TWSVM using Privileged Information and its applications
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In any learning framework, expert knowledge always plays a crucial role. But, in the field of machine learning, the knowledge offered by an expert is rarely used. Moreover, machine learning algorithms (SVM based) generally use the hinge loss function, which is sensitive to noise. Thus, in order to benefit from expert knowledge and to reduce the sensitivity to noise, in this paper, we propose a privileged information based Twin Pinball Support Vector Machine classifier (Pin-TWSVMPI) where the expert's knowledge is in the form of privileged information. The proposed Pin-TWSVMPI incorporates privileged information by using a correcting function so as to obtain two nonparallel decision hyperplanes. Further, in order to make computations more efficient and fast, we use the Sequential Minimal Optimization (SMO) technique for obtaining the classifier and have also shown its application to pedestrian detection and handwritten digit recognition. Further, for UCI datasets, we first implement a procedure which extracts privileged information from the features of the dataset, which is then further utilized by Pin-TWSVMPI, leading to enhanced classification accuracy with comparatively less computational time.
[ { "created": "Wed, 14 Jul 2021 14:42:07 GMT", "version": "v1" } ]
2021-07-15
[ [ "Rastogi", "Reshma", "", "nee. Khemchandani" ], [ "Pal", "Aman", "" ] ]
In any learning framework, expert knowledge always plays a crucial role. But, in the field of machine learning, the knowledge offered by an expert is rarely used. Moreover, machine learning algorithms (SVM based) generally use the hinge loss function, which is sensitive to noise. Thus, in order to benefit from expert knowledge and to reduce the sensitivity to noise, in this paper, we propose a privileged information based Twin Pinball Support Vector Machine classifier (Pin-TWSVMPI) where the expert's knowledge is in the form of privileged information. The proposed Pin-TWSVMPI incorporates privileged information by using a correcting function so as to obtain two nonparallel decision hyperplanes. Further, in order to make computations more efficient and fast, we use the Sequential Minimal Optimization (SMO) technique for obtaining the classifier and have also shown its application to pedestrian detection and handwritten digit recognition. Further, for UCI datasets, we first implement a procedure which extracts privileged information from the features of the dataset, which is then further utilized by Pin-TWSVMPI, leading to enhanced classification accuracy with comparatively less computational time.
1109.6273
Robert Simmons
Robert J. Simmons
Structural focalization
A Twelf formalization is included and an Agda formalization is available at https://github.com/robsimmons/agda-lib/tree/focalization
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic, defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations by eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calculus is defined relative to some non-focused sequent calculus; focalization is the property that every non-focused derivation can be transformed into a focused derivation. In this paper, we present a focused sequent calculus for propositional intuitionistic logic and prove the focalization property relative to a standard presentation of propositional intuitionistic logic. Compared to existing approaches, the proof is quite concise, depending only on the internal soundness and completeness of the focused logic. In turn, both of these properties can be established (and mechanically verified) by structural induction in the style of Pfenning's structural cut elimination without the need for any tedious and repetitious invertibility lemmas. The proof of cut admissibility for the focused system, which establishes internal soundness, is not particularly novel. The proof of identity expansion, which establishes internal completeness, is a major contribution of this work.
[ { "created": "Wed, 28 Sep 2011 17:01:11 GMT", "version": "v1" }, { "created": "Sat, 7 Jan 2012 23:52:03 GMT", "version": "v2" }, { "created": "Sat, 14 Jan 2012 00:57:07 GMT", "version": "v3" }, { "created": "Mon, 13 Aug 2012 21:02:09 GMT", "version": "v4" }, { "created": "Thu, 18 Apr 2013 17:19:51 GMT", "version": "v5" }, { "created": "Mon, 6 Jan 2014 01:24:16 GMT", "version": "v6" }, { "created": "Mon, 17 Mar 2014 01:00:51 GMT", "version": "v7" } ]
2014-03-18
[ [ "Simmons", "Robert J.", "" ] ]
Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic, defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations by eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calculus is defined relative to some non-focused sequent calculus; focalization is the property that every non-focused derivation can be transformed into a focused derivation. In this paper, we present a focused sequent calculus for propositional intuitionistic logic and prove the focalization property relative to a standard presentation of propositional intuitionistic logic. Compared to existing approaches, the proof is quite concise, depending only on the internal soundness and completeness of the focused logic. In turn, both of these properties can be established (and mechanically verified) by structural induction in the style of Pfenning's structural cut elimination without the need for any tedious and repetitious invertibility lemmas. The proof of cut admissibility for the focused system, which establishes internal soundness, is not particularly novel. The proof of identity expansion, which establishes internal completeness, is a major contribution of this work.
1407.0039
Edinah Gnang K
Edinah K. Gnang
Integer formula encoding SageTeX package
null
null
null
null
cs.MS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes a SageTeX implementation of integer encoding procedures.
[ { "created": "Fri, 27 Jun 2014 00:13:14 GMT", "version": "v1" } ]
2014-07-02
[ [ "Gnang", "Edinah K.", "" ] ]
The paper describes a SageTeX implementation of integer encoding procedures.
2303.12506
Eden Hartman
Eden Hartman, Avinatan Hassidim, Yonatan Aumann and Erel Segal-Halevi
Leximin Approximation: From Single-Objective to Multi-Objective
null
null
null
null
cs.GT cs.DS cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Leximin is a common approach to multi-objective optimization, frequently employed in fair division applications. In leximin optimization, one first aims to maximize the smallest objective value; subject to this, one maximizes the second-smallest objective; and so on. Often, even the single-objective problem of maximizing the smallest value cannot be solved accurately. What can we hope to accomplish for leximin optimization in this situation? Recently, Henzinger et al. (2022) defined a notion of \emph{approximate} leximin optimality. Their definition, however, considers only an additive approximation. In this work, we first define the notion of approximate leximin optimality, allowing both multiplicative and additive errors. We then show how to compute, in polynomial time, such an approximate leximin solution, using an oracle that finds an approximation to a single-objective problem. The approximation factors of the algorithms are closely related: an $(\alpha,\epsilon)$-approximation for the single-objective problem (where $\alpha \in (0,1]$ and $\epsilon \geq 0$ are the multiplicative and additive factors respectively) translates into an $\left(\frac{\alpha^2}{1-\alpha + \alpha^2}, \frac{\epsilon}{1-\alpha +\alpha^2}\right)$-approximation for the multi-objective leximin problem, regardless of the number of objectives.
[ { "created": "Wed, 22 Mar 2023 12:27:15 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2023 00:28:11 GMT", "version": "v2" }, { "created": "Mon, 31 Jul 2023 09:17:37 GMT", "version": "v3" }, { "created": "Thu, 28 Sep 2023 14:51:57 GMT", "version": "v4" } ]
2023-09-29
[ [ "Hartman", "Eden", "" ], [ "Hassidim", "Avinatan", "" ], [ "Aumann", "Yonatan", "" ], [ "Segal-Halevi", "Erel", "" ] ]
Leximin is a common approach to multi-objective optimization, frequently employed in fair division applications. In leximin optimization, one first aims to maximize the smallest objective value; subject to this, one maximizes the second-smallest objective; and so on. Often, even the single-objective problem of maximizing the smallest value cannot be solved accurately. What can we hope to accomplish for leximin optimization in this situation? Recently, Henzinger et al. (2022) defined a notion of \emph{approximate} leximin optimality. Their definition, however, considers only an additive approximation. In this work, we first define the notion of approximate leximin optimality, allowing both multiplicative and additive errors. We then show how to compute, in polynomial time, such an approximate leximin solution, using an oracle that finds an approximation to a single-objective problem. The approximation factors of the algorithms are closely related: an $(\alpha,\epsilon)$-approximation for the single-objective problem (where $\alpha \in (0,1]$ and $\epsilon \geq 0$ are the multiplicative and additive factors respectively) translates into an $\left(\frac{\alpha^2}{1-\alpha + \alpha^2}, \frac{\epsilon}{1-\alpha +\alpha^2}\right)$-approximation for the multi-objective leximin problem, regardless of the number of objectives.
1204.1726
Edoardo Di Napoli
Lukas Kr\"amer, Edoardo Di Napoli, Martin Galgon, Bruno Lang, and Paolo Bientinesi
Dissecting the FEAST algorithm for generalized eigenproblems
11 Pages, 5 Figures. Submitted to Journal of Computational and Applied Mathematics
null
null
null
cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the FEAST method for computing selected eigenvalues and eigenvectors of large sparse matrix pencils. After establishing the close connection between FEAST and the well-known Rayleigh-Ritz method, we identify several critical issues that influence convergence and accuracy of the solver: the choice of the starting vector space, the stopping criterion, how the inner linear systems impact the quality of the solution, and the use of FEAST for computing eigenpairs from multiple intervals. We complement the study with numerical examples, and hint at possible improvements to overcome the existing problems.
[ { "created": "Sun, 8 Apr 2012 10:32:18 GMT", "version": "v1" } ]
2012-04-10
[ [ "Krämer", "Lukas", "" ], [ "Di Napoli", "Edoardo", "" ], [ "Galgon", "Martin", "" ], [ "Lang", "Bruno", "" ], [ "Bientinesi", "Paolo", "" ] ]
We analyze the FEAST method for computing selected eigenvalues and eigenvectors of large sparse matrix pencils. After establishing the close connection between FEAST and the well-known Rayleigh-Ritz method, we identify several critical issues that influence convergence and accuracy of the solver: the choice of the starting vector space, the stopping criterion, how the inner linear systems impact the quality of the solution, and the use of FEAST for computing eigenpairs from multiple intervals. We complement the study with numerical examples, and hint at possible improvements to overcome the existing problems.
2312.03579
Minna Hirvonen
Minna Hirvonen
The Implication Problem for Functional Dependencies and Variants of Marginal Distribution Equivalences
23 pages, improved version after reviewer comments, results unchanged
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study functional dependencies together with two different probabilistic dependency notions: unary marginal identity and unary marginal distribution equivalence. A unary marginal identity states that two variables x and y are identically distributed. A unary marginal distribution equivalence states that the multiset consisting of the marginal probabilities of all the values for variable x is the same as the corresponding multiset for y. We present a sound and complete axiomatization for the class of these dependencies and show that it has Armstrong relations. The axiomatization is infinite, but we show that there can be no finite axiomatization. The implication problem for the subclass that contains only functional dependencies and unary marginal identities can be simulated with functional dependencies and unary inclusion atoms, and therefore the problem is solvable in polynomial time. This complexity bound also holds in the case of the full class, which we show by constructing a polynomial-time algorithm.
[ { "created": "Wed, 6 Dec 2023 16:15:16 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 15:50:55 GMT", "version": "v2" } ]
2024-05-20
[ [ "Hirvonen", "Minna", "" ] ]
We study functional dependencies together with two different probabilistic dependency notions: unary marginal identity and unary marginal distribution equivalence. A unary marginal identity states that two variables x and y are identically distributed. A unary marginal distribution equivalence states that the multiset consisting of the marginal probabilities of all the values for variable x is the same as the corresponding multiset for y. We present a sound and complete axiomatization for the class of these dependencies and show that it has Armstrong relations. The axiomatization is infinite, but we show that there can be no finite axiomatization. The implication problem for the subclass that contains only functional dependencies and unary marginal identities can be simulated with functional dependencies and unary inclusion atoms, and therefore the problem is solvable in polynomial time. This complexity bound also holds in the case of the full class, which we show by constructing a polynomial-time algorithm.
cs/0001019
Joseph O'Rourke
Erik D. Demaine, Martin L. Demaine, Joseph O'Rourke
PushPush is NP-hard in 2D
18 pages, 13 figures, 1 table. Improves cs.CG/9911013
null
null
Smith Technical Report 065
cs.CG cs.DM
null
We prove that a particular pushing-blocks puzzle is intractable in 2D, improving an earlier result that established intractability in 3D [OS99]. The puzzle, inspired by the game *PushPush*, consists of unit square blocks on an integer lattice. An agent may push blocks (but never pull them) in attempting to move between given start and goal positions. In the PushPush version, the agent can only push one block at a time, and moreover, each block, when pushed, slides the maximal extent of its free range. We prove this version is NP-hard in 2D by reduction from SAT.
[ { "created": "Mon, 24 Jan 2000 14:04:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Demaine", "Erik D.", "" ], [ "Demaine", "Martin L.", "" ], [ "O'Rourke", "Joseph", "" ] ]
We prove that a particular pushing-blocks puzzle is intractable in 2D, improving an earlier result that established intractability in 3D [OS99]. The puzzle, inspired by the game *PushPush*, consists of unit square blocks on an integer lattice. An agent may push blocks (but never pull them) in attempting to move between given start and goal positions. In the PushPush version, the agent can only push one block at a time, and moreover, each block, when pushed, slides the maximal extent of its free range. We prove this version is NP-hard in 2D by reduction from SAT.
2202.06140
Negin Nikafrooz
Negin Nikafrooz, Zachary Fuge, Alexander Leonessa
Grasp Control of a Cable-Driven Robotic Hand Using a PVDF Slip Detection Sensor
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting and preventing slip is a major challenge in robotic hand operation, underpinning the robot's ability to perform safe and reliable grasps. Using the robotic hand design from the authors' earlier work, a sensing and control strategy is proposed here to prevent object slippage. The robotic hand is cable-driven, single-actuated, has five fingers, and is capable of replicating most human hand motions. The slip sensing approach utilizes a piezoelectric vibration sensor, namely, polyvinylidene fluoride (PVDF), which is a flexible, thin, cheap, and highly sensitive material. The power of the filtered PVDF signal is shown to exhibit identifiable signatures during slip, thus providing a suitable slip detection mechanism. Using the PVDF feedback, an integral controller is implemented to prevent the grasped object from falling and ensure a safe, powerful, and reliable grasp. The extension movement of the robotic hand is controlled using a bend sensor, through a proportional-integral (PI) controller. The robotic hand weighs 338 g. The functionality and robustness of the proposed slip-detection sensory system and control logic implementation are evaluated through experiments.
[ { "created": "Sat, 12 Feb 2022 20:51:12 GMT", "version": "v1" } ]
2022-02-15
[ [ "Nikafrooz", "Negin", "" ], [ "Fuge", "Zachary", "" ], [ "Leonessa", "Alexander", "" ] ]
Detecting and preventing slip is a major challenge in robotic hand operation, underpinning the robot's ability to perform safe and reliable grasps. Using the robotic hand design from the authors' earlier work, a sensing and control strategy is proposed here to prevent object slippage. The robotic hand is cable-driven, single-actuated, has five fingers, and is capable of replicating most human hand motions. The slip sensing approach utilizes a piezoelectric vibration sensor, namely, polyvinylidene fluoride (PVDF), which is a flexible, thin, cheap, and highly sensitive material. The power of the filtered PVDF signal is shown to exhibit identifiable signatures during slip, thus providing a suitable slip detection mechanism. Using the PVDF feedback, an integral controller is implemented to prevent the grasped object from falling and ensure a safe, powerful, and reliable grasp. The extension movement of the robotic hand is controlled using a bend sensor, through a proportional-integral (PI) controller. The robotic hand weighs 338 g. The functionality and robustness of the proposed slip-detection sensory system and control logic implementation are evaluated through experiments.
2203.01497
Shubham Singh
Shubham Singh, Ryan P. Russell and Patrick M. Wensing
Analytical Second-Order Partial Derivatives of Rigid-Body Inverse Dynamics
Accepted for IROS 2022 (Oct 23-27, 2022 Kyoto)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization-based robot control strategies often rely on first-order dynamics approximation methods, as in iLQR. Using second-order approximations of the dynamics is expensive due to the costly second-order partial derivatives of the dynamics with respect to the state and control. Current approaches for calculating these derivatives typically use automatic differentiation (AD) and chain-rule accumulation or finite-difference. In this paper, for the first time, we present analytical expressions for the second-order partial derivatives of inverse dynamics for open-chain rigid-body systems with floating base and multi-DoF joints. A new extension of spatial vector algebra is proposed that enables the analysis. A recursive algorithm with complexity of $\mathcal{O}(Nd^2)$ is also provided where $N$ is the number of bodies and $d$ is the depth of the kinematic tree. A comparison with AD in CasADi shows speedups of 1.5-3$\times$ for serial kinematic trees with $N> 5$, and a C++ implementation shows runtimes of $\approx$51$\mu s$ for a quadruped.
[ { "created": "Thu, 3 Mar 2022 03:21:06 GMT", "version": "v1" }, { "created": "Sun, 14 Aug 2022 21:55:26 GMT", "version": "v2" } ]
2022-08-16
[ [ "Singh", "Shubham", "" ], [ "Russell", "Ryan P.", "" ], [ "Wensing", "Patrick M.", "" ] ]
Optimization-based robot control strategies often rely on first-order dynamics approximation methods, as in iLQR. Using second-order approximations of the dynamics is expensive due to the costly second-order partial derivatives of the dynamics with respect to the state and control. Current approaches for calculating these derivatives typically use automatic differentiation (AD) and chain-rule accumulation or finite-difference. In this paper, for the first time, we present analytical expressions for the second-order partial derivatives of inverse dynamics for open-chain rigid-body systems with floating base and multi-DoF joints. A new extension of spatial vector algebra is proposed that enables the analysis. A recursive algorithm with complexity of $\mathcal{O}(Nd^2)$ is also provided where $N$ is the number of bodies and $d$ is the depth of the kinematic tree. A comparison with AD in CasADi shows speedups of 1.5-3$\times$ for serial kinematic trees with $N> 5$, and a C++ implementation shows runtimes of $\approx$51$\mu s$ for a quadruped.
1305.7167
Pavel Zaichenkov
Pavel Zaichenkov (1 and 4), Bert Gijsbers (2 and 3), Clemens Grelck (3), Olga Tveretina (1), Alex Shafarenko (1) ((1) University of Hertfordshire, (2) Ghent University, (3) University of Amsterdam, (4) Moscow Institute of Physics and Technology)
A Case Study in Coordination Programming: Performance Evaluation of S-Net vs Intel's Concurrent Collections
9 pages, 8 figures, 1 table, accepted for PLC 2014 workshop
null
null
null
cs.PL cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a programming methodology and runtime performance case study comparing the declarative data flow coordination language S-Net with Intel's Concurrent Collections (CnC). As a coordination language S-Net achieves a near-complete separation of concerns between sequential software components implemented in a separate algorithmic language and their parallel orchestration in an asynchronous data flow streaming network. We investigate the merits of S-Net and CnC with the help of a relevant and non-trivial linear algebra problem: tiled Cholesky decomposition. We describe two alternative S-Net implementations of tiled Cholesky factorization and compare them with two CnC implementations, one with explicit performance tuning and one without, that have previously been used to illustrate Intel CnC. Our experiments on a 48-core machine demonstrate that S-Net manages to outperform CnC on this problem.
[ { "created": "Thu, 30 May 2013 17:21:26 GMT", "version": "v1" }, { "created": "Sat, 15 Jun 2013 11:01:14 GMT", "version": "v2" }, { "created": "Thu, 7 Nov 2013 00:40:07 GMT", "version": "v3" }, { "created": "Thu, 3 Apr 2014 12:24:47 GMT", "version": "v4" } ]
2014-04-04
[ [ "Zaichenkov", "Pavel", "", "1 and 4" ], [ "Gijsbers", "Bert", "", "2 and 3" ], [ "Grelck", "Clemens", "" ], [ "Tveretina", "Olga", "" ], [ "Shafarenko", "Alex", "" ] ]
We present a programming methodology and runtime performance case study comparing the declarative data flow coordination language S-Net with Intel's Concurrent Collections (CnC). As a coordination language S-Net achieves a near-complete separation of concerns between sequential software components implemented in a separate algorithmic language and their parallel orchestration in an asynchronous data flow streaming network. We investigate the merits of S-Net and CnC with the help of a relevant and non-trivial linear algebra problem: tiled Cholesky decomposition. We describe two alternative S-Net implementations of tiled Cholesky factorization and compare them with two CnC implementations, one with explicit performance tuning and one without, that have previously been used to illustrate Intel CnC. Our experiments on a 48-core machine demonstrate that S-Net manages to outperform CnC on this problem.
2006.10592
Nitin Saurabh
Balagopal Komarath and Nitin Saurabh
On the complexity of detecting hazards
To appear in Information Processing Letters
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting and eliminating logic hazards in Boolean circuits is a fundamental problem in logic circuit design. We show that there is no $O(3^{(1-\epsilon)n} \text{poly}(s))$ time algorithm, for any $\epsilon > 0$, that detects logic hazards in Boolean circuits of size $s$ on $n$ variables under the assumption that the strong exponential time hypothesis is true. This lower bound holds even when the input circuits are restricted to be formulas of depth four. We also present a polynomial time algorithm for detecting $1$-hazards in DNF (or, $0$-hazards in CNF) formulas. Since $0$-hazards in DNF (or, $1$-hazards in CNF) formulas are easy to eliminate, this algorithm can be used to detect whether a given DNF or CNF formula has a hazard in practice.
[ { "created": "Thu, 18 Jun 2020 14:55:08 GMT", "version": "v1" } ]
2020-06-19
[ [ "Komarath", "Balagopal", "" ], [ "Saurabh", "Nitin", "" ] ]
Detecting and eliminating logic hazards in Boolean circuits is a fundamental problem in logic circuit design. We show that there is no $O(3^{(1-\epsilon)n} \text{poly}(s))$ time algorithm, for any $\epsilon > 0$, that detects logic hazards in Boolean circuits of size $s$ on $n$ variables under the assumption that the strong exponential time hypothesis is true. This lower bound holds even when the input circuits are restricted to be formulas of depth four. We also present a polynomial time algorithm for detecting $1$-hazards in DNF (or, $0$-hazards in CNF) formulas. Since $0$-hazards in DNF (or, $1$-hazards in CNF) formulas are easy to eliminate, this algorithm can be used to detect whether a given DNF or CNF formula has a hazard in practice.
cs/0510058
Peter Jung
Peter Jung
Precoding for 2x2 Doubly-Dispersive WSSUS Channels
6 pages, 6th International ITG-Conference on Source and Channel Coding (SCC 2006), Apr., 2006, Munich, typos corrected
null
null
null
cs.IT math.IT
null
Optimal link adaptation to the scattering function of wide sense stationary uncorrelated scattering (WSSUS) mobile communication channels is still an unsolved problem despite its importance for next-generation system design. In multicarrier transmission such link adaptation is performed by pulse shaping which in turn is equivalent to precoding with respect to the second order channel statistics. In the present framework a translation of the precoder optimization problem into an optimization problem over trace class operators is used. This problem which is also well-known in the context of quantum information theory is unsolved in general due to its non-convex nature. However in very low dimension the problem formulation reveals an additional analytic structure which again admits the solution to the optimal precoder and multiplexing scheme. Hence, in this contribution the analytic solution of the problem for the 2x2 doubly--dispersive WSSUS channel is presented.
[ { "created": "Thu, 20 Oct 2005 14:57:12 GMT", "version": "v1" }, { "created": "Mon, 16 Jan 2006 09:06:09 GMT", "version": "v2" } ]
2007-07-13
[ [ "Jung", "Peter", "" ] ]
Optimal link adaptation to the scattering function of wide sense stationary uncorrelated scattering (WSSUS) mobile communication channels is still an unsolved problem despite its importance for next-generation system design. In multicarrier transmission such link adaptation is performed by pulse shaping which in turn is equivalent to precoding with respect to the second order channel statistics. In the present framework a translation of the precoder optimization problem into an optimization problem over trace class operators is used. This problem which is also well-known in the context of quantum information theory is unsolved in general due to its non-convex nature. However in very low dimension the problem formulation reveals an additional analytic structure which again admits the solution to the optimal precoder and multiplexing scheme. Hence, in this contribution the analytic solution of the problem for the 2x2 doubly--dispersive WSSUS channel is presented.
1810.09379
Alexandre Rademaker
Alessandra Cid and Alexandre Rademaker and Bruno Cuconato and Valeria de Paiva
Linguistic Legal Concept Extraction in Portuguese
This work was accepted for publication in the JURIX 2018 (http://jurix2018.ai.rug.nl) in a short 5-pages version
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This work investigates legal concepts and their expression in Portuguese, concentrating on the "Order of Attorneys of Brazil" Bar exam. Using a corpus formed by a collection of multiple-choice questions, three norms related to the Ethics part of the OAB exam, language resources (Princeton WordNet and OpenWordNet-PT) and tools (AntConc and Freeling), we began to investigate the concepts and words missing from our repertory of concepts and words in Portuguese, the knowledge base OpenWordNet-PT. We add these concepts and words to OpenWordNet-PT and hence obtain a representation of these texts that is "contained" in the lexical knowledge base.
[ { "created": "Mon, 22 Oct 2018 15:58:57 GMT", "version": "v1" } ]
2018-10-23
[ [ "Cid", "Alessandra", "" ], [ "Rademaker", "Alexandre", "" ], [ "Cuconato", "Bruno", "" ], [ "de Paiva", "Valeria", "" ] ]
This work investigates legal concepts and their expression in Portuguese, concentrating on the "Order of Attorneys of Brazil" Bar exam. Using a corpus formed by a collection of multiple-choice questions, three norms related to the Ethics part of the OAB exam, language resources (Princeton WordNet and OpenWordNet-PT) and tools (AntConc and Freeling), we began to investigate the concepts and words missing from our repertory of concepts and words in Portuguese, the knowledge base OpenWordNet-PT. We add these concepts and words to OpenWordNet-PT and hence obtain a representation of these texts that is "contained" in the lexical knowledge base.
1309.5439
Mickael Randour
V\'eronique Bruy\`ere, Emmanuel Filiot, Mickael Randour and Jean-Fran\c{c}ois Raskin
Meet Your Expectations With Guarantees: Beyond Worst-Case Synthesis in Quantitative Games
Extended version. Journal version published in Information and Computation, conference version published in STACS 2014
null
null
null
cs.GT cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend the quantitative synthesis framework by going beyond the worst-case. On the one hand, classical analysis of two-player games involves an adversary (modeling the environment of the system) which is purely antagonistic and asks for strict guarantees. On the other hand, stochastic models like Markov decision processes represent situations where the system is faced with a purely randomized environment: the aim is then to optimize the expected payoff, with no guarantee on individual outcomes. We introduce the beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst-case while providing a higher expected value against a particular stochastic model of the environment given as input. This problem is relevant to produce system controllers that provide nice expected performance in the everyday situation while ensuring a strict (but relaxed) performance threshold even in the event of very bad (though unlikely) circumstances. We study the beyond worst-case synthesis problem for two important quantitative settings: the mean-payoff and the shortest path. In both cases, we show how to decide the existence of finite-memory strategies satisfying the problem and how to synthesize one if one exists. We establish algorithms and we study complexity bounds and memory requirements.
[ { "created": "Sat, 21 Sep 2013 05:44:00 GMT", "version": "v1" }, { "created": "Tue, 8 Oct 2013 18:30:41 GMT", "version": "v2" }, { "created": "Thu, 2 Jan 2014 10:13:00 GMT", "version": "v3" }, { "created": "Wed, 29 Oct 2014 18:31:34 GMT", "version": "v4" }, { "created": "Fri, 30 Oct 2015 16:37:53 GMT", "version": "v5" } ]
2015-11-02
[ [ "Bruyère", "Véronique", "" ], [ "Filiot", "Emmanuel", "" ], [ "Randour", "Mickael", "" ], [ "Raskin", "Jean-François", "" ] ]
We extend the quantitative synthesis framework by going beyond the worst-case. On the one hand, classical analysis of two-player games involves an adversary (modeling the environment of the system) which is purely antagonistic and asks for strict guarantees. On the other hand, stochastic models like Markov decision processes represent situations where the system is faced with a purely randomized environment: the aim is then to optimize the expected payoff, with no guarantee on individual outcomes. We introduce the beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst-case while providing a higher expected value against a particular stochastic model of the environment given as input. This problem is relevant to produce system controllers that provide nice expected performance in the everyday situation while ensuring a strict (but relaxed) performance threshold even in the event of very bad (though unlikely) circumstances. We study the beyond worst-case synthesis problem for two important quantitative settings: the mean-payoff and the shortest path. In both cases, we show how to decide the existence of finite-memory strategies satisfying the problem and how to synthesize one if one exists. We establish algorithms and we study complexity bounds and memory requirements.
1811.07126
Xue Yang
Xue Yang, Jirui Yang, Junchi Yan, Yue Zhang, Tengfei Zhang, Zhi Guo, Sun Xian and Kun Fu
SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects
10 pages, 10 figures, 6 tables, ICCV2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection has been a building block in computer vision. Though considerable progress has been made, there still exist challenges for objects with small size, arbitrary direction, and dense distribution. Apart from natural images, such issues are especially pronounced for aerial images of great importance. This paper presents a novel multi-category rotation detector for small, cluttered and rotated objects, namely SCRDet. Specifically, a sampling fusion network is devised which fuses multi-layer features with effective anchor sampling, to improve the sensitivity to small objects. Meanwhile, the supervised pixel attention network and the channel attention network are jointly explored for small and cluttered object detection by suppressing the noise and highlighting the object features. For more accurate rotation estimation, the IoU constant factor is added to the smooth L1 loss to address the boundary problem for the rotating bounding box. Extensive experiments on two remote sensing public datasets DOTA, NWPU VHR-10 as well as natural image datasets COCO, VOC2007 and scene text data ICDAR2015 show the state-of-the-art performance of our detector. The code and models will be available at https://github.com/DetectionTeamUCAS.
[ { "created": "Sat, 17 Nov 2018 08:24:25 GMT", "version": "v1" }, { "created": "Tue, 20 Nov 2018 08:22:24 GMT", "version": "v2" }, { "created": "Thu, 1 Aug 2019 06:50:29 GMT", "version": "v3" }, { "created": "Sat, 10 Aug 2019 02:53:31 GMT", "version": "v4" } ]
2019-08-13
[ [ "Yang", "Xue", "" ], [ "Yang", "Jirui", "" ], [ "Yan", "Junchi", "" ], [ "Zhang", "Yue", "" ], [ "Zhang", "Tengfei", "" ], [ "Guo", "Zhi", "" ], [ "Xian", "Sun", "" ], [ "Fu", "Kun", "" ] ]
Object detection has been a building block in computer vision. Though considerable progress has been made, there still exist challenges for objects with small size, arbitrary direction, and dense distribution. Apart from natural images, such issues are especially pronounced for aerial images of great importance. This paper presents a novel multi-category rotation detector for small, cluttered and rotated objects, namely SCRDet. Specifically, a sampling fusion network is devised which fuses multi-layer features with effective anchor sampling, to improve the sensitivity to small objects. Meanwhile, the supervised pixel attention network and the channel attention network are jointly explored for small and cluttered object detection by suppressing the noise and highlighting the object features. For more accurate rotation estimation, the IoU constant factor is added to the smooth L1 loss to address the boundary problem for the rotating bounding box. Extensive experiments on two remote sensing public datasets DOTA, NWPU VHR-10 as well as natural image datasets COCO, VOC2007 and scene text data ICDAR2015 show the state-of-the-art performance of our detector. The code and models will be available at https://github.com/DetectionTeamUCAS.
1507.08781
Leonid Yaroslavsky
L. Yaroslavsky
Can compressed sensing beat the Nyquist sampling rate?
null
Opt. Eng. 54(7) 079701 (2015)
10.1117/1.OE.54.7.079701
null
cs.IT math.IT physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data saving capability of "Compressed sensing (sampling)" in signal discretization is disputed and found to be far below the theoretical upper bound defined by the signal sparsity. On a simple and intuitive example, it is demonstrated that, in a realistic scenario for signals that are believed to be sparse, one can achieve a substantially larger saving than compressed sensing can. It is also shown that frequent assertions in the literature that "Compressed sensing" can beat the Nyquist sampling approach are misleading substitution of terms and are rooted in misinterpretation of the sampling theory.
[ { "created": "Fri, 31 Jul 2015 07:40:36 GMT", "version": "v1" } ]
2015-10-28
[ [ "Yaroslavsky", "L.", "" ] ]
Data saving capability of "Compressed sensing (sampling)" in signal discretization is disputed and found to be far below the theoretical upper bound defined by the signal sparsity. On a simple and intuitive example, it is demonstrated that, in a realistic scenario for signals that are believed to be sparse, one can achieve a substantially larger saving than compressed sensing can. It is also shown that frequent assertions in the literature that "Compressed sensing" can beat the Nyquist sampling approach are misleading substitution of terms and are rooted in misinterpretation of the sampling theory.
1807.05798
Jimmy Lin
Jimmy Lin and Peilin Yang
Repeatability Corner Cases in Document Ranking: The Impact of Score Ties
Published in the Proceedings of the 42nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019)
null
10.1145/3331184.3331339
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Document ranking experiments should be repeatable. However, the interaction between multi-threaded indexing and score ties during retrieval may yield non-deterministic rankings, making repeatability not as trivial as one might imagine. In the context of the open-source Lucene search engine, score ties are broken by internal document ids, which are assigned at index time. Due to multi-threaded indexing, which makes experimentation with large modern document collections practical, internal document ids are not assigned consistently between different index instances of the same collection, and thus score ties are broken unpredictably. This short paper examines the effectiveness impact of such score ties, quantifying the variability that can be attributed to this phenomenon. The obvious solution to this non-determinism and to ensure repeatable document ranking is to break score ties using external collection document ids. This approach, however, comes with measurable efficiency costs due to the necessity of consulting external identifiers during query evaluation.
[ { "created": "Mon, 16 Jul 2018 11:32:52 GMT", "version": "v1" }, { "created": "Mon, 2 Sep 2019 20:16:41 GMT", "version": "v2" } ]
2019-09-04
[ [ "Lin", "Jimmy", "" ], [ "Yang", "Peilin", "" ] ]
Document ranking experiments should be repeatable. However, the interaction between multi-threaded indexing and score ties during retrieval may yield non-deterministic rankings, making repeatability not as trivial as one might imagine. In the context of the open-source Lucene search engine, score ties are broken by internal document ids, which are assigned at index time. Due to multi-threaded indexing, which makes experimentation with large modern document collections practical, internal document ids are not assigned consistently between different index instances of the same collection, and thus score ties are broken unpredictably. This short paper examines the effectiveness impact of such score ties, quantifying the variability that can be attributed to this phenomenon. The obvious solution to this non-determinism and to ensure repeatable document ranking is to break score ties using external collection document ids. This approach, however, comes with measurable efficiency costs due to the necessity of consulting external identifiers during query evaluation.