Dataset schema (field name, type, min/max string length or list length):

id: string, 9 to 10 chars
submitter: string, 1 to 64 chars
authors: string, 4 to 20.7k chars
title: string, 4 to 246 chars
comments: string, 1 to 523 chars
journal-ref: string, 4 to 404 chars
doi: string, 11 to 153 chars
report-no: string, 2 to 254 chars
categories: string, 5 to 98 chars
license: string, 9 distinct values
orig_abstract: string, 14 to 3.35k chars
versions: list, 1 to 60 items
update_date: string, 10 chars
authors_parsed: list, 1 to 1.35k items
abstract: string, 11 to 3.34k chars
arXiv:2311.14883
Submitter: Sheikh Rabiul Islam
Authors: Alana Cedeno, Rachel Liang, and Sheikh Rabiul Islam
Title: Predicting Potential School Shooters from Social Media Posts
Journal-ref: IEEE Big Data 2023
Categories: cs.SI
License: http://creativecommons.org/licenses/by/4.0/
Abstract:
The rate of terror attacks has surged over the past decade, resulting in the tragic and senseless loss or alteration of numerous lives. Offenders behind mass shootings, bombings, or other domestic terrorism incidents have historically exhibited warning signs on social media before carrying out actual incidents. However, due to inadequate and comprehensive police procedures, authorities and social media platforms are often unable to detect these early indicators of intent. To tackle this issue, we aim to create a multimodal model capable of predicting sentiments simultaneously from both images (i.e., social media photos) and text (i.e., social media posts), generating a unified prediction. The proposed method involves segregating the image and text components of an online post and utilizing a captioning model to generate sentences summarizing the image's contents. Subsequently, a sentiment analyzer evaluates this caption, or description, along with the original post's text to determine whether the post is positive (i.e., concerning) or negative (i.e., benign). This undertaking represents a significant step toward implementing the developed system in real-world scenarios.
Versions: v1 (Sat, 25 Nov 2023 00:30:23 GMT)
Update date: 2023-11-28
Authors (parsed): [["Cedeno", "Alana", ""], ["Liang", "Rachel", ""], ["Islam", "Sheikh Rabiul", ""]]
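The abstract above describes a two-branch pipeline: caption the image, run the same sentiment analyzer over both the caption and the post text, and fuse the scores into one prediction. A minimal sketch of that flow follows; the captioner stub and the tiny keyword scorer are stand-ins for the authors' models, and `classify_post`, `CONCERNING_WORDS`, and the threshold are illustrative assumptions.

```python
# Hypothetical sketch of the image-plus-text pipeline described in the
# abstract. Both the captioning model and the lexicon-based scorer below
# are placeholders, not the paper's actual components.

CONCERNING_WORDS = {"gun", "revenge", "attack", "hate"}

def caption_image(image_path: str) -> str:
    # Stand-in for an image-captioning model.
    return "a person holding a gun"

def sentiment_score(text: str) -> float:
    # Toy scorer: fraction of tokens that appear in the concern lexicon.
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in CONCERNING_WORDS for t in tokens) / len(tokens)

def classify_post(image_path: str, post_text: str, threshold: float = 0.1) -> str:
    # Fuse the two branches by taking the stronger signal.
    caption = caption_image(image_path)
    fused = max(sentiment_score(caption), sentiment_score(post_text))
    return "concerning" if fused >= threshold else "benign"
```

Taking the maximum of the two branch scores is one simple fusion rule; the paper's unified prediction may combine the modalities differently.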
arXiv:2105.08872
Submitter: Jiansheng Fang
Authors: Jiansheng Fang, Huazhu Fu, Dan Zeng, Xiao Yan, Yuguang Yan, and Jiang Liu
Title: Combating Ambiguity for Hash-code Learning in Medical Instance Retrieval
Comments: 11 pages, 8 figures, JBHI Journal
Categories: cs.IR
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
When encountering a dubious diagnostic case, medical instance retrieval can help radiologists make evidence-based diagnoses by finding images containing instances similar to a query case from a large image database. The similarity between the query case and retrieved similar cases is determined by visual features extracted from pathologically abnormal regions. However, the manifestation of these regions often lacks specificity, i.e., different diseases can have the same manifestation, and different manifestations may occur at different stages of the same disease. To combat the manifestation ambiguity in medical instance retrieval, we propose a novel deep framework called Y-Net, encoding images into compact hash-codes generated from convolutional features by feature aggregation. Y-Net can learn highly discriminative convolutional features by unifying the pixel-wise segmentation loss and classification loss. The segmentation loss allows exploring subtle spatial differences for good spatial-discriminability while the classification loss utilizes class-aware semantic information for good semantic-separability. As a result, Y-Net can enhance the visual features in pathologically abnormal regions and suppress the disturbing of the background during model training, which could effectively embed discriminative features into the hash-codes in the retrieval stage. Extensive experiments on two medical image datasets demonstrate that Y-Net can alleviate the ambiguity of pathologically abnormal regions and its retrieval performance outperforms the state-of-the-art method by an average of 9.27\% on the returned list of 10.
Versions: v1 (Wed, 19 May 2021 01:13:05 GMT)
Update date: 2021-05-20
Authors (parsed): [["Fang", "Jiansheng", ""], ["Fu", "Huazhu", ""], ["Zeng", "Dan", ""], ["Yan", "Xiao", ""], ["Yan", "Yuguang", ""], ["Liu", "Jiang", ""]]
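The retrieval stage sketched in the abstract (convolutional features aggregated into compact hash-codes, then ranked by similarity) can be illustrated as follows; the sum-pooling aggregation, mean-thresholding, and Hamming ranking here are generic choices, not Y-Net's learned scheme.

```python
import numpy as np

# Illustrative sketch: aggregate conv feature maps into a vector, binarize
# into a hash-code, and rank database entries by Hamming distance.

def hash_code(feature_maps: np.ndarray) -> np.ndarray:
    # feature_maps: (channels, H, W) -> one bit per channel.
    v = feature_maps.sum(axis=(1, 2))       # sum-pool each channel
    return (v > v.mean()).astype(np.uint8)  # threshold at the mean

def hamming_rank(query: np.ndarray, db: np.ndarray) -> np.ndarray:
    # db: (n, bits); indices sorted by Hamming distance to the query.
    return np.argsort((db != query).sum(axis=1), kind="stable")

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(4, 8, 5, 5))           # 4 images, 8 channels
db_codes = np.stack([hash_code(f) for f in db_feats])
ranked = hamming_rank(db_codes[2], db_codes)       # query with entry 2
```

In a real system the features would come from a trained encoder; the point of the hash-codes is that Hamming ranking over bits is far cheaper than comparing dense feature vectors.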
arXiv:2310.06486
Submitter: Guoyuan An
Authors: Guoyuan An, Juhyung Seon, Inkyu An, Yuchi Huo, Sung-Eui Yoon
Title: Topological RANSAC for instance verification and retrieval without fine-tuning
Categories: cs.AI cs.CV cs.IR
License: http://creativecommons.org/licenses/by/4.0/
Abstract:
This paper presents an innovative approach to enhancing explainable image retrieval, particularly in situations where a fine-tuning set is unavailable. The widely-used SPatial verification (SP) method, despite its efficacy, relies on a spatial model and the hypothesis-testing strategy for instance recognition, leading to inherent limitations, including the assumption of planar structures and neglect of topological relations among features. To address these shortcomings, we introduce a pioneering technique that replaces the spatial model with a topological one within the RANSAC process. We propose bio-inspired saccade and fovea functions to verify the topological consistency among features, effectively circumventing the issues associated with SP's spatial model. Our experimental results demonstrate that our method significantly outperforms SP, achieving state-of-the-art performance in non-fine-tuning retrieval. Furthermore, our approach can enhance performance when used in conjunction with fine-tuned features. Importantly, our method retains high explainability and is lightweight, offering a practical and adaptable solution for a variety of real-world applications.
Versions: v1 (Tue, 10 Oct 2023 09:53:59 GMT)
Update date: 2023-10-11
Authors (parsed): [["An", "Guoyuan", ""], ["Seon", "Juhyung", ""], ["An", "Inkyu", ""], ["Huo", "Yuchi", ""], ["Yoon", "Sung-Eui", ""]]
arXiv:2306.12881
Submitter: Adrian Holzbock
Authors: Adrian Holzbock, Achyut Hegde, Klaus Dietmayer, and Vasileios Belagiannis
Title: Data-Free Backbone Fine-Tuning for Pruned Neural Networks
Comments: Accepted for presentation at the 31st European Signal Processing Conference (EUSIPCO) 2023, September 4-8, 2023, Helsinki, Finland
Categories: cs.CV
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
Model compression techniques reduce the computational load and memory consumption of deep neural networks. After the compression operation, e.g. parameter pruning, the model is normally fine-tuned on the original training dataset to recover from the performance drop caused by compression. However, the training data is not always available due to privacy issues or other factors. In this work, we present a data-free fine-tuning approach for pruning the backbone of deep neural networks. In particular, the pruned network backbone is trained with synthetically generated images, and our proposed intermediate supervision to mimic the unpruned backbone's output feature map. Afterwards, the pruned backbone can be combined with the original network head to make predictions. We generate synthetic images by back-propagating gradients to noise images while relying on L1-pruning for the backbone pruning. In our experiments, we show that our approach is task-independent due to pruning only the backbone. By evaluating our approach on 2D human pose estimation, object detection, and image classification, we demonstrate promising performance compared to the unpruned model. Our code is available at https://github.com/holzbock/dfbf.
Versions: v1 (Thu, 22 Jun 2023 13:44:40 GMT)
Update date: 2023-06-23
Authors (parsed): [["Holzbock", "Adrian", ""], ["Hegde", "Achyut", ""], ["Dietmayer", "Klaus", ""], ["Belagiannis", "Vasileios", ""]]
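The L1-pruning step the abstract relies on removes the convolutional filters with the smallest L1 norms. A minimal sketch, assuming the common (out_channels, in_channels, kH, kW) weight layout and an illustrative pruning ratio:

```python
import numpy as np

# Sketch of L1 (magnitude) filter pruning: drop the fraction `ratio` of
# output filters whose weights have the smallest L1 norm. Layout and
# ratio are assumptions for illustration.

def l1_prune_filters(weights: np.ndarray, ratio: float) -> np.ndarray:
    norms = np.abs(weights).sum(axis=(1, 2, 3))            # L1 per filter
    n_keep = max(1, int(round(weights.shape[0] * (1 - ratio))))
    keep = np.sort(np.argsort(norms)[-n_keep:])            # keep largest, in order
    return weights[keep]

w = np.zeros((4, 3, 3, 3))
w[0] += 1.0    # largest-norm filter
w[1] += 0.01   # smallest-norm filter
w[2] += 0.5
w[3] += 0.2
pruned = l1_prune_filters(w, ratio=0.5)   # keeps filters 0 and 2
```

After such a step the next layer's input channels must be pruned to match, which is part of why the paper's data-free fine-tuning on the backbone's output feature maps is needed to recover accuracy.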
arXiv:2102.10607
Submitter: Sivaramakrishnan Rajaraman
Authors: Sivaramakrishnan Rajaraman, Les Folio, Jane Dimperio, Philip Alderson and Sameer Antani
Title: Improved Semantic Segmentation of Tuberculosis-consistent findings in Chest X-rays Using Augmented Training of Modality-specific U-Net Models with Weak Localizations
Comments: 31 pages, 19 figures, journal publication
Categories: cs.CV
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
Deep learning (DL) has drawn tremendous attention in object localization and recognition for both natural and medical images. U-Net segmentation models have demonstrated superior performance compared to conventional handcrafted feature-based methods. Medical image modality-specific DL models are better at transferring domain knowledge to a relevant target task than those that are pretrained on stock photography images. This helps improve model adaptation, generalization, and class-specific region of interest (ROI) localization. In this study, we train chest X-ray (CXR) modality-specific U-Nets and other state-of-the-art U-Net models for semantic segmentation of tuberculosis (TB)-consistent findings. Automated segmentation of such manifestations could help radiologists reduce errors and supplement decision-making while improving patient care and productivity. Our approach uses the publicly available TBX11K CXR dataset with weak TB annotations, typically provided as bounding boxes, to train a set of U-Net models. Next, we improve the results by augmenting the training data with weak localizations, post-processed into an ROI mask, from a DL classifier that is trained to classify CXRs as showing normal lungs or suspected TB manifestations. Test data are individually derived from the TBX11K CXR training distribution and other cross-institutional collections including the Shenzhen TB and Montgomery TB CXR datasets. We observe that our augmented training strategy helped the CXR modality-specific U-Net models achieve superior performance with test data derived from the TBX11K CXR training distribution as well as from cross-institutional collections (p < 0.05).
Versions: v1 (Sun, 21 Feb 2021 14:03:49 GMT); v2 (Mon, 1 Mar 2021 18:10:45 GMT); v3 (Fri, 26 Mar 2021 17:14:27 GMT)
Update date: 2021-03-29
Authors (parsed): [["Rajaraman", "Sivaramakrishnan", ""], ["Folio", "Les", ""], ["Dimperio", "Jane", ""], ["Alderson", "Philip", ""], ["Antani", "Sameer", ""]]
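Turning weak bounding-box annotations into a binary ROI mask, as the abstract describes for the TBX11K-style weak TB labels, is a simple rasterization step. A sketch, assuming pixel-coordinate boxes in (x_min, y_min, x_max, y_max) form:

```python
import numpy as np

# Sketch: rasterize weak bounding-box labels into a binary ROI mask that
# can supervise a segmentation model. The box format is an assumption.

def boxes_to_mask(shape, boxes):
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1   # rows are y, columns are x
    return mask

# Two boxes on an 8x8 image: a 2x2 region and a 3x3 region.
mask = boxes_to_mask((8, 8), [(1, 1, 3, 3), (5, 5, 8, 8)])
```

In the paper the boxes come from a DL classifier's weak localizations and are post-processed before use; the sketch shows only the box-to-mask conversion itself.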
arXiv:1905.03767
Submitter: Ashkan Khakzar
Authors: Ashkan Khakzar, Shadi Albarqouni, Nassir Navab
Title: Learning Interpretable Features via Adversarially Robust Optimization
Comments: MICCAI 2019 (Medical Image Computing and Computer Assisted Interventions)
Categories: cs.CV
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
Neural networks are proven to be remarkably successful for classification and diagnosis in medical applications. However, the ambiguity in the decision-making process and the interpretability of the learned features is a matter of concern. In this work, we propose a method for improving the feature interpretability of neural network classifiers. Initially, we propose a baseline convolutional neural network with state of the art performance in terms of accuracy and weakly supervised localization. Subsequently, the loss is modified to integrate robustness to adversarial examples into the training process. In this work, feature interpretability is quantified via evaluating the weakly supervised localization using the ground truth bounding boxes. Interpretability is also visually assessed using class activation maps and saliency maps. The method is applied to NIH ChestX-ray14, the largest publicly available chest x-rays dataset. We demonstrate that the adversarially robust optimization paradigm improves feature interpretability both quantitatively and visually.
Versions: v1 (Thu, 9 May 2019 17:50:25 GMT); v2 (Mon, 19 Aug 2019 09:18:32 GMT)
Update date: 2019-08-20
Authors (parsed): [["Khakzar", "Ashkan", ""], ["Albarqouni", "Shadi", ""], ["Navab", "Nassir", ""]]
arXiv:1902.03519
Submitter: Ali Vakilian
Authors: Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, Tal Wagner
Title: Scalable Fair Clustering
Comments: ICML 2019
Categories: cs.DS cs.LG
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
We study the fair variant of the classic $k$-median problem introduced by Chierichetti et al. [2017]. In the standard $k$-median problem, given an input pointset $P$, the goal is to find $k$ centers $C$ and assign each input point to one of the centers in $C$ such that the average distance of points to their cluster center is minimized. In the fair variant of $k$-median, the points are colored, and the goal is to minimize the same average distance objective while ensuring that all clusters have an "approximately equal" number of points of each color. Chierichetti et al. proposed a two-phase algorithm for fair $k$-clustering. In the first step, the pointset is partitioned into subsets called fairlets that satisfy the fairness requirement and approximately preserve the $k$-median objective. In the second step, fairlets are merged into $k$ clusters by one of the existing $k$-median algorithms. The running time of this algorithm is dominated by the first step, which takes super-quadratic time. In this paper, we present a practical approximate fairlet decomposition algorithm that runs in nearly linear time. Our algorithm additionally allows for finer control over the balance of resulting clusters than the original work. We complement our theoretical bounds with empirical evaluation.
Versions: v1 (Sun, 10 Feb 2019 00:04:34 GMT); v2 (Mon, 10 Jun 2019 18:19:34 GMT)
Update date: 2019-06-12
Authors (parsed): [["Backurs", "Arturs", ""], ["Indyk", "Piotr", ""], ["Onak", "Krzysztof", ""], ["Schieber", "Baruch", ""], ["Vakilian", "Ali", ""], ["Wagner", "Tal", ""]]
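The "approximately equal" color requirement above is usually quantified by the balance of a clustering: following Chierichetti et al.'s two-color definition, each cluster's balance is min(#red/#blue, #blue/#red), and the clustering's balance is the minimum over clusters. A small sketch of that measure (the two-color restriction and the "r"/"b" labels are illustrative):

```python
from collections import Counter

# Sketch of the two-color balance measure: 1.0 means every cluster has
# equal color counts; any monochromatic cluster forces balance 0.

def balance(clusters):
    # clusters: list of clusters, each a list of color labels "r" / "b".
    worst = 1.0
    for cluster in clusters:
        counts = Counter(cluster)
        r, b = counts.get("r", 0), counts.get("b", 0)
        if r == 0 or b == 0:
            return 0.0
        worst = min(worst, r / b, b / r)
    return worst
```

Fairlets are small groups chosen so that each already meets a target balance; any clustering built by merging whole fairlets then inherits that balance, which is what makes the paper's two-phase approach work.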
arXiv:1910.13620
Submitter: Xiang Huang
Authors: Xiang Huang, Jack H. Lutz, and Andrei N. Migunov
Title: Algorithmic Randomness in Continuous-Time Markov Chains
Categories: cs.IT cs.LO math.IT math.PR
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
In this paper, we develop the elements of the theory of algorithmic randomness in continuous-time Markov chains (CTMCs). Our main contribution is a rigorous, useful notion of what it means for an $\textit{individual trajectory}$ of a CTMC to be $\textit{random}$. CTMCs have discrete state spaces and operate in continuous time. This, together with the fact that trajectories may or may not halt, presents challenges not encountered in more conventional developments of algorithmic randomness. Although we formulate algorithmic randomness in the general context of CTMCs, we are primarily interested in the $\textit{computational}$ power of stochastic chemical reaction networks, which are special cases of CTMCs. This leads us to embrace situations in which the long-term behavior of a network depends essentially on its initial state and hence to eschew assumptions that are frequently made in Markov chain theory to avoid such dependencies. After defining the randomness of trajectories in terms of a new kind of martingale (algorithmic betting strategy), we prove equivalent characterizations in terms of constructive measure theory and Kolmogorov complexity. As a preliminary application, we prove that, in any stochastic chemical reaction network, $\textit{every}$ random trajectory with bounded molecular counts has the $\textit{non-Zeno property}$ that infinitely many reactions do not occur in any finite interval of time.
Versions: v1 (Wed, 30 Oct 2019 01:48:38 GMT); v2 (Tue, 7 Dec 2021 05:03:22 GMT); v3 (Fri, 17 Dec 2021 17:52:23 GMT)
Update date: 2021-12-20
Authors (parsed): [["Huang", "Xiang", ""], ["Lutz", "Jack H.", ""], ["Migunov", "Andrei N.", ""]]
arXiv:2306.12725
Submitter: Senbao Shi
Authors: Senbao Shi, Zhenran Xu, Baotian Hu, Min Zhang
Title: Generative Multimodal Entity Linking
Comments: Accepted by LREC-COLING 2024
Categories: cs.CL
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
Multimodal Entity Linking (MEL) is the task of mapping mentions with multimodal contexts to the referent entities from a knowledge base. Existing MEL methods mainly focus on designing complex multimodal interaction mechanisms and require fine-tuning all model parameters, which can be prohibitively costly and difficult to scale in the era of Large Language Models (LLMs). In this work, we propose GEMEL, a Generative Multimodal Entity Linking framework based on LLMs, which directly generates target entity names. We keep the vision and language model frozen and only train a feature mapper to enable cross-modality interactions. To adapt LLMs to the MEL task, we leverage the in-context learning capability of LLMs by retrieving multimodal instances as demonstrations. Extensive experiments show that, with only ~0.3% of the model parameters fine-tuned, GEMEL achieves state-of-the-art results on two well-established MEL datasets (7.7% accuracy gains on WikiDiverse and 8.8% accuracy gains on WikiMEL). The performance gain stems from mitigating the popularity bias of LLM predictions and disambiguating less common entities effectively. Further analysis verifies the generality and scalability of GEMEL. Our framework is compatible with any off-the-shelf language model, paving the way towards an efficient and general solution for utilizing LLMs in the MEL task. Our code is available at https://github.com/HITsz-TMG/GEMEL.
Versions: v1 (Thu, 22 Jun 2023 07:57:19 GMT); v2 (Fri, 18 Aug 2023 05:12:12 GMT); v3 (Tue, 19 Mar 2024 12:30:53 GMT); v4 (Wed, 20 Mar 2024 01:30:41 GMT)
Update date: 2024-03-21
Authors (parsed): [["Shi", "Senbao", ""], ["Xu", "Zhenran", ""], ["Hu", "Baotian", ""], ["Zhang", "Min", ""]]
arXiv:1602.01648
Submitter: Maiara F. Bollauf
Authors: Maiara F. Bollauf and Ram Zamir
Title: Uniformity Properties of Construction C
Comments: 5 pages, 1 figure, submitted to ISIT 2016
Categories: cs.IT math.IT
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
Construction C (also known as Forney's multi-level code formula) forms a Euclidean code for the additive white Gaussian noise (AWGN) channel from $L$ binary code components. If the component codes are linear, then the minimum distance is the same for all the points, although the kissing number may vary. In fact, while in the single level ($L=1$) case it reduces to lattice Construction A, a multi-level Construction C is in general not a lattice. We show that the two-level ($L=2$) case is special: a two-level Construction C satisfies Forney's definition for a geometrically uniform constellation. Specifically, every point sees the same configuration of neighbors, up to a reflection of the coordinates in which the lower level code is equal to 1. In contrast, for three levels and up ($L\geq 3$), we construct examples where the distance spectrum varies between the points, hence the constellation is not geometrically uniform.
Versions: v1 (Thu, 4 Feb 2016 11:53:31 GMT); v2 (Mon, 9 May 2016 15:46:00 GMT)
Update date: 2016-05-10
Authors (parsed): [["Bollauf", "Maiara F.", ""], ["Zamir", "Ram", ""]]
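A two-level ($L=2$) Construction C constellation consists of points of the form $c_1 + 2c_2$ (coordinate-wise, here taken mod 4 to get a finite constellation), with $c_1 \in C_1$ and $c_2 \in C_2$. A small sketch with illustrative component codes:

```python
from itertools import product

# Sketch of a two-level Construction C constellation over length-2 words:
# points c1 + 2*c2 (mod 4, per coordinate). The tiny component codes are
# illustrative choices, not from the paper.

C1 = [(0, 0), (1, 1)]                    # length-2 binary repetition code
C2 = [(0, 0), (0, 1), (1, 0), (1, 1)]    # all length-2 binary words

points = sorted({tuple((a + 2 * b) % 4 for a, b in zip(c1, c2))
                 for c1, c2 in product(C1, C2)})
```

With linear component codes like these, every point sees the same set of squared distances to its neighbors, consistent with the geometric-uniformity result for $L=2$ stated above; the paper shows this can fail for $L \geq 3$.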
arXiv:2009.14794
Submitter: Valerii Likhosherstov
Authors: Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller
Title: Rethinking Attention with Performers
Comments: Published as a conference paper + oral presentation at ICLR 2021. 38 pages. See https://github.com/google-research/google-research/tree/master/protein_lm for protein language model code, and https://github.com/google-research/google-research/tree/master/performer for Performer code. See https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html for Google AI Blog
Categories: cs.LG cs.CL stat.ML
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract:
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing the effectiveness of the novel attention-learning paradigm leveraged by Performers.
[ { "created": "Wed, 30 Sep 2020 17:09:09 GMT", "version": "v1" }, { "created": "Tue, 16 Feb 2021 21:40:24 GMT", "version": "v2" }, { "created": "Tue, 9 Mar 2021 16:26:47 GMT", "version": "v3" }, { "created": "Sat, 19 Nov 2022 12:45:21 GMT", "version": "v4" } ]
2022-11-22
[ [ "Choromanski", "Krzysztof", "" ], [ "Likhosherstov", "Valerii", "" ], [ "Dohan", "David", "" ], [ "Song", "Xingyou", "" ], [ "Gane", "Andreea", "" ], [ "Sarlos", "Tamas", "" ], [ "Hawkins", "Peter", "" ], [ "Davis", "Jared", "" ], [ "Mohiuddin", "Afroz", "" ], [ "Kaiser", "Lukasz", "" ], [ "Belanger", "David", "" ], [ "Colwell", "Lucy", "" ], [ "Weller", "Adrian", "" ] ]
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing the effectiveness of the novel attention-learning paradigm leveraged by Performers.
2011.07805
Guoqiang Wu
Guoqiang Wu, Jun Zhu
Multi-label classification: do Hamming loss and subset accuracy really conflict with each other?
To Appear in NeurIPS 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various evaluation measures have been developed for multi-label classification, including Hamming Loss (HL), Subset Accuracy (SA) and Ranking Loss (RL). However, there is a gap between empirical results and the existing theories: 1) an algorithm often empirically performs well on some measure(s) while poorly on others, yet a formal theoretical analysis is lacking; and 2) in small label space cases, the algorithms optimizing HL often have comparable or even better performance on the SA measure than those optimizing SA directly, while existing theoretical results show that SA and HL are conflicting measures. This paper attempts to fill this gap by analyzing the learning guarantees of the corresponding learning algorithms on both SA and HL measures. We show that when a learning algorithm optimizes HL with its surrogate loss, it enjoys an error bound for the HL measure independent of $c$ (the number of labels), while the bound for the SA measure depends on at most $O(c)$. On the other hand, when directly optimizing SA with its surrogate loss, it has learning guarantees that depend on $O(\sqrt{c})$ for both HL and SA measures. This explains the observation that when the label space is not large, optimizing HL with its surrogate loss can have promising performance for SA. We further show that our techniques are applicable to analyze the learning guarantees of algorithms on other measures, such as RL. Finally, the theoretical analyses are supported by experimental results.
[ { "created": "Mon, 16 Nov 2020 09:13:16 GMT", "version": "v1" } ]
2020-11-17
[ [ "Wu", "Guoqiang", "" ], [ "Zhu", "Jun", "" ] ]
Various evaluation measures have been developed for multi-label classification, including Hamming Loss (HL), Subset Accuracy (SA) and Ranking Loss (RL). However, there is a gap between empirical results and the existing theories: 1) an algorithm often empirically performs well on some measure(s) while poorly on others, yet a formal theoretical analysis is lacking; and 2) in small label space cases, the algorithms optimizing HL often have comparable or even better performance on the SA measure than those optimizing SA directly, while existing theoretical results show that SA and HL are conflicting measures. This paper attempts to fill this gap by analyzing the learning guarantees of the corresponding learning algorithms on both SA and HL measures. We show that when a learning algorithm optimizes HL with its surrogate loss, it enjoys an error bound for the HL measure independent of $c$ (the number of labels), while the bound for the SA measure depends on at most $O(c)$. On the other hand, when directly optimizing SA with its surrogate loss, it has learning guarantees that depend on $O(\sqrt{c})$ for both HL and SA measures. This explains the observation that when the label space is not large, optimizing HL with its surrogate loss can have promising performance for SA. We further show that our techniques are applicable to analyze the learning guarantees of algorithms on other measures, such as RL. Finally, the theoretical analyses are supported by experimental results.
1509.04199
Eric Sopena
Eric Sopena (LaBRI)
i-MARK: A New Subtraction Division Game
A few typos have been corrected, including the statement of Theorem 8
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given two finite sets of integers $S\subseteq\NNN\setminus\{0\}$ and $D\subseteq\NNN\setminus\{0,1\}$, the impartial combinatorial game $\IMARK(S,D)$ is played on a heap of tokens. From a heap of $n$ tokens, each player can move either to a heap of $n-s$ tokens for some $s\in S$, or to a heap of $n/d$ tokens for some $d\in D$ if $d$ divides $n$. Such games can be considered as an integral variant of \MARK-type games, introduced by Elwyn Berlekamp and Joe Buhler and studied by Aviezri Fraenkel and Alan Guo, for which it is allowed to move from a heap of $n$ tokens to a heap of $\lfloor n/d\rfloor$ tokens for any $d\in D$. Under normal convention, it is observed that the Sprague-Grundy sequence of the game $\IMARK(S,D)$ is aperiodic for any sets $S$ and $D$. However, we prove that, in many cases, this sequence is almost periodic and that the set of winning positions is periodic. Moreover, in all these cases, the Sprague-Grundy value of a heap of $n$ tokens can be computed in time $O(\log n)$. We also prove that, under mis\`ere convention, the outcome sequence of these games is purely periodic.
[ { "created": "Mon, 14 Sep 2015 16:57:57 GMT", "version": "v1" }, { "created": "Mon, 9 Nov 2015 15:11:52 GMT", "version": "v2" } ]
2015-11-10
[ [ "Sopena", "Eric", "", "LaBRI" ] ]
Given two finite sets of integers $S\subseteq\NNN\setminus\{0\}$ and $D\subseteq\NNN\setminus\{0,1\}$, the impartial combinatorial game $\IMARK(S,D)$ is played on a heap of tokens. From a heap of $n$ tokens, each player can move either to a heap of $n-s$ tokens for some $s\in S$, or to a heap of $n/d$ tokens for some $d\in D$ if $d$ divides $n$. Such games can be considered as an integral variant of \MARK-type games, introduced by Elwyn Berlekamp and Joe Buhler and studied by Aviezri Fraenkel and Alan Guo, for which it is allowed to move from a heap of $n$ tokens to a heap of $\lfloor n/d\rfloor$ tokens for any $d\in D$. Under normal convention, it is observed that the Sprague-Grundy sequence of the game $\IMARK(S,D)$ is aperiodic for any sets $S$ and $D$. However, we prove that, in many cases, this sequence is almost periodic and that the set of winning positions is periodic. Moreover, in all these cases, the Sprague-Grundy value of a heap of $n$ tokens can be computed in time $O(\log n)$. We also prove that, under mis\`ere convention, the outcome sequence of these games is purely periodic.
2407.11585
Tian Shilong
Shilong Tian, Hong Chen, Chengtao Lv, Yu Liu, Jinyang Guo, Xianglong Liu, Shengxi Li, Hao Yang, Tao Xie
QVD: Post-training Quantization for Video Diffusion Models
accepted by ACMMM2024
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, video diffusion models (VDMs) have garnered significant attention due to their notable advancements in generating coherent and realistic video content. However, processing multiple frame features concurrently, coupled with the considerable model size, results in high latency and extensive memory consumption, hindering their broader application. Post-training quantization (PTQ) is an effective technique to reduce memory footprint and improve computational efficiency. Unlike in image diffusion models, we observe that the temporal features, which are integrated into all frame features, exhibit pronounced skewness. Furthermore, we investigate significant inter-channel disparities and asymmetries in the activation of video diffusion models, resulting in low coverage of quantization levels by individual channels and increasing the challenge of quantization. To address these issues, we introduce the first PTQ strategy tailored for video diffusion models, dubbed QVD. Specifically, we propose the High Temporal Discriminability Quantization (HTDQ) method, designed for temporal features, which retains the high discriminability of quantized features, providing precise temporal guidance for all video frames. In addition, we present the Scattered Channel Range Integration (SCRI) method which aims to improve the coverage of quantization levels across individual channels. Experimental validations across various models, datasets, and bit-width settings demonstrate the effectiveness of our QVD in terms of diverse metrics. In particular, we achieve near-lossless performance on W8A8, outperforming the current methods by 205.12 in FVD.
[ { "created": "Tue, 16 Jul 2024 10:47:27 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 05:27:04 GMT", "version": "v2" } ]
2024-07-18
[ [ "Tian", "Shilong", "" ], [ "Chen", "Hong", "" ], [ "Lv", "Chengtao", "" ], [ "Liu", "Yu", "" ], [ "Guo", "Jinyang", "" ], [ "Liu", "Xianglong", "" ], [ "Li", "Shengxi", "" ], [ "Yang", "Hao", "" ], [ "Xie", "Tao", "" ] ]
Recently, video diffusion models (VDMs) have garnered significant attention due to their notable advancements in generating coherent and realistic video content. However, processing multiple frame features concurrently, coupled with the considerable model size, results in high latency and extensive memory consumption, hindering their broader application. Post-training quantization (PTQ) is an effective technique to reduce memory footprint and improve computational efficiency. Unlike in image diffusion models, we observe that the temporal features, which are integrated into all frame features, exhibit pronounced skewness. Furthermore, we investigate significant inter-channel disparities and asymmetries in the activation of video diffusion models, resulting in low coverage of quantization levels by individual channels and increasing the challenge of quantization. To address these issues, we introduce the first PTQ strategy tailored for video diffusion models, dubbed QVD. Specifically, we propose the High Temporal Discriminability Quantization (HTDQ) method, designed for temporal features, which retains the high discriminability of quantized features, providing precise temporal guidance for all video frames. In addition, we present the Scattered Channel Range Integration (SCRI) method which aims to improve the coverage of quantization levels across individual channels. Experimental validations across various models, datasets, and bit-width settings demonstrate the effectiveness of our QVD in terms of diverse metrics. In particular, we achieve near-lossless performance on W8A8, outperforming the current methods by 205.12 in FVD.
2009.02717
Suhail Sherif
Arkadev Chattopadhyay, Ankit Garg, Suhail Sherif
Towards Stronger Counterexamples to the Log-Approximate-Rank Conjecture
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give improved separations for the query complexity analogue of the log-approximate-rank conjecture, i.e., we show that there are a plethora of total Boolean functions on $n$ input bits, each of which has approximate Fourier sparsity at most $O(n^3)$ and randomized parity decision tree complexity $\Theta(n)$. This improves upon the recent work of Chattopadhyay, Mande and Sherif (JACM '20) both qualitatively (in terms of designing a large number of examples) and quantitatively (improving the gap from quartic to cubic). We leave open the problem of proving a randomized communication complexity lower bound for XOR compositions of our examples. A linear lower bound would lead to new and improved refutations of the log-approximate-rank conjecture. Moreover, if any of these compositions had even a sub-linear cost randomized communication protocol, it would demonstrate that randomized parity decision tree complexity does not lift to randomized communication complexity in general (with the XOR gadget).
[ { "created": "Sun, 6 Sep 2020 11:57:33 GMT", "version": "v1" } ]
2020-09-08
[ [ "Chattopadhyay", "Arkadev", "" ], [ "Garg", "Ankit", "" ], [ "Sherif", "Suhail", "" ] ]
We give improved separations for the query complexity analogue of the log-approximate-rank conjecture, i.e., we show that there are a plethora of total Boolean functions on $n$ input bits, each of which has approximate Fourier sparsity at most $O(n^3)$ and randomized parity decision tree complexity $\Theta(n)$. This improves upon the recent work of Chattopadhyay, Mande and Sherif (JACM '20) both qualitatively (in terms of designing a large number of examples) and quantitatively (improving the gap from quartic to cubic). We leave open the problem of proving a randomized communication complexity lower bound for XOR compositions of our examples. A linear lower bound would lead to new and improved refutations of the log-approximate-rank conjecture. Moreover, if any of these compositions had even a sub-linear cost randomized communication protocol, it would demonstrate that randomized parity decision tree complexity does not lift to randomized communication complexity in general (with the XOR gadget).
2105.06723
Amrita Suresh
Benedikt Bollig, Alain Finkel, Amrita Suresh
Bounded Reachability Problems are Decidable in FIFO Machines
null
Logical Methods in Computer Science, Volume 18, Issue 1 (January 20, 2022) lmcs:7485
10.46298/lmcs-18(1:19)2022
null
cs.LO cs.FL
http://creativecommons.org/licenses/by/4.0/
The undecidability of basic decision problems for general FIFO machines such as reachability and unboundedness is well-known. In this paper, we provide an underapproximation for the general model by considering only runs that are input-bounded (i.e. the sequence of messages sent through a particular channel belongs to a given bounded language). We prove, by reducing this model to a counter machine with restricted zero tests, that the rational-reachability problem (and by extension, control-state reachability, unboundedness, deadlock, etc.) is decidable. This class of machines subsumes input-letter-bounded machines, flat machines, linear FIFO nets, and monogeneous machines, for which some of these problems were already shown to be decidable. These theoretical results can form the foundations to build a tool to verify general FIFO machines based on the analysis of input-bounded machines.
[ { "created": "Fri, 14 May 2021 09:13:33 GMT", "version": "v1" }, { "created": "Sun, 14 Nov 2021 23:27:44 GMT", "version": "v2" }, { "created": "Wed, 8 Dec 2021 14:52:09 GMT", "version": "v3" }, { "created": "Wed, 19 Jan 2022 16:00:43 GMT", "version": "v4" } ]
2023-06-22
[ [ "Bollig", "Benedikt", "" ], [ "Finkel", "Alain", "" ], [ "Suresh", "Amrita", "" ] ]
The undecidability of basic decision problems for general FIFO machines such as reachability and unboundedness is well-known. In this paper, we provide an underapproximation for the general model by considering only runs that are input-bounded (i.e. the sequence of messages sent through a particular channel belongs to a given bounded language). We prove, by reducing this model to a counter machine with restricted zero tests, that the rational-reachability problem (and by extension, control-state reachability, unboundedness, deadlock, etc.) is decidable. This class of machines subsumes input-letter-bounded machines, flat machines, linear FIFO nets, and monogeneous machines, for which some of these problems were already shown to be decidable. These theoretical results can form the foundations to build a tool to verify general FIFO machines based on the analysis of input-bounded machines.
1603.06065
Sungkyun Chang
Sungkyun Chang and Kyogu Lee
A pairwise approach to simultaneous onset/offset detection for singing voice using correntropy
2014 IEEE International Conference on Acoustics, Speech and Signal Processing, 5 pages, 5 figures
null
10.1109/ICASSP.2014.6853672
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel method to search for precise locations of paired note onset and offset in a singing voice signal. In comparison with the existing onset detection algorithms, our approach differs in two key respects. First, we employ Correntropy, a generalized correlation function inspired by Rényi's entropy, as a detection function to capture the instantaneous flux while preserving insensitivity to outliers. Next, a novel peak-picking algorithm is specially designed for this detection function. By calculating the fitness of a pre-defined inverse hyperbolic kernel to a detection function, it is possible to find an onset and its corresponding offset simultaneously. Experimental results show that the proposed method achieves performance significantly better than or comparable to other state-of-the-art techniques for onset detection in singing voice.
[ { "created": "Sat, 19 Mar 2016 08:45:21 GMT", "version": "v1" } ]
2020-10-29
[ [ "Chang", "Sungkyun", "" ], [ "Lee", "Kyogu", "" ] ]
In this paper, we propose a novel method to search for precise locations of paired note onset and offset in a singing voice signal. In comparison with the existing onset detection algorithms, our approach differs in two key respects. First, we employ Correntropy, a generalized correlation function inspired by Rényi's entropy, as a detection function to capture the instantaneous flux while preserving insensitivity to outliers. Next, a novel peak-picking algorithm is specially designed for this detection function. By calculating the fitness of a pre-defined inverse hyperbolic kernel to a detection function, it is possible to find an onset and its corresponding offset simultaneously. Experimental results show that the proposed method achieves performance significantly better than or comparable to other state-of-the-art techniques for onset detection in singing voice.
1807.00686
Ting Yao
Ting Yao and Xue Li
YH Technologies at ActivityNet Challenge 2018
Rank 2 in both Temporal Activity Detection Task & Kinetics Task @ ActivityNet 2018. arXiv admin note: substantial text overlap with arXiv:1710.08011 by other authors
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This notebook paper presents an overview and comparative analysis of our systems designed for the following five tasks in ActivityNet Challenge 2018: temporal action proposals, temporal action localization, dense-captioning events in videos, trimmed action recognition, and spatio-temporal action localization.
[ { "created": "Fri, 29 Jun 2018 07:49:08 GMT", "version": "v1" } ]
2018-07-03
[ [ "Yao", "Ting", "" ], [ "Li", "Xue", "" ] ]
This notebook paper presents an overview and comparative analysis of our systems designed for the following five tasks in ActivityNet Challenge 2018: temporal action proposals, temporal action localization, dense-captioning events in videos, trimmed action recognition, and spatio-temporal action localization.
2309.01940
Lingyue Fu Miss
Lingyue Fu, Huacan Chai, Shuang Luo, Kounianhua Du, Weiming Zhang, Longteng Fan, Jiayi Lei, Renting Rui, Jianghao Lin, Yuchen Fang, Yifan Liu, Jingkuan Wang, Siyuan Qi, Kangning Zhang, Weinan Zhang, Yong Yu
CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models
33pages
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the emergence of Large Language Models (LLMs), there has been a significant improvement in the programming capabilities of models, attracting growing attention from researchers. Evaluating the programming capabilities of LLMs is crucial as it reflects the multifaceted abilities of LLMs, and it has numerous downstream applications. In this paper, we propose CodeApex, a bilingual benchmark dataset focusing on the programming comprehension, code generation, and code correction abilities of LLMs. The programming comprehension task tests LLMs on multiple-choice exam questions covering conceptual understanding, commonsense reasoning, and multi-hop reasoning. The code generation task evaluates LLMs through completing C++ functions based on provided descriptions and prototypes. The code correction task asks LLMs to fix real-world erroneous code segments with different error messages. We evaluate 12 widely used LLMs, including both general-purpose and specialized models. GPT-4 exhibits the best programming capabilities, achieving approximate accuracy of 69%, 54%, and 66% on the three tasks, respectively. Compared to human performance, there is still significant room for improvement in LLM programming. We hope that CodeApex can serve as a reference for evaluating the coding capabilities of LLMs, further promoting their development and growth.
[ { "created": "Tue, 5 Sep 2023 04:12:01 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2023 15:36:11 GMT", "version": "v2" }, { "created": "Sun, 10 Sep 2023 13:32:38 GMT", "version": "v3" }, { "created": "Mon, 11 Mar 2024 08:07:28 GMT", "version": "v4" } ]
2024-03-12
[ [ "Fu", "Lingyue", "" ], [ "Chai", "Huacan", "" ], [ "Luo", "Shuang", "" ], [ "Du", "Kounianhua", "" ], [ "Zhang", "Weiming", "" ], [ "Fan", "Longteng", "" ], [ "Lei", "Jiayi", "" ], [ "Rui", "Renting", "" ], [ "Lin", "Jianghao", "" ], [ "Fang", "Yuchen", "" ], [ "Liu", "Yifan", "" ], [ "Wang", "Jingkuan", "" ], [ "Qi", "Siyuan", "" ], [ "Zhang", "Kangning", "" ], [ "Zhang", "Weinan", "" ], [ "Yu", "Yong", "" ] ]
With the emergence of Large Language Models (LLMs), there has been a significant improvement in the programming capabilities of models, attracting growing attention from researchers. Evaluating the programming capabilities of LLMs is crucial as it reflects the multifaceted abilities of LLMs, and it has numerous downstream applications. In this paper, we propose CodeApex, a bilingual benchmark dataset focusing on the programming comprehension, code generation, and code correction abilities of LLMs. The programming comprehension task tests LLMs on multiple-choice exam questions covering conceptual understanding, commonsense reasoning, and multi-hop reasoning. The code generation task evaluates LLMs through completing C++ functions based on provided descriptions and prototypes. The code correction task asks LLMs to fix real-world erroneous code segments with different error messages. We evaluate 12 widely used LLMs, including both general-purpose and specialized models. GPT-4 exhibits the best programming capabilities, achieving approximate accuracy of 69%, 54%, and 66% on the three tasks, respectively. Compared to human performance, there is still significant room for improvement in LLM programming. We hope that CodeApex can serve as a reference for evaluating the coding capabilities of LLMs, further promoting their development and growth.
1908.07152
Joseph O'Rourke
Joseph O'Rourke
Unfolding Polyhedra
Proceedings 31st Canadian Conference on Computational Geometry, Aug 2019, Edmonton, Alberta. pp. 85-86. The arXiv version updates the proceedings version by citing a recent result that not every polycube can be edge-unzipped
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Starting with the unsolved "D\"urer's problem" of edge-unfolding a convex polyhedron to a net, we specialize and generalize (a) the types of cuts permitted, and (b) the polyhedra shapes, to highlight both advances established and which problems remain open.
[ { "created": "Tue, 13 Aug 2019 13:49:12 GMT", "version": "v1" } ]
2019-08-21
[ [ "O'Rourke", "Joseph", "" ] ]
Starting with the unsolved "D\"urer's problem" of edge-unfolding a convex polyhedron to a net, we specialize and generalize (a) the types of cuts permitted, and (b) the polyhedra shapes, to highlight both advances established and which problems remain open.
2307.02339
Ludwig Mohr
Ludwig Mohr, Ismail Geles and Friedrich Fraundorfer
GAFAR: Graph-Attention Feature-Augmentation for Registration A Fast and Light-weight Point Set Registration Algorithm
Accepted to the 11th European Conference on Mobile Robots (ECMR2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rigid registration of point clouds is a fundamental problem in computer vision with many applications from 3D scene reconstruction to geometry capture and robotics. If a suitable initial registration is available, conventional methods like ICP and its many variants can provide adequate solutions. In the absence of a suitable initialization, however, and in the presence of a high outlier rate or small overlap, the task of rigid registration still presents great challenges. The advent of deep learning in computer vision has brought new drive to research on this topic, since it provides the possibility to learn expressive feature-representations and provide one-shot estimates instead of depending on time-consuming iterations of conventional robust methods. Yet, the rotation and permutation invariant nature of point clouds poses its own challenges to deep learning, resulting in loss of performance and low generalization capability due to sensitivity to outliers and characteristics of 3D scans not present during network training. In this work, we present a novel fast and light-weight network architecture using the attention mechanism to augment point descriptors at inference time to optimally suit the registration task of the specific point clouds it is presented with. Employing a fully-connected graph both within and between point clouds lets the network reason about the importance and reliability of points for registration, making our approach robust to outliers, low overlap and unseen data. We test the performance of our registration algorithm on different registration and generalization tasks and provide information on runtime and resource consumption. The code and trained weights are available at https://github.com/mordecaimalignatius/GAFAR/.
[ { "created": "Wed, 5 Jul 2023 14:50:36 GMT", "version": "v1" } ]
2023-07-06
[ [ "Mohr", "Ludwig", "" ], [ "Geles", "Ismail", "" ], [ "Fraundorfer", "Friedrich", "" ] ]
Rigid registration of point clouds is a fundamental problem in computer vision with many applications from 3D scene reconstruction to geometry capture and robotics. If a suitable initial registration is available, conventional methods like ICP and its many variants can provide adequate solutions. In the absence of a suitable initialization, however, and in the presence of a high outlier rate or small overlap, the task of rigid registration still presents great challenges. The advent of deep learning in computer vision has brought new drive to research on this topic, since it provides the possibility to learn expressive feature-representations and provide one-shot estimates instead of depending on time-consuming iterations of conventional robust methods. Yet, the rotation and permutation invariant nature of point clouds poses its own challenges to deep learning, resulting in loss of performance and low generalization capability due to sensitivity to outliers and characteristics of 3D scans not present during network training. In this work, we present a novel fast and light-weight network architecture using the attention mechanism to augment point descriptors at inference time to optimally suit the registration task of the specific point clouds it is presented with. Employing a fully-connected graph both within and between point clouds lets the network reason about the importance and reliability of points for registration, making our approach robust to outliers, low overlap and unseen data. We test the performance of our registration algorithm on different registration and generalization tasks and provide information on runtime and resource consumption. The code and trained weights are available at https://github.com/mordecaimalignatius/GAFAR/.
1904.10574
Hasan Manzour
Hasan Manzour, Simge K\"u\c{c}\"ukyavuz, Ali Shojaie
Integer Programming for Learning Directed Acyclic Graphs from Continuous Data
null
null
null
null
cs.LG cs.DM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning directed acyclic graphs (DAGs) from data is a challenging task both in theory and in practice, because the number of possible DAGs scales superexponentially with the number of nodes. In this paper, we study the problem of learning an optimal DAG from continuous observational data. We cast this problem in the form of a mathematical programming model which can naturally incorporate a super-structure in order to reduce the set of possible candidate DAGs. We use the penalized negative log-likelihood score function with both $\ell_0$ and $\ell_1$ regularizations and propose a new mixed-integer quadratic optimization (MIQO) model, referred to as a layered network (LN) formulation. The LN formulation is a compact model, which enjoys as tight an optimal continuous relaxation value as the stronger but larger formulations under a mild condition. Computational results indicate that the proposed formulation outperforms existing mathematical formulations and scales better than available algorithms that can solve the same problem with only $\ell_1$ regularization. In particular, the LN formulation clearly outperforms existing methods in terms of computational time needed to find an optimal DAG in the presence of a sparse super-structure.
[ { "created": "Tue, 23 Apr 2019 23:58:40 GMT", "version": "v1" } ]
2019-04-25
[ [ "Manzour", "Hasan", "" ], [ "Küçükyavuz", "Simge", "" ], [ "Shojaie", "Ali", "" ] ]
Learning directed acyclic graphs (DAGs) from data is a challenging task both in theory and in practice, because the number of possible DAGs scales superexponentially with the number of nodes. In this paper, we study the problem of learning an optimal DAG from continuous observational data. We cast this problem in the form of a mathematical programming model which can naturally incorporate a super-structure in order to reduce the set of possible candidate DAGs. We use the penalized negative log-likelihood score function with both $\ell_0$ and $\ell_1$ regularizations and propose a new mixed-integer quadratic optimization (MIQO) model, referred to as a layered network (LN) formulation. The LN formulation is a compact model, which enjoys as tight an optimal continuous relaxation value as the stronger but larger formulations under a mild condition. Computational results indicate that the proposed formulation outperforms existing mathematical formulations and scales better than available algorithms that can solve the same problem with only $\ell_1$ regularization. In particular, the LN formulation clearly outperforms existing methods in terms of computational time needed to find an optimal DAG in the presence of a sparse super-structure.
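The superexponential growth mentioned in the abstract can be made concrete with Robinson's recurrence for the number a(n) of labeled DAGs on n nodes; this is standard combinatorics, not part of the paper's MIQO method:

```python
from math import comb

def count_dags(n):
    """Number of labeled DAGs on n nodes via Robinson's recurrence:
    a(n) = sum_{k=1..n} (-1)^(k+1) * C(n,k) * 2^(k(n-k)) * a(n-k)."""
    a = [1]  # a(0) = 1: the empty graph
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]
```

Already a(4) = 543 and a(5) = 29281, which is why exact search needs the structural pruning (super-structures, tight relaxations) the paper develops.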
1511.07608
Janne V. Kujala
Janne V. Kujala, Tuomas J. Lukka, and Harri Holopainen
Picking a Conveyor Clean by an Autonomously Learning Robot
6 pages, 8 figures
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a research picking prototype related to our company's industrial waste sorting application. The goal of the prototype is to be as autonomous as possible: it both calibrates itself and improves its picking with minimal human intervention. The system learns to pick objects better based on a feedback sensor in its gripper and uses machine learning to choose the best proposal from a random sample produced by simple hard-coded geometric models. We show experimentally the system improving its picking autonomously by measuring the pick success rate as a function of time. We also show how this system can pick a conveyor belt clean, depositing 70 out of 80 objects in a difficult-to-manipulate pile of novel objects into the correct chute. We discuss potential improvements and next steps in this direction.
[ { "created": "Tue, 24 Nov 2015 08:35:49 GMT", "version": "v1" } ]
2015-11-25
[ [ "Kujala", "Janne V.", "" ], [ "Lukka", "Tuomas J.", "" ], [ "Holopainen", "Harri", "" ] ]
We present a research picking prototype related to our company's industrial waste sorting application. The goal of the prototype is to be as autonomous as possible: it both calibrates itself and improves its picking with minimal human intervention. The system learns to pick objects better based on a feedback sensor in its gripper and uses machine learning to choose the best proposal from a random sample produced by simple hard-coded geometric models. We show experimentally the system improving its picking autonomously by measuring the pick success rate as a function of time. We also show how this system can pick a conveyor belt clean, depositing 70 out of 80 objects in a difficult-to-manipulate pile of novel objects into the correct chute. We discuss potential improvements and next steps in this direction.
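The feedback-driven proposal selection sketched in the abstract can be caricatured as a bandit-style loop: choose among grasp-proposal generators, mostly greedily, and update success statistics from the gripper's binary feedback. The learner below is an illustrative epsilon-greedy stand-in, not the paper's actual algorithm:

```python
import random

class PickLearner:
    """Epsilon-greedy choice among grasp-proposal generators,
    updated from a binary pick-success feedback signal."""
    def __init__(self, n_models, epsilon=0.1, seed=0):
        self.succ = [1] * n_models   # Beta(1, 1) prior counts
        self.fail = [1] * n_models
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:          # explore
            return self.rng.randrange(len(self.succ))
        means = [s / (s + f) for s, f in zip(self.succ, self.fail)]
        return max(range(len(means)), key=means.__getitem__)  # exploit

    def update(self, model, success):
        if success:
            self.succ[model] += 1
        else:
            self.fail[model] += 1

# Simulated pick loop: model 1 succeeds 80% of the time, model 0 only 20%.
true_rates = [0.2, 0.8]
learner = PickLearner(n_models=2, seed=42)
env = random.Random(7)
for _ in range(500):
    m = learner.choose()
    learner.update(m, env.random() < true_rates[m])
```

The same measure used for learning (success counts over time) doubles as the autonomy metric the abstract reports, the pick success rate as a function of time.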
1708.04317
Tianyang Wang
Tianyang Wang, Zhengrui Qin, Michelle Zhu
An ELU Network with Total Variation for Image Denoising
10 pages, Accepted by the 24th International Conference on Neural Information Processing (2017)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel convolutional neural network (CNN) for image denoising, which uses the exponential linear unit (ELU) as the activation function. We investigate its suitability by analyzing ELU's connection with the trainable nonlinear reaction diffusion model (TNRD) and residual denoising. On the other hand, batch normalization (BN) is indispensable for residual denoising and convergence purposes. However, directly stacking BN and ELU degrades the performance of the CNN. To mitigate this issue, we design an innovative combination of activation layer and normalization layer to exploit the ELU network, and discuss the corresponding rationale. Moreover, inspired by the fact that minimizing total variation (TV) can be applied to image denoising, we propose a TV-regularized L2 loss to evaluate the training effect during the iterations. Finally, we conduct extensive experiments, showing that our model outperforms some recent and popular approaches on Gaussian denoising with specific or randomized noise levels, for both gray and color images.
[ { "created": "Mon, 14 Aug 2017 20:47:35 GMT", "version": "v1" } ]
2017-08-16
[ [ "Wang", "Tianyang", "" ], [ "Qin", "Zhengrui", "" ], [ "Zhu", "Michelle", "" ] ]
In this paper, we propose a novel convolutional neural network (CNN) for image denoising, which uses the exponential linear unit (ELU) as the activation function. We investigate its suitability by analyzing ELU's connection with the trainable nonlinear reaction diffusion model (TNRD) and residual denoising. On the other hand, batch normalization (BN) is indispensable for residual denoising and convergence purposes. However, directly stacking BN and ELU degrades the performance of the CNN. To mitigate this issue, we design an innovative combination of activation layer and normalization layer to exploit the ELU network, and discuss the corresponding rationale. Moreover, inspired by the fact that minimizing total variation (TV) can be applied to image denoising, we propose a TV-regularized L2 loss to evaluate the training effect during the iterations. Finally, we conduct extensive experiments, showing that our model outperforms some recent and popular approaches on Gaussian denoising with specific or randomized noise levels, for both gray and color images.
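For reference, the two ingredients named in the abstract, the ELU activation and a total-variation penalty on top of an L2 loss, are short enough to state directly. This NumPy sketch is illustrative (the regularization weight `lam` is an assumed placeholder, not the paper's value):

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha * (exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def tv_l2_loss(pred, target, lam=1e-4):
    """L2 data term plus an anisotropic total-variation regularizer,
    i.e. the sum of absolute differences of neighboring pixels."""
    l2 = np.mean((pred - target) ** 2)
    tv = np.abs(np.diff(pred, axis=0)).sum() + np.abs(np.diff(pred, axis=1)).sum()
    return l2 + lam * tv
```

Unlike ReLU, ELU is smooth and can go negative, which is the property the paper connects to residual denoising; the TV term penalizes pixel-to-pixel oscillation, i.e. residual noise.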
2207.11952
Viorica Chifu
Cristina Bianca Pop, Viorica Rozina Chifu, Corina Cordea, Emil Stefan Chifu, Octav Barsan
Forecasting the Short-Term Energy Consumption Using Random Forests and Gradient Boosting
null
2021 20th RoEduNet Conference: Networking in Education and Research (RoEduNet), 2021, pp. 1-6
10.1109/RoEduNet54112.2021.9638276
null
cs.AI cs.DC
http://creativecommons.org/licenses/by/4.0/
This paper comparatively analyzes the performance of the Random Forests and Gradient Boosting algorithms for forecasting energy consumption from historical data. The two algorithms are first applied individually to forecast the energy consumption, and then combined using a Weighted Average Ensemble Method. A comparison of the experimental results shows that the Weighted Average Ensemble Method provides more accurate results than either of the two algorithms applied alone.
[ { "created": "Mon, 25 Jul 2022 07:40:25 GMT", "version": "v1" } ]
2022-07-26
[ [ "Pop", "Cristina Bianca", "" ], [ "Chifu", "Viorica Rozina", "" ], [ "Cordea", "Corina", "" ], [ "Chifu", "Emil Stefan", "" ], [ "Barsan", "Octav", "" ] ]
This paper comparatively analyzes the performance of the Random Forests and Gradient Boosting algorithms for forecasting energy consumption from historical data. The two algorithms are first applied individually to forecast the energy consumption, and then combined using a Weighted Average Ensemble Method. A comparison of the experimental results shows that the Weighted Average Ensemble Method provides more accurate results than either of the two algorithms applied alone.
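The combination step described above, a weighted average of the two models' forecasts, is a one-liner; the grid search below is one illustrative way to pick the blend weight on a validation split, not necessarily the paper's procedure:

```python
import numpy as np

def weighted_average_ensemble(pred_rf, pred_gb, w_rf=0.5):
    """Blend random-forest and gradient-boosting forecasts."""
    return w_rf * pred_rf + (1.0 - w_rf) * pred_gb

def pick_weight(pred_rf, pred_gb, y_val, grid=np.linspace(0, 1, 101)):
    """Choose the blend weight minimizing validation MSE."""
    errs = [np.mean((weighted_average_ensemble(pred_rf, pred_gb, w) - y_val) ** 2)
            for w in grid]
    return grid[int(np.argmin(errs))]

# Toy check: if the two models' errors are equal and opposite,
# the optimal blend weight is 0.5 and the blend is exact.
y = np.array([1.0, 2.0, 3.0])
w_star = pick_weight(y + 1, y - 1, y)
```

When the base models' errors are negatively correlated, as in the toy example, the weighted average cancels them, which is the intuition behind the ensemble's accuracy gain.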
2403.14293
Ponkoj Shill
Ponkoj Chandra Shill, Md. Azizul Hakim, Muhammad Jahanzeb Khan, Bashira Akter Anima
Human Reactions to Incorrect Answers from Robots
6 pages, 6 figures, 1 table, Ro-Man 2024
null
null
null
cs.RO cs.SE
http://creativecommons.org/publicdomain/zero/1.0/
As robots grow more and more integrated into numerous industries, it is critical to comprehend how humans respond to their failures. This paper systematically studies how trust dynamics and system design are affected by human responses to robot failures. The three-stage survey used in the study provides a thorough understanding of human-robot interactions. The first stage collects demographic data and initial levels of trust, while the second stage concentrates on interaction details, such as robot precision and error acknowledgment. In the last phase, participants' perceptions are examined after the encounter, and trust dynamics, forgiveness, and propensity to suggest robotic technologies are evaluated. Results show that participants' trust in robotic technologies increased significantly when robots acknowledged their errors or limitations, and participants' willingness to suggest robots for future activities points to a favorable change in perception, emphasizing the role that direct engagement has in influencing trust dynamics. By providing useful advice for creating more sympathetic, responsive, and reliable robotic systems, the study advances the science of human-robot interaction and promotes a wider adoption of robotic technologies.
[ { "created": "Thu, 21 Mar 2024 11:00:11 GMT", "version": "v1" } ]
2024-07-08
[ [ "Shill", "Ponkoj Chandra", "" ], [ "Hakim", "Md. Azizul", "" ], [ "Khan", "Muhammad Jahanzeb", "" ], [ "Anima", "Bashira Akter", "" ] ]
As robots grow more and more integrated into numerous industries, it is critical to comprehend how humans respond to their failures. This paper systematically studies how trust dynamics and system design are affected by human responses to robot failures. The three-stage survey used in the study provides a thorough understanding of human-robot interactions. The first stage collects demographic data and initial levels of trust, while the second stage concentrates on interaction details, such as robot precision and error acknowledgment. In the last phase, participants' perceptions are examined after the encounter, and trust dynamics, forgiveness, and propensity to suggest robotic technologies are evaluated. Results show that participants' trust in robotic technologies increased significantly when robots acknowledged their errors or limitations, and participants' willingness to suggest robots for future activities points to a favorable change in perception, emphasizing the role that direct engagement has in influencing trust dynamics. By providing useful advice for creating more sympathetic, responsive, and reliable robotic systems, the study advances the science of human-robot interaction and promotes a wider adoption of robotic technologies.
1608.04105
Timothy Molter
Timothy W. Molter and M. Alexander Nugent
Machine Learning with Memristors via Thermodynamic RAM
null
CNNA 2016, 15th International Workshop on Cellular Nanoscale Networks and their Applications, Dresden, Germany, 2016, pp. 1-2
null
null
cs.ET cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermodynamic RAM (kT-RAM) is a neuromemristive co-processor design based on the theory of AHaH Computing and implemented via CMOS and memristors. The co-processor is a 2-D array of differential memristor pairs (synapses) that can be selectively coupled together (neurons) via the digital bit addressing of the underlying CMOS RAM circuitry. The chip is designed to plug into existing digital computers and be interacted with via a simple instruction set. Anti-Hebbian and Hebbian (AHaH) computing forms the theoretical framework from which a nature-inspired type of computing architecture is built where, unlike von Neumann architectures, memory and processor are physically combined for synaptic operations. Through exploitation of AHaH attractor states, memristor-based circuits converge to attractor basins that represent machine learning solutions such as unsupervised feature learning, supervised classification and anomaly detection. Because kT-RAM eliminates the need to shuttle bits back and forth between memory and processor and can operate at very low voltage levels, it can significantly surpass CPU, GPU, and FPGA performance for synaptic integration and learning operations. Here, we present a memristor technology developed for use in kT-RAM, in particular bi-directional incremental adaptation of conductance via short low-voltage 1.0 V, 1.0 microsecond pulses.
[ { "created": "Sun, 14 Aug 2016 14:01:10 GMT", "version": "v1" } ]
2017-04-27
[ [ "Molter", "Timothy W.", "" ], [ "Nugent", "M. Alexander", "" ] ]
Thermodynamic RAM (kT-RAM) is a neuromemristive co-processor design based on the theory of AHaH Computing and implemented via CMOS and memristors. The co-processor is a 2-D array of differential memristor pairs (synapses) that can be selectively coupled together (neurons) via the digital bit addressing of the underlying CMOS RAM circuitry. The chip is designed to plug into existing digital computers and be interacted with via a simple instruction set. Anti-Hebbian and Hebbian (AHaH) computing forms the theoretical framework from which a nature-inspired type of computing architecture is built where, unlike von Neumann architectures, memory and processor are physically combined for synaptic operations. Through exploitation of AHaH attractor states, memristor-based circuits converge to attractor basins that represent machine learning solutions such as unsupervised feature learning, supervised classification and anomaly detection. Because kT-RAM eliminates the need to shuttle bits back and forth between memory and processor and can operate at very low voltage levels, it can significantly surpass CPU, GPU, and FPGA performance for synaptic integration and learning operations. Here, we present a memristor technology developed for use in kT-RAM, in particular bi-directional incremental adaptation of conductance via short low-voltage 1.0 V, 1.0 microsecond pulses.
1903.10740
Juraj Sebej
Stavros Konstantinidis, Mitja Mastnak, Juraj Sebej
Partitioning a Symmetric Rational Relation into Two Asymmetric Rational Relations
19 pages, 4 figures. Submitted to the 24th International Conference on Implementation and Application of Automata, July 22-25, 2019, Kosice, Slovakia
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of partitioning effectively a given symmetric (and irreflexive) rational relation R into two asymmetric rational relations. This problem is motivated by a recent method of embedding an R-independent language into one that is maximal R-independent, where the method requires to use an asymmetric partition of R. We solve the problem when R is realized by a zero-avoiding transducer (with some bound k): if the absolute value of the input-output length discrepancy of a computation exceeds k then the length discrepancy of the computation cannot become zero. This class of relations properly contains all recognizable, all left synchronous, and all right synchronous relations. We leave the asymmetric partition problem open when R is not realized by a zero-avoiding transducer. We also show examples of total word-orderings for which there is a relation R that cannot be partitioned into two asymmetric rational relations such that one of them is decreasing with respect to the given word-ordering.
[ { "created": "Tue, 26 Mar 2019 08:58:25 GMT", "version": "v1" } ]
2019-03-27
[ [ "Konstantinidis", "Stavros", "" ], [ "Mastnak", "Mitja", "" ], [ "Sebej", "Juraj", "" ] ]
We consider the problem of partitioning effectively a given symmetric (and irreflexive) rational relation R into two asymmetric rational relations. This problem is motivated by a recent method of embedding an R-independent language into one that is maximal R-independent, where the method requires to use an asymmetric partition of R. We solve the problem when R is realized by a zero-avoiding transducer (with some bound k): if the absolute value of the input-output length discrepancy of a computation exceeds k then the length discrepancy of the computation cannot become zero. This class of relations properly contains all recognizable, all left synchronous, and all right synchronous relations. We leave the asymmetric partition problem open when R is not realized by a zero-avoiding transducer. We also show examples of total word-orderings for which there is a relation R that cannot be partitioned into two asymmetric rational relations such that one of them is decreasing with respect to the given word-ordering.
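The basic object of the abstract, splitting a symmetric irreflexive relation R into two asymmetric halves, is easy to illustrate when a total order on the elements is available: route each pair by which side is smaller. The sketch below covers only this easy finite, ordered case; the paper's contribution concerns rational relations realized by transducers:

```python
def asymmetric_partition(R):
    """Split a symmetric irreflexive relation (a set of ordered pairs)
    into two asymmetric parts using the natural order on the elements."""
    R1 = {(a, b) for (a, b) in R if a < b}
    R2 = {(a, b) for (a, b) in R if a > b}
    return R1, R2

def is_asymmetric(S):
    """(a, b) in S implies (b, a) not in S."""
    return all((b, a) not in S for (a, b) in S)

# Example: a symmetric relation on words, split by lexicographic order.
R = {("a", "b"), ("b", "a"), ("ab", "ba"), ("ba", "ab")}
R1, R2 = asymmetric_partition(R)
```

For infinite rational relations the analogue of this comparison must itself be realized by a transducer, which is exactly where the zero-avoiding condition enters.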
1006.0379
Jamshid Abouei
J. David Brown, Jamshid Abouei, Konstantinos N. Plataniotis, Subbarayan Pasupathy
Adaptive Demodulation in Differentially Coherent Phase Systems: Design and Performance Analysis
25 pages, 11 Figures, submitted to IEEE Transactions on Communications, June 1, 2010
null
10.1109/TCOMM.2011.051311.100331
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptive Demodulation (ADM) is a newly proposed rate-adaptive system which operates without requiring Channel State Information (CSI) at the transmitter (unlike adaptive modulation) by using adaptive decision region boundaries at the receiver and encoding the data with a rateless code. This paper addresses the design and performance of an ADM scheme for two common differentially coherent schemes: M-DPSK (M-ary Differential Phase Shift Keying) and M-DAPSK (M-ary Differential Amplitude and Phase Shift Keying) operating over AWGN and Rayleigh fading channels. The optimal method for determining the most reliable bits for a given differential detection scheme is presented. In addition, simple (near-optimal) implementations are provided for recovering the most reliable bits from a received pair of differentially encoded symbols for systems using 16-DPSK and 16-DAPSK. The new receivers offer the advantages of a rate-adaptive system, without requiring CSI at the transmitter and a coherent phase reference at the receiver. Bit error analysis for the ADM system in both cases is presented along with numerical results of the spectral efficiency for the rate-adaptive systems operating over a Rayleigh fading channel.
[ { "created": "Wed, 2 Jun 2010 13:56:42 GMT", "version": "v1" } ]
2016-11-17
[ [ "Brown", "J. David", "" ], [ "Abouei", "Jamshid", "" ], [ "Plataniotis", "Konstantinos N.", "" ], [ "Pasupathy", "Subbarayan", "" ] ]
Adaptive Demodulation (ADM) is a newly proposed rate-adaptive system which operates without requiring Channel State Information (CSI) at the transmitter (unlike adaptive modulation) by using adaptive decision region boundaries at the receiver and encoding the data with a rateless code. This paper addresses the design and performance of an ADM scheme for two common differentially coherent schemes: M-DPSK (M-ary Differential Phase Shift Keying) and M-DAPSK (M-ary Differential Amplitude and Phase Shift Keying) operating over AWGN and Rayleigh fading channels. The optimal method for determining the most reliable bits for a given differential detection scheme is presented. In addition, simple (near-optimal) implementations are provided for recovering the most reliable bits from a received pair of differentially encoded symbols for systems using 16-DPSK and 16-DAPSK. The new receivers offer the advantages of a rate-adaptive system, without requiring CSI at the transmitter and a coherent phase reference at the receiver. Bit error analysis for the ADM system in both cases is presented along with numerical results of the spectral efficiency for the rate-adaptive systems operating over a Rayleigh fading channel.
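As background for the differential schemes analyzed above: in M-DPSK the information rides on the phase difference between consecutive symbols, so no coherent phase reference is needed at the receiver. An illustrative noiseless DQPSK (M = 4) round trip, not the paper's 16-ary receivers:

```python
import numpy as np

M = 4  # DQPSK

def dpsk_modulate(symbols, s0=1.0 + 0.0j):
    """Differentially encode M-ary symbols onto phase increments of 2*pi*m/M."""
    out, s = [], s0
    for m in symbols:
        s = s * np.exp(1j * 2 * np.pi * m / M)
        out.append(s)
    return np.array(out)

def dpsk_demodulate(rx, s0=1.0 + 0.0j):
    """Recover symbols from the phase differences of consecutive samples."""
    prev = np.concatenate(([s0], rx[:-1]))
    dphi = np.angle(rx * np.conj(prev))
    return np.round(dphi / (2 * np.pi / M)).astype(int) % M

msg = [0, 1, 3, 2, 1, 0, 2]
decoded = dpsk_demodulate(dpsk_modulate(msg))
```

In ADM, reliability information would then be attached to each such decision based on how close `dphi` falls to a decision-region boundary, which is the part the paper optimizes.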
2203.15767
Yaxin Hu
Yaxin Hu, Yuxiao Qu, Adam Maus, and Bilge Mutlu
Polite or Direct? Conversation Design of a Smart Display for Older Adults Based on Politeness Theory
To be published in 2022 CHI Conference on Human Factors in Computing Systems (CHI '22)
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversational interfaces increasingly rely on human-like dialogue to offer a natural experience. However, relying on dialogue involving multiple exchanges for even simple tasks can overburden users, particularly older adults. In this paper, we explored the use of politeness theory in conversation design to alleviate this burden and improve user experience. To achieve this goal, we categorized the voice interaction offered by a smart display application designed for older adults into seven major speech acts: request, suggest, instruct, comment, welcome, farewell, and repair. We identified face needs for each speech act, applied politeness strategies that best address these needs, and tested the ability of these strategies to shape the perceived politeness of a voice assistant in an online study ($n=64$). Based on the findings of this study, we designed direct and polite versions of the system and conducted a field study ($n=15$) in which participants used each of the versions for five days at their homes. Based on five factors merged from our qualitative findings, we identified four distinctive user personas (socially oriented follower, socially oriented leader, utility oriented follower, and utility oriented leader) that can inform personalized design of smart displays.
[ { "created": "Tue, 29 Mar 2022 17:26:08 GMT", "version": "v1" } ]
2022-03-30
[ [ "Hu", "Yaxin", "" ], [ "Qu", "Yuxiao", "" ], [ "Maus", "Adam", "" ], [ "Mutlu", "Bilge", "" ] ]
Conversational interfaces increasingly rely on human-like dialogue to offer a natural experience. However, relying on dialogue involving multiple exchanges for even simple tasks can overburden users, particularly older adults. In this paper, we explored the use of politeness theory in conversation design to alleviate this burden and improve user experience. To achieve this goal, we categorized the voice interaction offered by a smart display application designed for older adults into seven major speech acts: request, suggest, instruct, comment, welcome, farewell, and repair. We identified face needs for each speech act, applied politeness strategies that best address these needs, and tested the ability of these strategies to shape the perceived politeness of a voice assistant in an online study ($n=64$). Based on the findings of this study, we designed direct and polite versions of the system and conducted a field study ($n=15$) in which participants used each of the versions for five days at their homes. Based on five factors merged from our qualitative findings, we identified four distinctive user personas (socially oriented follower, socially oriented leader, utility oriented follower, and utility oriented leader) that can inform personalized design of smart displays.
2103.07576
Annemarie van der Marel
Annemarie van der Marel (1), Claire L. O'Connell (1), Sanjay Prasher (1), Chelsea Carminito (1), Xavier Francis (1), Elizabeth A. Hobson (1) ((1) Department of Biological Sciences, University of Cincinnati, Cincinnati, OH, USA)
A comparison of low-cost behavioral observation software applications and recommendations for use
23 pages
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the field of animal behavior and behavioral ecology, many standardized methods to observe animal behavior have been established over the last few decades. While the protocols have remained similar, behavioral researchers can take advantage of technological advancements to enter observations directly onto a handheld computer (phone, tablet, etc.), saving time and potentially increasing fidelity of recordings. However, we now have the choice between many different platforms for recording behavioral observations. Our challenge is choosing the most appropriate platform that fits a particular study question, research design, budget, and desired amount of preparatory time. Here, we review six low-cost software applications for handheld computers that are available for real-time entry of behavioral observations: Animal Behaviour Pro, Animal Observer, BORIS, CyberTracker, Prim8, and ZooMonitor. We discuss the preliminary decisions that have to be made about the study design, and we assess the six applications by providing the advantages and disadvantages of each platform and an overall application comparison. Our goal is to help researchers make calculated decisions about what behavioral observation platform is best for their study system and question.
[ { "created": "Fri, 12 Mar 2021 23:51:30 GMT", "version": "v1" }, { "created": "Fri, 16 Apr 2021 01:34:37 GMT", "version": "v2" }, { "created": "Thu, 16 Sep 2021 15:17:02 GMT", "version": "v3" }, { "created": "Mon, 25 Oct 2021 15:11:16 GMT", "version": "v4" } ]
2021-10-26
[ [ "van der Marel", "Annemarie", "" ], [ "O'Connell", "Claire L.", "" ], [ "Prasher", "Sanjay", "" ], [ "Carminito", "Chelsea", "" ], [ "Francis", "Xavier", "" ], [ "Hobson", "Elizabeth A.", "" ] ]
In the field of animal behavior and behavioral ecology, many standardized methods to observe animal behavior have been established over the last few decades. While the protocols have remained similar, behavioral researchers can take advantage of technological advancements to enter observations directly onto a handheld computer (phone, tablet, etc.), saving time and potentially increasing fidelity of recordings. However, we now have the choice between many different platforms for recording behavioral observations. Our challenge is choosing the most appropriate platform that fits a particular study question, research design, budget, and desired amount of preparatory time. Here, we review six low-cost software applications for handheld computers that are available for real-time entry of behavioral observations: Animal Behaviour Pro, Animal Observer, BORIS, CyberTracker, Prim8, and ZooMonitor. We discuss the preliminary decisions that have to be made about the study design, and we assess the six applications by providing the advantages and disadvantages of each platform and an overall application comparison. Our goal is to help researchers make calculated decisions about what behavioral observation platform is best for their study system and question.
1801.03106
Wolfgang Orthuber
Wolfgang Orthuber (Kiel University)
Why informatics and general science need a conjoint basic definition of information
15 pages, 4 figures
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
First, the basic definition of information as a selection from a set of possibilities (a domain) is recalled. This also applies to digital information. The bits of digital information are parts of number sequences which represent a selection from a domain. For faultless communication, the sender and receiver of information must share the same definition of the domain (e.g., of a language's vocabulary). Until now, the definition of the domain and of its elements has been derived from context and knowledge. The internet provides an additional important possibility: a link to a conjoint uniform definition of the domain at a unique location on the internet. The associated basic information structure is called "Domain Vector" (DV) and has the structure "UL (of the domain definition) plus sequence of numbers". The "UL" is not only a "Uniform Locator" of the domain definition; it also identifies a certain kind of information for later comparison and search. It can be a Uniform Resource Locator (URL) or an abbreviated equivalent, e.g. a hierarchic numeric pointer or a short local pointer to a table with global internet pointers. The DV structure can be used as a general carrier of information which is language-independent and more precise than language. A domain which contains DVs is called a "Domain Space" (DS) and is defined as a metric space. This allows similarity search according to user-defined criteria, so that any kind of definable information can be made comparable and searchable according to user-selected (relevant) and objectifiable (globally uniform) criteria. DS definitions can be reused in new DS definitions. Their elements, the DVs, are automatically globally uniformly identified and defined. Obviously such a conjoint definition of comparable information has great potential. It can also avoid interoperability problems and redundant programming and thus save substantial costs.
[ { "created": "Mon, 8 Jan 2018 17:25:00 GMT", "version": "v1" } ]
2018-01-11
[ [ "Orthuber", "Wolfgang", "", "Kiel University" ] ]
First, the basic definition of information as a selection from a set of possibilities (a domain) is recalled. This also applies to digital information. The bits of digital information are parts of number sequences which represent a selection from a domain. For faultless communication, the sender and receiver of information must share the same definition of the domain (e.g., of a language's vocabulary). Until now, the definition of the domain and of its elements has been derived from context and knowledge. The internet provides an additional important possibility: a link to a conjoint uniform definition of the domain at a unique location on the internet. The associated basic information structure is called "Domain Vector" (DV) and has the structure "UL (of the domain definition) plus sequence of numbers". The "UL" is not only a "Uniform Locator" of the domain definition; it also identifies a certain kind of information for later comparison and search. It can be a Uniform Resource Locator (URL) or an abbreviated equivalent, e.g. a hierarchic numeric pointer or a short local pointer to a table with global internet pointers. The DV structure can be used as a general carrier of information which is language-independent and more precise than language. A domain which contains DVs is called a "Domain Space" (DS) and is defined as a metric space. This allows similarity search according to user-defined criteria, so that any kind of definable information can be made comparable and searchable according to user-selected (relevant) and objectifiable (globally uniform) criteria. DS definitions can be reused in new DS definitions. Their elements, the DVs, are automatically globally uniformly identified and defined. Obviously such a conjoint definition of comparable information has great potential. It can also avoid interoperability problems and redundant programming and thus save substantial costs.
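The "Domain Vector" defined in the abstract, a locator of the domain definition plus a sequence of numbers compared within a metric space, can be sketched as a tiny data structure. The class name, example URL, and the Euclidean choice of metric below are illustrative assumptions:

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class DomainVector:
    """A 'UL' locating the domain definition, plus a number sequence."""
    ul: str
    values: tuple

def dv_distance(a, b):
    """Metric comparison is only defined within one Domain Space,
    i.e. between DVs that point to the same domain definition."""
    if a.ul != b.ul:
        raise ValueError("DVs from different domains are not comparable")
    return dist(a.values, b.values)

# Two DVs in the same (illustrative) domain space.
p = DomainVector("https://example.org/defs/measurement-v1", (3.0, 4.0))
q = DomainVector("https://example.org/defs/measurement-v1", (0.0, 0.0))
d = dv_distance(p, q)
```

The guard clause encodes the text's central point: comparability presupposes a shared domain definition, here enforced by the shared locator.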
1806.06180
Ganapati Bhat
Ganapati Bhat, Suat Gumussoy, Umit Y. Ogras
Power-Temperature Stability and Safety Analysis for Multiprocessor Systems
Published in ACM TECS
ACM Trans. Embed. Comput. Syst. 16, 5s, Article 145 (September 2017), 19 pages
10.1145/3126567
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern multiprocessor system-on-chips (SoCs) integrate multiple heterogeneous cores to achieve high energy efficiency. The power consumption of each core contributes to an increase in the temperature across the chip floorplan. In turn, higher temperature increases the leakage power exponentially, and leads to a positive feedback with nonlinear dynamics. This paper presents a power-temperature stability and safety analysis technique for multiprocessor systems. This analysis reveals the conditions under which the power-temperature trajectory converges to a stable fixed point. We also present a simple formula to compute the stable fixed point and maximum thermally-safe power consumption at runtime. Hardware measurements on a state-of-the-art mobile processor show that our analytical formulation can predict the stable fixed point with an average error of 2.6%. Hence, our approach can be used at runtime to ensure thermally safe operation and guard against thermal threats.
[ { "created": "Sat, 16 Jun 2018 04:43:25 GMT", "version": "v1" } ]
2018-06-19
[ [ "Bhat", "Ganapati", "" ], [ "Gumussoy", "Suat", "" ], [ "Ogras", "Umit Y.", "" ] ]
Modern multiprocessor system-on-chips (SoCs) integrate multiple heterogeneous cores to achieve high energy efficiency. The power consumption of each core contributes to an increase in the temperature across the chip floorplan. In turn, higher temperature increases the leakage power exponentially, and leads to a positive feedback with nonlinear dynamics. This paper presents a power-temperature stability and safety analysis technique for multiprocessor systems. This analysis reveals the conditions under which the power-temperature trajectory converges to a stable fixed point. We also present a simple formula to compute the stable fixed point and maximum thermally-safe power consumption at runtime. Hardware measurements on a state-of-the-art mobile processor show that our analytical formulation can predict the stable fixed point with an average error of 2.6%. Hence, our approach can be used at runtime to ensure thermally safe operation and guard against thermal threats.
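The positive feedback loop described above, power heating the chip while temperature raises leakage exponentially, can be written as a one-dimensional fixed-point iteration T = T_amb + R_th * P(T). The constants below are made up for illustration; the paper derives the actual stability conditions and closed-form fixed point for the multiprocessor case:

```python
import math

T_AMB = 25.0      # ambient temperature, deg C (illustrative)
R_TH = 0.5        # thermal resistance, K/W (illustrative)
P_DYN = 2.0       # dynamic power, W (illustrative)
A, B = 0.1, 0.05  # leakage model: P_leak = A * exp(B * (T - T_AMB))

def total_power(T):
    """Dynamic power plus exponentially temperature-dependent leakage."""
    return P_DYN + A * math.exp(B * (T - T_AMB))

def thermal_fixed_point(T0=T_AMB, iters=200):
    """Iterate T <- T_amb + R_th * P(T); this converges to a stable
    fixed point when the loop gain R_th * A * B * exp(B*(T - T_amb))
    stays below 1, i.e. when the feedback is weak enough."""
    T = T0
    for _ in range(iters):
        T = T_AMB + R_TH * total_power(T)
    return T

T_star = thermal_fixed_point()
```

With a stronger leakage coefficient the loop gain exceeds 1 and the iteration diverges, which is the thermally unsafe regime the paper's runtime check guards against.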
1606.07518
EPTCS
Alexandru Baltag (Institute for logic, Language and Computation. University of Amsterdam), Nina Gierasimczuk (Institute for Logic, Language and Computation. University of Amsterdam), Sonja Smets (Institute for Logic, Language and Computation. University of Amsterdam)
On the Solvability of Inductive Problems: A Study in Epistemic Topology
In Proceedings TARK 2015, arXiv:1606.07295
EPTCS 215, 2016, pp. 81-98
10.4204/EPTCS.215.7
null
cs.LO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is "universal", i.e., that every solvable problem is solvable by AGM conditioning.
[ { "created": "Fri, 24 Jun 2016 00:30:59 GMT", "version": "v1" } ]
2016-06-27
[ [ "Baltag", "Alexandru", "", "Institute for logic, Language and Computation.\n University of Amsterdam" ], [ "Gierasimczuk", "Nina", "", "Institute for Logic, Language\n and Computation. University of Amsterdam" ], [ "Smets", "Sonja", "", "Institute for Logic,\n Language and Computation. University of Amsterdam" ] ]
We investigate the issues of inductive problem-solving and learning by doxastic agents. We provide topological characterizations of solvability and learnability, and we use them to prove that AGM-style belief revision is "universal", i.e., that every solvable problem is solvable by AGM conditioning.
1211.6631
Onur Ozdemir
Onur Ozdemir, Pramod K. Varshney, Wei Su, and Andrew L. Drozd
Asymptotic Properties of Likelihood Based Linear Modulation Classification Systems
12 pages double-column, 6 figures, submitted to IEEE Transactions on Wireless Communications
null
null
null
cs.IT math.IT stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of linear modulation classification using likelihood-based methods is considered. Asymptotic properties of the most commonly used classifiers in the literature are derived. These classifiers are based on the hybrid likelihood ratio test (HLRT) and the average likelihood ratio test (ALRT), respectively. Both a single-sensor setting and a multi-sensor setting that uses a distributed decision fusion approach are analyzed. For a modulation classification system using a single sensor, it is shown that HLRT achieves asymptotically vanishing probability of error (Pe) whereas the same result cannot be proven for ALRT. In a multi-sensor setting using soft decision fusion, conditions are derived under which Pe vanishes asymptotically. Furthermore, the asymptotic analysis of the fusion rule that assumes independent sensor decisions is carried out.
[ { "created": "Wed, 28 Nov 2012 15:22:29 GMT", "version": "v1" } ]
2012-11-29
[ [ "Ozdemir", "Onur", "" ], [ "Varshney", "Pramod K.", "" ], [ "Su", "Wei", "" ], [ "Drozd", "Andrew L.", "" ] ]
The problem of linear modulation classification using likelihood-based methods is considered. Asymptotic properties of the most commonly used classifiers in the literature are derived. These classifiers are based on the hybrid likelihood ratio test (HLRT) and the average likelihood ratio test (ALRT), respectively. Both a single-sensor setting and a multi-sensor setting that uses a distributed decision fusion approach are analyzed. For a modulation classification system using a single sensor, it is shown that HLRT achieves asymptotically vanishing probability of error (Pe) whereas the same result cannot be proven for ALRT. In a multi-sensor setting using soft decision fusion, conditions are derived under which Pe vanishes asymptotically. Furthermore, the asymptotic analysis of the fusion rule that assumes independent sensor decisions is carried out.
2306.17498
Hartmut Koenitz
Hartmut Koenitz, Jonathan Barbara, Lissa Holloway-Attaway, Frank Nack, Mirjam Palosaari Eladhari, Agnes Bakk
INDCOR White Paper 0: Interactive Digital Narratives (IDNs) -- A Solution to the Challenge of Representing Complex Issues
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Citizens everywhere have the right to be well-informed. Yet, with the high complexity of many contemporary issues, such as global warming and migration, our means of information need to mutually adapt. Narrative has always been at the core of information exchange - regardless of whether our ancestors sat around a fire and exchanged stories, or whether we read an article in a newspaper, or watched a TV news broadcast. Yet, the narrative formats of the newspaper article, the news broadcast, the documentary, and the textbook are severely limited when it comes to representing highly complex topics which may include several competing - and sometimes equally valid - perspectives. Such complexity contributes to a high level of uncertainty due to a multitude of factors affecting an outcome. Fortunately, with Interactive Digital Narrative (IDN), there is a novel media format which can address these challenges. IDNs can present several different perspectives in the same work, and give audiences the ability to explore them at will through decision-making. After experiencing the consequences of their decisions, the audience can replay to revisit and change these decisions in order to consider their alternatives. IDN works enable deep personalization and the inclusion of live data. These capabilities make IDN a 21st century democratic medium, empowering citizens through the understanding of complex issues. In this white paper, we discuss the challenge of representing complexity, describe the advantages offered by IDNs, and point out opportunities and strategies for deployment.
[ { "created": "Fri, 30 Jun 2023 09:16:59 GMT", "version": "v1" } ]
2023-07-03
[ [ "Koenitz", "Hartmut", "" ], [ "Barbara", "Jonathan", "" ], [ "Holloway-Attaway", "Lissa", "" ], [ "Nack", "Frank", "" ], [ "Eladhari", "Mirjam Palosaari", "" ], [ "Bakk", "Agnes", "" ] ]
Citizens everywhere have the right to be well-informed. Yet, with the high complexity of many contemporary issues, such as global warming and migration, our means of information need to mutually adapt. Narrative has always been at the core of information exchange - regardless of whether our ancestors sat around a fire and exchanged stories, or whether we read an article in a newspaper, or watched a TV news broadcast. Yet, the narrative formats of the newspaper article, the news broadcast, the documentary, and the textbook are severely limited when it comes to representing highly complex topics which may include several competing - and sometimes equally valid - perspectives. Such complexity contributes to a high level of uncertainty due to a multitude of factors affecting an outcome. Fortunately, with Interactive Digital Narrative (IDN), there is a novel media format which can address these challenges. IDNs can present several different perspectives in the same work, and give audiences the ability to explore them at will through decision-making. After experiencing the consequences of their decisions, the audience can replay to revisit and change these decisions in order to consider their alternatives. IDN works enable deep personalization and the inclusion of live data. These capabilities make IDN a 21st century democratic medium, empowering citizens through the understanding of complex issues. In this white paper, we discuss the challenge of representing complexity, describe the advantages offered by IDNs, and point out opportunities and strategies for deployment.
1703.09651
Divya Shyam Singh
Divya Shyam Singh, G.B.L. Chowdary, D. Roy Mahapatra
Structural Damage Identification Using Artificial Neural Network and Synthetic data
6 pages,6 figures, ISSS conference
null
null
null
cs.LG cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a real-time vibration-based identification technique using measured frequency response functions (FRFs) under random vibration loading. Artificial Neural Networks (ANNs) are trained to map damage fingerprints to damage characteristic parameters. Principal component analysis (PCA) was used to tackle the problem of high dimensionality and high noise in the data, which is common for industrial structures. The present study considers crack, rivet-hole expansion, and redundant uniform mass as damages on the structure. Frequency response function data, after being reduced in size using PCA, is fed to individual neural networks to localize and predict the severity of damage on the structure. The system of ANNs is trained with both numerical and experimental model data to make the system reliable and robust. The methodology is applied to a numerical model of a stiffened panel structure, where damages are confined close to the stiffener. The results showed that, in all the cases considered, it is possible to localize and predict the severity of the damage occurrence with very good accuracy and reliability.
[ { "created": "Mon, 27 Mar 2017 08:54:09 GMT", "version": "v1" } ]
2017-03-29
[ [ "Singh", "Divya Shyam", "" ], [ "Chowdary", "G. B. L.", "" ], [ "Mahapatra", "D. Roy", "" ] ]
This paper presents a real-time vibration-based identification technique using measured frequency response functions (FRFs) under random vibration loading. Artificial Neural Networks (ANNs) are trained to map damage fingerprints to damage characteristic parameters. Principal component analysis (PCA) was used to tackle the problem of high dimensionality and high noise in the data, which is common for industrial structures. The present study considers crack, rivet-hole expansion, and redundant uniform mass as damages on the structure. Frequency response function data, after being reduced in size using PCA, is fed to individual neural networks to localize and predict the severity of damage on the structure. The system of ANNs is trained with both numerical and experimental model data to make the system reliable and robust. The methodology is applied to a numerical model of a stiffened panel structure, where damages are confined close to the stiffener. The results showed that, in all the cases considered, it is possible to localize and predict the severity of the damage occurrence with very good accuracy and reliability.
2111.00780
Lantao Yu
Lantao Yu, Jiaming Song, Yang Song, Stefano Ermon
Pseudo-Spherical Contrastive Divergence
NeurIPS 2021
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy-based models (EBMs) offer flexible distribution parametrization. However, due to the intractable partition function, they are typically trained via contrastive divergence for maximum likelihood estimation. In this paper, we propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of EBMs. PS-CD is derived from the maximization of a family of strictly proper homogeneous scoring rules, which avoids the computation of the intractable partition function and provides a generalized family of learning objectives that include contrastive divergence as a special case. Moreover, PS-CD allows us to flexibly choose various learning objectives to train EBMs without additional computational cost or variational minimax optimization. Theoretical analysis on the proposed method and extensive experiments on both synthetic data and commonly used image datasets demonstrate the effectiveness and modeling flexibility of PS-CD, as well as its robustness to data contamination, thus showing its superiority over maximum likelihood and $f$-EBMs.
[ { "created": "Mon, 1 Nov 2021 09:17:15 GMT", "version": "v1" } ]
2021-11-02
[ [ "Yu", "Lantao", "" ], [ "Song", "Jiaming", "" ], [ "Song", "Yang", "" ], [ "Ermon", "Stefano", "" ] ]
Energy-based models (EBMs) offer flexible distribution parametrization. However, due to the intractable partition function, they are typically trained via contrastive divergence for maximum likelihood estimation. In this paper, we propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of EBMs. PS-CD is derived from the maximization of a family of strictly proper homogeneous scoring rules, which avoids the computation of the intractable partition function and provides a generalized family of learning objectives that include contrastive divergence as a special case. Moreover, PS-CD allows us to flexibly choose various learning objectives to train EBMs without additional computational cost or variational minimax optimization. Theoretical analysis on the proposed method and extensive experiments on both synthetic data and commonly used image datasets demonstrate the effectiveness and modeling flexibility of PS-CD, as well as its robustness to data contamination, thus showing its superiority over maximum likelihood and $f$-EBMs.
1805.12282
Huda Khayrallah
Huda Khayrallah, Philipp Koehn
On the Impact of Various Types of Noise on Neural Machine Translation
Please cite as: @InProceedings{khayrallah-koehn:2018:WNMT, author = {Khayrallah, Huda and Koehn, Philipp}, title = {On the Impact of Various Types of Noise on Neural Machine Translation}, booktitle = {Proceedings of the Second Workshop on Neural Machine Translation and Generation}, year = {2018}, address = {Melbourne}, publisher = {Association for Computational Linguistics} }
null
10.18653/v1/W18-2709
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine how various types of noise in the parallel training data impact the quality of neural machine translation systems. We create five types of artificial noise and analyze how they degrade performance in neural and statistical machine translation. We find that neural models are generally more harmed by noise than statistical models. For one especially egregious type of noise they learn to just copy the input sentence.
[ { "created": "Thu, 31 May 2018 01:33:19 GMT", "version": "v1" } ]
2020-09-14
[ [ "Khayrallah", "Huda", "" ], [ "Koehn", "Philipp", "" ] ]
We examine how various types of noise in the parallel training data impact the quality of neural machine translation systems. We create five types of artificial noise and analyze how they degrade performance in neural and statistical machine translation. We find that neural models are generally more harmed by noise than statistical models. For one especially egregious type of noise they learn to just copy the input sentence.
2303.12766
Xin Lai
Xin Lai, Yukang Chen, Fanbin Lu, Jianhui Liu, Jiaya Jia
Spherical Transformer for LiDAR-based 3D Recognition
Accepted to CVPR 2023. Code is available at https://github.com/dvlab-research/SphereFormer.git
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
LiDAR-based 3D point cloud recognition has benefited various applications. Without specially considering the LiDAR point distribution, most current methods suffer from information disconnection and limited receptive field, especially for the sparse distant points. In this work, we study the varying-sparsity distribution of LiDAR points and present SphereFormer to directly aggregate information from dense close points to the sparse distant ones. We design radial window self-attention that partitions the space into multiple non-overlapping narrow and long windows. It overcomes the disconnection issue and enlarges the receptive field smoothly and dramatically, which significantly boosts the performance of sparse distant points. Moreover, to fit the narrow and long windows, we propose exponential splitting to yield fine-grained position encoding and dynamic feature selection to increase model representation ability. Notably, our method ranks 1st on both nuScenes and SemanticKITTI semantic segmentation benchmarks with 81.9% and 74.8% mIoU, respectively. Also, we achieve the 3rd place on nuScenes object detection benchmark with 72.8% NDS and 68.5% mAP. Code is available at https://github.com/dvlab-research/SphereFormer.git.
[ { "created": "Wed, 22 Mar 2023 17:30:14 GMT", "version": "v1" } ]
2023-03-23
[ [ "Lai", "Xin", "" ], [ "Chen", "Yukang", "" ], [ "Lu", "Fanbin", "" ], [ "Liu", "Jianhui", "" ], [ "Jia", "Jiaya", "" ] ]
LiDAR-based 3D point cloud recognition has benefited various applications. Without specially considering the LiDAR point distribution, most current methods suffer from information disconnection and limited receptive field, especially for the sparse distant points. In this work, we study the varying-sparsity distribution of LiDAR points and present SphereFormer to directly aggregate information from dense close points to the sparse distant ones. We design radial window self-attention that partitions the space into multiple non-overlapping narrow and long windows. It overcomes the disconnection issue and enlarges the receptive field smoothly and dramatically, which significantly boosts the performance of sparse distant points. Moreover, to fit the narrow and long windows, we propose exponential splitting to yield fine-grained position encoding and dynamic feature selection to increase model representation ability. Notably, our method ranks 1st on both nuScenes and SemanticKITTI semantic segmentation benchmarks with 81.9% and 74.8% mIoU, respectively. Also, we achieve the 3rd place on nuScenes object detection benchmark with 72.8% NDS and 68.5% mAP. Code is available at https://github.com/dvlab-research/SphereFormer.git.
2303.03398
Ronald Caplan
Ronald M. Caplan, Miko M. Stulajter, Jon A. Linker
Acceleration of a production Solar MHD code with Fortran standard parallelism: From OpenACC to `do concurrent'
10 pages, 2 tables, 4 figures, accepted to the AsHES workshop at IPDPS 2023
null
null
null
cs.MS astro-ph.IM cs.DC cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is growing interest in using standard language constructs for accelerated computing, avoiding the need for (often vendor-specific) external APIs. These constructs hold the potential to be more portable and much more `future-proof'. For Fortran codes, the current focus is on the {\tt do concurrent} (DC) loop. While there have been some successful examples of GPU-acceleration using DC for benchmark and/or small codes, its widespread adoption will require demonstrations of its use in full-size applications. Here, we look at the current capabilities and performance of using DC in a production application called Magnetohydrodynamic Algorithm outside a Sphere (MAS). MAS is a state-of-the-art model for studying coronal and heliospheric dynamics, is over 70,000 lines long, and has previously been ported to GPUs using MPI+OpenACC. We attempt to eliminate as many of its OpenACC directives as possible in favor of DC. We show that using the NVIDIA {\tt nvfortran} compiler's Fortran 202X preview implementation, unified managed memory, and modified MPI launch methods, we can achieve GPU acceleration across multiple GPUs without using a single OpenACC directive. However, doing so results in a slowdown between 1.25x and 3x. We discuss what future improvements are needed to avoid this loss, and show how we can still retain close
[ { "created": "Sun, 5 Mar 2023 21:37:34 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2023 20:18:20 GMT", "version": "v2" } ]
2023-03-10
[ [ "Caplan", "Ronald M.", "" ], [ "Stulajter", "Miko M.", "" ], [ "Linker", "Jon A.", "" ] ]
There is growing interest in using standard language constructs for accelerated computing, avoiding the need for (often vendor-specific) external APIs. These constructs hold the potential to be more portable and much more `future-proof'. For Fortran codes, the current focus is on the {\tt do concurrent} (DC) loop. While there have been some successful examples of GPU-acceleration using DC for benchmark and/or small codes, its widespread adoption will require demonstrations of its use in full-size applications. Here, we look at the current capabilities and performance of using DC in a production application called Magnetohydrodynamic Algorithm outside a Sphere (MAS). MAS is a state-of-the-art model for studying coronal and heliospheric dynamics, is over 70,000 lines long, and has previously been ported to GPUs using MPI+OpenACC. We attempt to eliminate as many of its OpenACC directives as possible in favor of DC. We show that using the NVIDIA {\tt nvfortran} compiler's Fortran 202X preview implementation, unified managed memory, and modified MPI launch methods, we can achieve GPU acceleration across multiple GPUs without using a single OpenACC directive. However, doing so results in a slowdown between 1.25x and 3x. We discuss what future improvements are needed to avoid this loss, and show how we can still retain close
1202.3683
Rajendra Shinde
Debojyoti Dutta, Michael Kapralov, Ian Post, Rajendra Shinde
Optimal bandwidth-aware VM allocation for Infrastructure-as-a-Service
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrastructure-as-a-Service (IaaS) providers need to offer richer services to be competitive while optimizing their resource usage to keep costs down. Richer service offerings include new resource request models involving bandwidth guarantees between virtual machines (VMs). Thus we consider the following problem: given a VM request graph (where nodes are VMs and edges represent virtual network connectivity between the VMs) and a real data center topology, find an allocation of VMs to servers that satisfies the bandwidth guarantees for every virtual network edge---which maps to a path in the physical network---and minimizes congestion of the network. Previous work has shown that for arbitrary networks and requests, finding the optimal embedding satisfying bandwidth requests is $\mathcal{NP}$-hard. However, in most data center architectures, the routing protocols employed are based on a spanning tree of the physical network. In this paper, we prove that the problem remains $\mathcal{NP}$-hard even when the physical network topology is restricted to be a tree, and the request graph topology is also restricted. We also present a dynamic programming algorithm for computing the optimal embedding in a tree network which runs in time $O(3^kn)$, where $n$ is the number of nodes in the physical topology and $k$ is the size of the request graph, which is well suited for practical requests which have small $k$. Such requests form a large class of web-service and enterprise workloads. Also, if we restrict the requests topology to a clique (all VMs connected to a virtual switch with uniform bandwidth requirements), we show that the dynamic programming algorithm can be modified to output the minimum congestion embedding in time $O(k^2n)$.
[ { "created": "Thu, 16 Feb 2012 20:03:44 GMT", "version": "v1" } ]
2012-02-17
[ [ "Dutta", "Debojyoti", "" ], [ "Kapralov", "Michael", "" ], [ "Post", "Ian", "" ], [ "Shinde", "Rajendra", "" ] ]
Infrastructure-as-a-Service (IaaS) providers need to offer richer services to be competitive while optimizing their resource usage to keep costs down. Richer service offerings include new resource request models involving bandwidth guarantees between virtual machines (VMs). Thus we consider the following problem: given a VM request graph (where nodes are VMs and edges represent virtual network connectivity between the VMs) and a real data center topology, find an allocation of VMs to servers that satisfies the bandwidth guarantees for every virtual network edge---which maps to a path in the physical network---and minimizes congestion of the network. Previous work has shown that for arbitrary networks and requests, finding the optimal embedding satisfying bandwidth requests is $\mathcal{NP}$-hard. However, in most data center architectures, the routing protocols employed are based on a spanning tree of the physical network. In this paper, we prove that the problem remains $\mathcal{NP}$-hard even when the physical network topology is restricted to be a tree, and the request graph topology is also restricted. We also present a dynamic programming algorithm for computing the optimal embedding in a tree network which runs in time $O(3^kn)$, where $n$ is the number of nodes in the physical topology and $k$ is the size of the request graph, which is well suited for practical requests which have small $k$. Such requests form a large class of web-service and enterprise workloads. Also, if we restrict the requests topology to a clique (all VMs connected to a virtual switch with uniform bandwidth requirements), we show that the dynamic programming algorithm can be modified to output the minimum congestion embedding in time $O(k^2n)$.
2303.10062
Qiaojie Zheng
Qiaojie Zheng, Jiucai Zhang, Amy Zhang, Xiaoli Zhang
Confidence-aware 3D Gaze Estimation and Evaluation Metric
9 pages 12 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep learning appearance-based 3D gaze estimation is gaining popularity due to its minimal hardware requirements and constraint-free operation. Unreliable and overconfident inferences, however, still limit the adoption of this gaze estimation method. To address the unreliable and overconfident issues, we introduce a confidence-aware model that predicts uncertainties together with gaze angle estimations. We also introduce a novel effectiveness evaluation method based on the causality between eye feature degradation and the rise in inference uncertainty to assess the uncertainty estimation. Our confidence-aware model demonstrates reliable uncertainty estimations while providing angular estimation accuracies on par with the state-of-the-art. Compared with the existing statistical uncertainty-angular-error evaluation metric, the proposed effectiveness evaluation approach can more effectively judge inferred uncertainties' performance at each prediction.
[ { "created": "Fri, 17 Mar 2023 15:44:44 GMT", "version": "v1" } ]
2023-03-20
[ [ "Zheng", "Qiaojie", "" ], [ "Zhang", "Jiucai", "" ], [ "Zhang", "Amy", "" ], [ "Zhang", "Xiaoli", "" ] ]
Deep learning appearance-based 3D gaze estimation is gaining popularity due to its minimal hardware requirements and constraint-free operation. Unreliable and overconfident inferences, however, still limit the adoption of this gaze estimation method. To address the unreliable and overconfident issues, we introduce a confidence-aware model that predicts uncertainties together with gaze angle estimations. We also introduce a novel effectiveness evaluation method based on the causality between eye feature degradation and the rise in inference uncertainty to assess the uncertainty estimation. Our confidence-aware model demonstrates reliable uncertainty estimations while providing angular estimation accuracies on par with the state-of-the-art. Compared with the existing statistical uncertainty-angular-error evaluation metric, the proposed effectiveness evaluation approach can more effectively judge inferred uncertainties' performance at each prediction.
2006.07244
Todd Murphey
Ana Pervan and Todd D. Murphey
Algorithmic Design for Embodied Intelligence in Synthetic Cells
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In nature, biological organisms jointly evolve both their morphology and their neurological capabilities to improve their chances for survival. Consequently, task information is encoded in both their brains and their bodies. In robotics, the development of complex control and planning algorithms often bears sole responsibility for improving task performance. This dependence on centralized control can be problematic for systems with computational limitations, such as mechanical systems and robots on the microscale. In these cases we need to be able to offload complex computation onto the physical morphology of the system. To this end, we introduce a methodology for algorithmically arranging sensing and actuation components into a robot design while maintaining a low level of design complexity (quantified using a measure of graph entropy), and a high level of task embodiment (evaluated by analyzing the Kullback-Leibler divergence between physical executions of the robot and those of an idealized system). This approach computes an idealized, unconstrained control policy which is projected onto a limited selection of sensors and actuators in a given library, resulting in intelligence that is distributed away from a central processor and instead embodied in the physical body of a robot. The method is demonstrated by computationally optimizing a simulated synthetic cell.
[ { "created": "Fri, 12 Jun 2020 14:58:12 GMT", "version": "v1" } ]
2020-06-15
[ [ "Pervan", "Ana", "" ], [ "Murphey", "Todd D.", "" ] ]
In nature, biological organisms jointly evolve both their morphology and their neurological capabilities to improve their chances for survival. Consequently, task information is encoded in both their brains and their bodies. In robotics, the development of complex control and planning algorithms often bears sole responsibility for improving task performance. This dependence on centralized control can be problematic for systems with computational limitations, such as mechanical systems and robots on the microscale. In these cases we need to be able to offload complex computation onto the physical morphology of the system. To this end, we introduce a methodology for algorithmically arranging sensing and actuation components into a robot design while maintaining a low level of design complexity (quantified using a measure of graph entropy), and a high level of task embodiment (evaluated by analyzing the Kullback-Leibler divergence between physical executions of the robot and those of an idealized system). This approach computes an idealized, unconstrained control policy which is projected onto a limited selection of sensors and actuators in a given library, resulting in intelligence that is distributed away from a central processor and instead embodied in the physical body of a robot. The method is demonstrated by computationally optimizing a simulated synthetic cell.
2103.10055
Yaohui Guo
Yaohui Guo, Cong Shi, and X. Jessie Yang
Reverse Psychology in Trust-Aware Human-Robot Interaction
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
To facilitate effective human-robot interaction (HRI), trust-aware HRI has been proposed, wherein the robotic agent explicitly considers the human's trust during its planning and decision making. The success of trust-aware HRI depends on the specification of a trust dynamics model and a trust-behavior model. In this study, we proposed one novel trust-behavior model, namely the reverse psychology model, and compared it against the commonly used disuse model. We examined how the two models affect the robot's optimal policy and the human-robot team performance. Results indicate that the robot will deliberately "manipulate" the human's trust under the reverse psychology model. To correct this "manipulative" behavior, we proposed a trust-seeking reward function that facilitates trust establishment without significantly sacrificing the team performance.
[ { "created": "Thu, 18 Mar 2021 07:30:58 GMT", "version": "v1" } ]
2021-03-19
[ [ "Guo", "Yaohui", "" ], [ "Shi", "Cong", "" ], [ "Yang", "X. Jessie", "" ] ]
To facilitate effective human-robot interaction (HRI), trust-aware HRI has been proposed, wherein the robotic agent explicitly considers the human's trust during its planning and decision making. The success of trust-aware HRI depends on the specification of a trust dynamics model and a trust-behavior model. In this study, we proposed one novel trust-behavior model, namely the reverse psychology model, and compared it against the commonly used disuse model. We examined how the two models affect the robot's optimal policy and the human-robot team performance. Results indicate that the robot will deliberately "manipulate" the human's trust under the reverse psychology model. To correct this "manipulative" behavior, we proposed a trust-seeking reward function that facilitates trust establishment without significantly sacrificing the team performance.
1910.08647
Pavel Naumov
Pavel Naumov and Jia Tao
Blameworthiness in Security Games
34th AAAI Conference on Artificial Intelligence (AAAI-20), February 7-12, 2020, New York, New York, USA
null
null
null
cs.AI cs.GT cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Security games are an example of a successful real-world application of game theory. The paper defines blameworthiness of the defender and the attacker in security games using the principle of alternative possibilities and provides a sound and complete logical system for reasoning about blameworthiness in such games. Two of the axioms of this system capture the asymmetry of information in security games.
[ { "created": "Fri, 18 Oct 2019 22:22:35 GMT", "version": "v1" }, { "created": "Mon, 11 Nov 2019 20:50:51 GMT", "version": "v2" } ]
2019-11-13
[ [ "Naumov", "Pavel", "" ], [ "Tao", "Jia", "" ] ]
Security games are an example of a successful real-world application of game theory. The paper defines blameworthiness of the defender and the attacker in security games using the principle of alternative possibilities and provides a sound and complete logical system for reasoning about blameworthiness in such games. Two of the axioms of this system capture the asymmetry of information in security games.
2101.06665
Farhad Merchant
Vinay Saxena, Ankitha Reddy, Jonathan Neudorfer, John Gustafson, Sangeeth Nambiar, Rainer Leupers, Farhad Merchant
Brightening the Optical Flow through Posit Arithmetic
To appear in ISQED 2021
null
null
null
cs.AR cs.MS
http://creativecommons.org/publicdomain/zero/1.0/
As new technologies are invented, their commercial viability needs to be carefully examined along with their technical merits and demerits. The posit data format, proposed as a drop-in replacement for IEEE 754 float format, is one such invention that requires extensive theoretical and experimental study to identify products that can benefit from the advantages of posits for specific market segments. In this paper, we present an extensive empirical study of posit-based arithmetic vis-\`a-vis IEEE 754 compliant arithmetic for the optical flow estimation method called Lucas-Kanade (LuKa). First, we use SoftPosit and SoftFloat format emulators to perform an empirical error analysis of the LuKa method. Our study shows that the average error in LuKa with SoftPosit is an order of magnitude lower than LuKa with SoftFloat. We then present the integration of the hardware implementation of a posit adder and multiplier in a RISC-V open-source platform. We make several recommendations, along with the analysis of LuKa in the RISC-V context, for future generation platforms incorporating posit arithmetic units.
[ { "created": "Sun, 17 Jan 2021 13:19:10 GMT", "version": "v1" } ]
2021-01-19
[ [ "Saxena", "Vinay", "" ], [ "Reddy", "Ankitha", "" ], [ "Neudorfer", "Jonathan", "" ], [ "Gustafson", "John", "" ], [ "Nambiar", "Sangeeth", "" ], [ "Leupers", "Rainer", "" ], [ "Merchant", "Farhad", "" ] ]
As new technologies are invented, their commercial viability needs to be carefully examined along with their technical merits and demerits. The posit data format, proposed as a drop-in replacement for IEEE 754 float format, is one such invention that requires extensive theoretical and experimental study to identify products that can benefit from the advantages of posits for specific market segments. In this paper, we present an extensive empirical study of posit-based arithmetic vis-\`a-vis IEEE 754 compliant arithmetic for the optical flow estimation method called Lucas-Kanade (LuKa). First, we use SoftPosit and SoftFloat format emulators to perform an empirical error analysis of the LuKa method. Our study shows that the average error in LuKa with SoftPosit is an order of magnitude lower than LuKa with SoftFloat. We then present the integration of the hardware implementation of a posit adder and multiplier in a RISC-V open-source platform. We make several recommendations, along with the analysis of LuKa in the RISC-V context, for future generation platforms incorporating posit arithmetic units.
2301.00153
Peter \v{S}vec
Peter \v{S}vec, \v{S}tefan Balogh, Martin Homola, J\'an K\v{l}uka
Knowledge-Based Dataset for Training PE Malware Detection Models
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontologies are a standard for semantic schemata in many knowledge-intensive domains of human interest. They are now becoming increasingly important also in areas until very recently dominated by subsymbolic representations and machine-learning-based data processing. One such area is information security, and more specifically malware detection. We propose the PE Malware Ontology, which offers a reusable semantic schema for Portable Executable (PE, Windows binary format) malware files. The ontology was inspired by the structure of the data in the EMBER dataset and it currently covers the data intended for static malware analysis. With this proposal, we hope to achieve: (a) a unified semantic representation for PE malware datasets that are available or will be published in the future; (b) applicability of symbolic, neural-symbolic, or otherwise explainable approaches in the PE Malware domain that may lead to improved interpretability of results which may now be characterized by the terms defined in the ontology; and (c) by the joint publishing of semantically treated EMBER data, including fractional datasets, also improved reproducibility of experiments.
[ { "created": "Sat, 31 Dec 2022 08:46:02 GMT", "version": "v1" } ]
2023-01-03
[ [ "Švec", "Peter", "" ], [ "Balogh", "Štefan", "" ], [ "Homola", "Martin", "" ], [ "Kľuka", "Ján", "" ] ]
Ontologies are a standard for semantic schemata in many knowledge-intensive domains of human interest. They are now becoming increasingly important also in areas until very recently dominated by subsymbolic representations and machine-learning-based data processing. One such area is information security, and more specifically malware detection. We propose the PE Malware Ontology, which offers a reusable semantic schema for Portable Executable (PE, Windows binary format) malware files. The ontology was inspired by the structure of the data in the EMBER dataset and it currently covers the data intended for static malware analysis. With this proposal, we hope to achieve: (a) a unified semantic representation for PE malware datasets that are available or will be published in the future; (b) applicability of symbolic, neural-symbolic, or otherwise explainable approaches in the PE Malware domain that may lead to improved interpretability of results which may now be characterized by the terms defined in the ontology; and (c) by the joint publishing of semantically treated EMBER data, including fractional datasets, also improved reproducibility of experiments.
2406.05349
Thanh Huy Nguyen Mr.
Thanh-Huy Nguyen, Thi Kim Ngan Ngo, Mai Anh Vu, Ting-Yuan Tu
Blurry-Consistency Segmentation Framework with Selective Stacking on Differential Interference Contrast 3D Breast Cancer Spheroid
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The ability of three-dimensional (3D) spheroid modeling to study the invasive behavior of breast cancer cells has drawn increased attention. The deep learning-based image processing framework is very effective at speeding up the cell morphological analysis process. Out-of-focus photos taken while capturing 3D cells under several z-slices, however, could negatively impact the deep learning model. In this work, we created a new algorithm to handle blurry images while preserving the stacked image quality. Furthermore, we proposed a unique training architecture that leverages consistency training to help reduce the bias of the model when dense-slice stacking is applied. Additionally, the model's stability is increased under the sparse-slice stacking effect by utilizing the self-training approach. The new blurring stacking technique and training flow are combined with the suggested architecture and self-training mechanism to provide an innovative yet easy-to-use framework. Our methods produced noteworthy experimental outcomes in terms of both quantitative and qualitative aspects.
[ { "created": "Sat, 8 Jun 2024 04:31:36 GMT", "version": "v1" } ]
2024-06-11
[ [ "Nguyen", "Thanh-Huy", "" ], [ "Ngo", "Thi Kim Ngan", "" ], [ "Vu", "Mai Anh", "" ], [ "Tu", "Ting-Yuan", "" ] ]
The ability of three-dimensional (3D) spheroid modeling to study the invasive behavior of breast cancer cells has drawn increased attention. The deep learning-based image processing framework is very effective at speeding up the cell morphological analysis process. Out-of-focus photos taken while capturing 3D cells under several z-slices, however, could negatively impact the deep learning model. In this work, we created a new algorithm to handle blurry images while preserving the stacked image quality. Furthermore, we proposed a unique training architecture that leverages consistency training to help reduce the bias of the model when dense-slice stacking is applied. Additionally, the model's stability is increased under the sparse-slice stacking effect by utilizing the self-training approach. The new blurring stacking technique and training flow are combined with the suggested architecture and self-training mechanism to provide an innovative yet easy-to-use framework. Our methods produced noteworthy experimental outcomes in terms of both quantitative and qualitative aspects.
1003.3312
Secretary Aircc Journal
G. G. Md. Nawaz Ali (1), (2), Rajib Chakraborty (2), Md. Shihabul Alam (2) and Edward Chan (1), ((1) City University of Hong Kong, China and (2) Khulna University of Engineering & Technology, Bangladesh)
An Efficient Approach for Generalized Load Balancing in Multipath Packet Switched Networks
12 Pages, IJCNC Journal 2010
International Journal of Computer Networks & Communications 2.2 (2010) 142-153
10.5121/ijcnc.2010.2211
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper presents a quantitative analysis of packet-switched networks with a view to generalizing load balancing and determining the appropriate routing algorithm in a multipath environment. Several routing algorithms have been introduced for routing of packets from source to destination. Some of them route packets accurately with increased workload and some of them drastically cut down the workload. A few of them can find out a minimum workload deviation for both UDP and TCP packets. We simulated these approaches in a well-defined simulator, analyzed and evaluated their performance. After expanding our analysis with varying weights and number of paths we found that the recently proposed routing algorithm Mixed Weighted Fair Routing (MWFR) outperforms the existing routing algorithms by reducing the routing and network overhead and saving the scarce bandwidth as well as CPU consumption for packet-switched networks.
[ { "created": "Wed, 17 Mar 2010 07:15:27 GMT", "version": "v1" } ]
2010-07-15
[ [ "Ali", "G. G. Md. Nawaz", "" ], [ "Chakraborty", "Rajib", "" ], [ "Alam", "Md. Shihabul", "" ], [ "Chan", "Edward", "" ] ]
This paper presents a quantitative analysis of packet-switched networks with a view to generalizing load balancing and determining the appropriate routing algorithm in a multipath environment. Several routing algorithms have been introduced for routing of packets from source to destination. Some of them route packets accurately with increased workload and some of them drastically cut down the workload. A few of them can find out a minimum workload deviation for both UDP and TCP packets. We simulated these approaches in a well-defined simulator, analyzed and evaluated their performance. After expanding our analysis with varying weights and number of paths we found that the recently proposed routing algorithm Mixed Weighted Fair Routing (MWFR) outperforms the existing routing algorithms by reducing the routing and network overhead and saving the scarce bandwidth as well as CPU consumption for packet-switched networks.
2202.05998
Hangwei Qian
Hangwei Qian, Tian Tian, Chunyan Miao
What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
Preprint
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Self-supervised learning establishes a new paradigm of learning representations with much fewer or even no label annotations. Recently there has been remarkable progress on large-scale contrastive learning models which require substantial computing resources, yet such models are not practically optimal for small-scale tasks. To fill the gap, we aim to study contrastive learning on the wearable-based activity recognition task. Specifically, we conduct an in-depth study of contrastive learning from both algorithmic-level and task-level perspectives. For algorithmic-level analysis, we decompose contrastive models into several key components and conduct rigorous experimental evaluations to better understand the efficacy and rationale behind contrastive learning. More importantly, for task-level analysis, we show that the wearable-based signals bring unique challenges and opportunities to existing contrastive models, which cannot be readily solved by existing algorithms. Our thorough empirical studies suggest important practices and shed light on future research challenges. In the meantime, this paper presents an open-source PyTorch library \texttt{CL-HAR}, which can serve as a practical tool for researchers. The library is highly modularized and easy to use, which opens up avenues for exploring novel contrastive models quickly in the future.
[ { "created": "Sat, 12 Feb 2022 06:10:15 GMT", "version": "v1" } ]
2022-02-15
[ [ "Qian", "Hangwei", "" ], [ "Tian", "Tian", "" ], [ "Miao", "Chunyan", "" ] ]
Self-supervised learning establishes a new paradigm of learning representations with much fewer or even no label annotations. Recently there has been remarkable progress on large-scale contrastive learning models which require substantial computing resources, yet such models are not practically optimal for small-scale tasks. To fill the gap, we aim to study contrastive learning on the wearable-based activity recognition task. Specifically, we conduct an in-depth study of contrastive learning from both algorithmic-level and task-level perspectives. For algorithmic-level analysis, we decompose contrastive models into several key components and conduct rigorous experimental evaluations to better understand the efficacy and rationale behind contrastive learning. More importantly, for task-level analysis, we show that the wearable-based signals bring unique challenges and opportunities to existing contrastive models, which cannot be readily solved by existing algorithms. Our thorough empirical studies suggest important practices and shed light on future research challenges. In the meantime, this paper presents an open-source PyTorch library \texttt{CL-HAR}, which can serve as a practical tool for researchers. The library is highly modularized and easy to use, which opens up avenues for exploring novel contrastive models quickly in the future.
2112.11215
Yixuan Zhang
Nurul Suhaimi, Nutchanon Yongsatianchot, Yixuan Zhang, Anisa Amiji, Shivani A. Patel, Stacy Marsella, Miso Kim, Jacqueline Griffin, Andrea Parker
Examining Older Adults' Information Exposure, Wellbeing, and Adherence to Protective Measures During the COVID-19 Pandemic
3 pages
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Older adults are at greater risk of experiencing negative physical and psychological impacts of the novel coronavirus 2019 (COVID-19) pandemic. Our ongoing study is assessing COVID-19 information exposure in adults aged 55 and above compared to other age groups living in Massachusetts and Georgia. This work investigates the potential association between information exposure and wellbeing as well as adherence to COVID-19 protective measures. Our initial results show that older adults received information related to COVID-19 less frequently than the middle-aged group, yet they feel more content and less stressed than the other age groups. Further analysis to identify other potential confounding variables is addressed.
[ { "created": "Fri, 17 Dec 2021 15:33:13 GMT", "version": "v1" } ]
2021-12-22
[ [ "Suhaimi", "Nurul", "" ], [ "Yongsatianchot", "Nutchanon", "" ], [ "Zhang", "Yixuan", "" ], [ "Amiji", "Anisa", "" ], [ "Patel", "Shivani A.", "" ], [ "Marsella", "Stacy", "" ], [ "Kim", "Miso", "" ], [ "Griffin", "Jacqueline", "" ], [ "Parker", "Andrea", "" ] ]
Older adults are at greater risk of experiencing negative physical and psychological impacts of the novel coronavirus 2019 (COVID-19) pandemic. Our ongoing study is assessing COVID-19 information exposure in adults aged 55 and above compared to other age groups living in Massachusetts and Georgia. This work investigates the potential association between information exposure and wellbeing as well as adherence to COVID-19 protective measures. Our initial results show that older adults received information related to COVID-19 less frequently than the middle-aged group, yet they feel more content and less stressed than the other age groups. Further analysis to identify other potential confounding variables is addressed.
2402.11131
Mahyar Najibi
Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, Mahyar Najibi
Speculative Streaming: Fast LLM Inference without Auxiliary Models
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speculative decoding is a prominent technique to speed up the inference of a large target language model based on predictions of an auxiliary draft model. While effective, in application-specific settings, it often involves fine-tuning both draft and target models to achieve high acceptance rates. As the number of downstream tasks grows, these draft models add significant complexity to inference systems. We propose Speculative Streaming, a single-model speculative decoding method that fuses drafting into the target model by changing the fine-tuning objective from next token prediction to future n-gram prediction. Speculative Streaming speeds up decoding by 1.8 - 3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and Meaning Representation, without sacrificing generation quality. Additionally, Speculative Streaming is parameter-efficient. It achieves on-par/higher speed-ups than Medusa-style architectures while using ~10000X fewer extra parameters, making it well-suited for resource-constrained devices.
[ { "created": "Fri, 16 Feb 2024 23:36:43 GMT", "version": "v1" } ]
2024-02-20
[ [ "Bhendawade", "Nikhil", "" ], [ "Belousova", "Irina", "" ], [ "Fu", "Qichen", "" ], [ "Mason", "Henry", "" ], [ "Rastegari", "Mohammad", "" ], [ "Najibi", "Mahyar", "" ] ]
Speculative decoding is a prominent technique to speed up the inference of a large target language model based on predictions of an auxiliary draft model. While effective, in application-specific settings, it often involves fine-tuning both draft and target models to achieve high acceptance rates. As the number of downstream tasks grows, these draft models add significant complexity to inference systems. We propose Speculative Streaming, a single-model speculative decoding method that fuses drafting into the target model by changing the fine-tuning objective from next token prediction to future n-gram prediction. Speculative Streaming speeds up decoding by 1.8 - 3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and Meaning Representation, without sacrificing generation quality. Additionally, Speculative Streaming is parameter-efficient. It achieves on-par/higher speed-ups than Medusa-style architectures while using ~10000X fewer extra parameters, making it well-suited for resource-constrained devices.
2106.14350
Boris Kovalerchuk
Boris Kovalerchuk, Divya Chandrika Kalla, Bedant Agarwal
Deep Learning Image Recognition for Non-images
33 pages, 17 figures, 18 tables
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Powerful deep learning algorithms open an opportunity for solving non-image Machine Learning (ML) problems by transforming these problems into image recognition problems. The CPC-R algorithm presented in this chapter converts non-image data into images by visualizing non-image data. Then deep learning CNN algorithms solve the learning problems on these images. The design of the CPC-R algorithm allows preserving all high-dimensional information in 2-D images. The use of pair-values mapping instead of the single-value mapping used in the alternative approaches allows encoding each n-D point with 2 times fewer visual elements. The attributes of an n-D point are divided into pairs of its values and each pair is visualized as 2-D points in the same 2-D Cartesian coordinates. Next, grey scale or color intensity values are assigned to each pair to encode the order of pairs. This results in a heatmap image. The computational experiments with CPC-R are conducted for different CNN architectures and methods to optimize the CPC-R images, showing that the combined CPC-R and deep learning CNN algorithms are able to solve non-image ML problems, reaching high accuracy on the benchmark datasets. This chapter expands our prior work by adding more experiments to test accuracy of classification, exploring saliency and informativeness of discovered features to test their interpretability, and generalizing the approach.
[ { "created": "Mon, 28 Jun 2021 00:36:36 GMT", "version": "v1" }, { "created": "Wed, 9 Feb 2022 22:08:16 GMT", "version": "v2" } ]
2022-02-11
[ [ "Kovalerchuk", "Boris", "" ], [ "Kalla", "Divya Chandrika", "" ], [ "Agarwal", "Bedant", "" ] ]
Powerful deep learning algorithms open an opportunity for solving non-image Machine Learning (ML) problems by transforming these problems into image recognition problems. The CPC-R algorithm presented in this chapter converts non-image data into images by visualizing non-image data. Then deep learning CNN algorithms solve the learning problems on these images. The design of the CPC-R algorithm allows preserving all high-dimensional information in 2-D images. The use of pair-values mapping instead of the single-value mapping used in the alternative approaches allows encoding each n-D point with 2 times fewer visual elements. The attributes of an n-D point are divided into pairs of its values and each pair is visualized as 2-D points in the same 2-D Cartesian coordinates. Next, grey scale or color intensity values are assigned to each pair to encode the order of pairs. This results in a heatmap image. The computational experiments with CPC-R are conducted for different CNN architectures and methods to optimize the CPC-R images, showing that the combined CPC-R and deep learning CNN algorithms are able to solve non-image ML problems, reaching high accuracy on the benchmark datasets. This chapter expands our prior work by adding more experiments to test accuracy of classification, exploring saliency and informativeness of discovered features to test their interpretability, and generalizing the approach.
1904.04978
Dawei Du
Congcong Li, Dawei Du, Libo Zhang, Tiejian Luo, Yanjun Wu, Qi Tian, Longyin Wen, Siwei Lyu
Data Priming Network for Automatic Check-Out
Accepted to ACM MM 2019
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic Check-Out (ACO) has received increased interest in recent years. An important component of the ACO system is the visual item counting, which recognizes the categories and counts of the items chosen by the customers. However, the training of such a system is challenged by the domain adaptation problem, in which the training data are images from isolated items while the testing images are for collections of items. Existing methods solve this problem with data augmentation using synthesized images, but the image synthesis leads to unreal images that affect the training process. In this paper, we propose a new data priming method to solve the domain adaptation problem. Specifically, we first use pre-augmentation data priming, in which we remove distracting background from the training images using the coarse-to-fine strategy and select images with realistic view angles by the pose pruning method. In the post-augmentation step, we train a data priming network using detection and counting collaborative learning, and select more reliable images from testing data to fine-tune the final visual item tallying network. Experiments on the large scale Retail Product Checkout (RPC) dataset demonstrate the superiority of the proposed method, i.e., we achieve 80.51% checkout accuracy compared with 56.68% of the baseline methods. The source codes can be found in https://isrc.iscas.ac.cn/gitlab/research/acm-mm-2019-ACO.
[ { "created": "Wed, 10 Apr 2019 02:12:48 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2019 17:12:56 GMT", "version": "v2" }, { "created": "Wed, 7 Aug 2019 03:04:32 GMT", "version": "v3" } ]
2019-08-08
[ [ "Li", "Congcong", "" ], [ "Du", "Dawei", "" ], [ "Zhang", "Libo", "" ], [ "Luo", "Tiejian", "" ], [ "Wu", "Yanjun", "" ], [ "Tian", "Qi", "" ], [ "Wen", "Longyin", "" ], [ "Lyu", "Siwei", "" ] ]
Automatic Check-Out (ACO) has received increased interest in recent years. An important component of the ACO system is the visual item counting, which recognizes the categories and counts of the items chosen by the customers. However, the training of such a system is challenged by the domain adaptation problem, in which the training data are images from isolated items while the testing images are for collections of items. Existing methods solve this problem with data augmentation using synthesized images, but the image synthesis leads to unreal images that affect the training process. In this paper, we propose a new data priming method to solve the domain adaptation problem. Specifically, we first use pre-augmentation data priming, in which we remove distracting background from the training images using the coarse-to-fine strategy and select images with realistic view angles by the pose pruning method. In the post-augmentation step, we train a data priming network using detection and counting collaborative learning, and select more reliable images from testing data to fine-tune the final visual item tallying network. Experiments on the large scale Retail Product Checkout (RPC) dataset demonstrate the superiority of the proposed method, i.e., we achieve 80.51% checkout accuracy compared with 56.68% of the baseline methods. The source codes can be found in https://isrc.iscas.ac.cn/gitlab/research/acm-mm-2019-ACO.
2309.06188
Mazvydas Gudelis
Mazvydas Gudelis, Michal Mackiewicz, Julie Bremner, Sophie Fielding
Computer Vision Pipeline for Automated Antarctic Krill Analysis
Accepted to MVEO @ BMVC 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
British Antarctic Survey (BAS) researchers launch annual expeditions to the Antarctic in order to estimate Antarctic Krill biomass and assess the change from previous years. These comparisons provide insight into the effects of the current environment on this key component of the marine food chain. In this work we have developed tools for automating the data collection and analysis process, using web-based image annotation tools and deep learning image classification and regression models. We achieve highly accurate krill instance segmentation results with an average 77.28% AP score, as well as separate maturity stage and length estimation of krill specimens with 62.99% accuracy and a 1.98mm length error respectively.
[ { "created": "Tue, 12 Sep 2023 12:54:12 GMT", "version": "v1" }, { "created": "Thu, 12 Oct 2023 11:51:21 GMT", "version": "v2" } ]
2023-10-13
[ [ "Gudelis", "Mazvydas", "" ], [ "Mackiewicz", "Michal", "" ], [ "Bremner", "Julie", "" ], [ "Fielding", "Sophie", "" ] ]
British Antarctic Survey (BAS) researchers launch annual expeditions to the Antarctic in order to estimate Antarctic Krill biomass and assess the change from previous years. These comparisons provide insight into the effects of the current environment on this key component of the marine food chain. In this work we have developed tools for automating the data collection and analysis process, using web-based image annotation tools and deep learning image classification and regression models. We achieve highly accurate krill instance segmentation results with an average 77.28% AP score, as well as separate maturity stage and length estimation of krill specimens with 62.99% accuracy and a 1.98mm length error respectively.
1303.4169
Makiko Konoshima
Yui Noma, Makiko Konoshima
Markov Chain Monte Carlo for Arrangement of Hyperplanes in Locality-Sensitive Hashing
13 pages, 10 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since Hamming distances can be calculated by bitwise computations, they can be calculated with less computational load than L2 distances. Similarity searches can therefore be performed faster in Hamming distance space. The elements of Hamming distance space are bit strings. On the other hand, the arrangement of hyperplanes induces the transformation from the feature vectors into feature bit strings. This transformation method is a type of locality-sensitive hashing that has been attracting attention as a way of performing approximate similarity searches at high speed. Supervised learning of hyperplane arrangements allows us to obtain a method that transforms them into feature bit strings reflecting the information of labels applied to higher-dimensional feature vectors. In this paper, we propose a supervised learning method for hyperplane arrangements in feature space that uses a Markov chain Monte Carlo (MCMC) method. We consider the probability density functions used during learning, and evaluate their performance. We also consider the sampling method for learning data pairs needed in learning, and we evaluate its performance. We confirm that the accuracy of this learning method when using a suitable probability density function and sampling method is greater than the accuracy of existing learning methods.
[ { "created": "Mon, 18 Mar 2013 07:14:15 GMT", "version": "v1" } ]
2013-03-19
[ [ "Noma", "Yui", "" ], [ "Konoshima", "Makiko", "" ] ]
Since Hamming distances can be calculated by bitwise computations, they can be calculated with less computational load than L2 distances. Similarity searches can therefore be performed faster in Hamming distance space. The elements of Hamming distance space are bit strings. On the other hand, the arrangement of hyperplanes induces the transformation from the feature vectors into feature bit strings. This transformation method is a type of locality-sensitive hashing that has been attracting attention as a way of performing approximate similarity searches at high speed. Supervised learning of hyperplane arrangements allows us to obtain a method that transforms them into feature bit strings reflecting the information of labels applied to higher-dimensional feature vectors. In this paper, we propose a supervised learning method for hyperplane arrangements in feature space that uses a Markov chain Monte Carlo (MCMC) method. We consider the probability density functions used during learning, and evaluate their performance. We also consider the sampling method for learning data pairs needed in learning, and we evaluate its performance. We confirm that the accuracy of this learning method when using a suitable probability density function and sampling method is greater than the accuracy of existing learning methods.
1908.00981
Sakib Khan
Sakib Mahmud Khan, Mashrur Chowdhury
Situation-Aware Left-Turning Connected and Automated Vehicle Operation at Signalized Intersections
null
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One challenging aspect of the Connected and Automated Vehicle (CAV) operation in mixed traffic is the development of a situation-awareness module for CAVs. While operating on public roads, CAVs need to assess their surroundings, especially the intentions of non-CAVs. Generally, CAVs demonstrate a defensive driving behavior, and CAVs expect other non-autonomous entities on the road will follow the traffic rules or common driving behavior. However, the presence of aggressive human drivers in the surrounding environment, who may not follow traffic rules and behave abruptly, can lead to serious safety consequences. In this paper, we have addressed the CAV and non-CAV interaction by evaluating a situation-awareness module for left-turning CAV operations in an urban area. Existing literature does not consider the intent of the following vehicle for a CAV's left-turning movement, and existing CAV controllers do not assess the following non-CAVs' intents. Based on our simulation study, the situation-aware CAV controller module reduces up to 27% of the abrupt braking of the following non-CAVs for scenarios with different opposing through movement compared to the base scenario with the autonomous vehicle, without considering the following vehicle's intent. The analysis shows that the average travel time reductions for the opposite through traffic volumes of 600, 800, and 1000 vehicle/hour/lane are 58%, 52%, and 62%, respectively, for the aggressive human driver following the CAV if the following vehicle's intent is considered by a CAV in making a left turn at an intersection.
[ { "created": "Fri, 2 Aug 2019 16:44:14 GMT", "version": "v1" }, { "created": "Mon, 16 Nov 2020 19:21:11 GMT", "version": "v2" } ]
2020-11-18
[ [ "Khan", "Sakib Mahmud", "" ], [ "Chowdhury", "Mashrur", "" ] ]
One challenging aspect of the Connected and Automated Vehicle (CAV) operation in mixed traffic is the development of a situation-awareness module for CAVs. While operating on public roads, CAVs need to assess their surroundings, especially the intentions of non-CAVs. Generally, CAVs demonstrate a defensive driving behavior, and CAVs expect other non-autonomous entities on the road will follow the traffic rules or common driving behavior. However, the presence of aggressive human drivers in the surrounding environment, who may not follow traffic rules and behave abruptly, can lead to serious safety consequences. In this paper, we have addressed the CAV and non-CAV interaction by evaluating a situation-awareness module for left-turning CAV operations in an urban area. Existing literature does not consider the intent of the following vehicle for a CAV's left-turning movement, and existing CAV controllers do not assess the following non-CAVs' intents. Based on our simulation study, the situation-aware CAV controller module reduces up to 27% of the abrupt braking of the following non-CAVs for scenarios with different opposing through movement compared to the base scenario with the autonomous vehicle, without considering the following vehicle's intent. The analysis shows that the average travel time reductions for the opposite through traffic volumes of 600, 800, and 1000 vehicle/hour/lane are 58%, 52%, and 62%, respectively, for the aggressive human driver following the CAV if the following vehicle's intent is considered by a CAV in making a left turn at an intersection.
0705.1673
Tshilidzi Marwala
L. Mdlazi, C.J. Stander, P.S. Heyns and T. Marwala
Using artificial intelligence for data reduction in mechanical engineering
6 pages
null
null
null
cs.CE cs.AI cs.NE
null
In this paper, artificial neural networks and support vector machines are used to reduce the amount of vibration data that is required to estimate the Time Domain Average of a gear vibration signal. Two models for estimating the Time Domain Average of a gear vibration signal are proposed. The models are tested on data from an accelerated gear life test rig. Experimental results indicate that the required data for calculating the Time Domain Average of a gear vibration signal can be reduced by up to 75% when the proposed models are implemented.
[ { "created": "Fri, 11 May 2007 15:49:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mdlazi", "L.", "" ], [ "Stander", "C. J.", "" ], [ "Heyns", "P. S.", "" ], [ "Marwala", "T.", "" ] ]
In this paper, artificial neural networks and support vector machines are used to reduce the amount of vibration data that is required to estimate the Time Domain Average of a gear vibration signal. Two models for estimating the Time Domain Average of a gear vibration signal are proposed. The models are tested on data from an accelerated gear life test rig. Experimental results indicate that the required data for calculating the Time Domain Average of a gear vibration signal can be reduced by up to 75% when the proposed models are implemented.
1104.4597
Thomas Rothvoss
Thomas Rothvoss
The Entropy Rounding Method in Approximation Algorithms
null
null
null
null
cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let A be a matrix, c be any linear objective function and x be a fractional vector, say an LP solution to some discrete optimization problem. Then a recurring task in theoretical computer science (and in approximation algorithms in particular) is to obtain an integral vector y such that Ax is roughly Ay and c*y exceeds c*x by only a moderate factor. We give a new randomized rounding procedure for this task, provided that A has bounded Delta-approximate entropy. This property means that for uniformly chosen random signs chi(j) in {-1,+1} on any subset of the columns, the outcome A*chi can be approximately described using a sub-linear number of bits in expectation. To achieve this result, we modify well-known techniques from the field of discrepancy theory, especially we rely on Beck's entropy method, which to the best of our knowledge has never been used before in the context of approximation algorithms. Our result can be made constructive using the Bansal framework based on semidefinite programming. We demonstrate the versatility of our procedure by rounding fractional solutions to column-based linear programs for some generalizations of Bin Packing. For example we obtain a polynomial time OPT + O(log^2 OPT) approximation for Bin Packing With Rejection and the first AFPTAS for the Train Delivery problem.
[ { "created": "Sun, 24 Apr 2011 00:48:36 GMT", "version": "v1" } ]
2011-04-26
[ [ "Rothvoss", "Thomas", "" ] ]
Let A be a matrix, c be any linear objective function and x be a fractional vector, say an LP solution to some discrete optimization problem. Then a recurring task in theoretical computer science (and in approximation algorithms in particular) is to obtain an integral vector y such that Ax is roughly Ay and c*y exceeds c*x by only a moderate factor. We give a new randomized rounding procedure for this task, provided that A has bounded Delta-approximate entropy. This property means that for uniformly chosen random signs chi(j) in {-1,+1} on any subset of the columns, the outcome A*chi can be approximately described using a sub-linear number of bits in expectation. To achieve this result, we modify well-known techniques from the field of discrepancy theory, especially we rely on Beck's entropy method, which to the best of our knowledge has never been used before in the context of approximation algorithms. Our result can be made constructive using the Bansal framework based on semidefinite programming. We demonstrate the versatility of our procedure by rounding fractional solutions to column-based linear programs for some generalizations of Bin Packing. For example we obtain a polynomial time OPT + O(log^2 OPT) approximation for Bin Packing With Rejection and the first AFPTAS for the Train Delivery problem.
1609.04259
Guillaume Moroz
Guillaume Moroz (VEGAS), \'Eric Schost
A Fast Algorithm for Computing the Truncated Resultant
null
ISSAC '16, Jul 2016, Waterloo, Canada. ACM, Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, pp.341-348, 2016
10.1145/2930889.2930931
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let P and Q be two polynomials in K[x, y] with degree at most d, where K is a field. Denoting by R $\in$ K[x] the resultant of P and Q with respect to y, we present an algorithm to compute R mod x^k in O~(kd) arithmetic operations in K, where the O~ notation indicates that we omit polylogarithmic factors. This is an improvement over state-of-the-art algorithms that require to compute R in O~(d^3) operations before computing its first k coefficients.
[ { "created": "Wed, 14 Sep 2016 13:25:33 GMT", "version": "v1" } ]
2016-09-15
[ [ "Moroz", "Guillaume", "", "VEGAS" ], [ "Schost", "Éric", "" ] ]
Let P and Q be two polynomials in K[x, y] with degree at most d, where K is a field. Denoting by R $\in$ K[x] the resultant of P and Q with respect to y, we present an algorithm to compute R mod x^k in O~(kd) arithmetic operations in K, where the O~ notation indicates that we omit polylogarithmic factors. This is an improvement over state-of-the-art algorithms that require to compute R in O~(d^3) operations before computing its first k coefficients.
2307.01559
Elia Cereda
Elia Cereda and Alessandro Giusti and Daniele Palossi
Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones
This paper has been accepted for publication in the EWSN 2023 conference. \copyright 2023 ACM
null
null
null
cs.RO cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard. Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, can not be trusted. To tackle this concern, we propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone. Compared to a State-of-the-Art visual pose estimation network that entirely runs onboard, a larger network executed in a distributed way improves the $R^2$ score by +0.19; in case of attack, our approach detects it within 2s with 95% probability.
[ { "created": "Tue, 4 Jul 2023 08:29:41 GMT", "version": "v1" } ]
2023-07-06
[ [ "Cereda", "Elia", "" ], [ "Giusti", "Alessandro", "" ], [ "Palossi", "Daniele", "" ] ]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard. Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, can not be trusted. To tackle this concern, we propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone. Compared to a State-of-the-Art visual pose estimation network that entirely runs onboard, a larger network executed in a distributed way improves the $R^2$ score by +0.19; in case of attack, our approach detects it within 2s with 95% probability.
1703.01026
Edward Barker
Edward W. Barker and Charl J. Ras
Unsupervised Basis Function Adaptation for Reinforcement Learning
Extended abstract submitted (3 March 2017) for 3rd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2017
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When using reinforcement learning (RL) algorithms to evaluate a policy it is common, given a large state space, to introduce some form of approximation architecture for the value function (VF). The exact form of this architecture can have a significant effect on the accuracy of the VF estimate, however, and determining a suitable approximation architecture can often be a highly complex task. Consequently there is a large amount of interest in the potential for allowing RL algorithms to adaptively generate approximation architectures. We investigate a method of adapting approximation architectures which uses feedback regarding the frequency with which an agent has visited certain states to guide which areas of the state space to approximate with greater detail. This method is "unsupervised" in the sense that it makes no direct reference to reward or the VF estimate. We introduce an algorithm based upon this idea which adapts a state aggregation approximation architecture on-line. A common method of scoring a VF estimate is to weight the squared Bellman error of each state-action by the probability of that state-action occurring. Adopting this scoring method, and assuming $S$ states, we demonstrate theoretically that - provided (1) the number of cells $X$ in the state aggregation architecture is of order $\sqrt{S}\log_2{S}\ln{S}$ or greater, (2) the policy and transition function are close to deterministic, and (3) the prior for the transition function is uniformly distributed - our algorithm, used in conjunction with a suitable RL algorithm, can guarantee a score which is arbitrarily close to zero as $S$ becomes large. It is able to do this despite having only $O(X \log_2S)$ space complexity and negligible time complexity. The results take advantage of certain properties of the stationary distributions of Markov chains.
[ { "created": "Fri, 3 Mar 2017 03:24:03 GMT", "version": "v1" } ]
2017-03-06
[ [ "Barker", "Edward W.", "" ], [ "Ras", "Charl J.", "" ] ]
When using reinforcement learning (RL) algorithms to evaluate a policy it is common, given a large state space, to introduce some form of approximation architecture for the value function (VF). The exact form of this architecture can have a significant effect on the accuracy of the VF estimate, however, and determining a suitable approximation architecture can often be a highly complex task. Consequently there is a large amount of interest in the potential for allowing RL algorithms to adaptively generate approximation architectures. We investigate a method of adapting approximation architectures which uses feedback regarding the frequency with which an agent has visited certain states to guide which areas of the state space to approximate with greater detail. This method is "unsupervised" in the sense that it makes no direct reference to reward or the VF estimate. We introduce an algorithm based upon this idea which adapts a state aggregation approximation architecture on-line. A common method of scoring a VF estimate is to weight the squared Bellman error of each state-action by the probability of that state-action occurring. Adopting this scoring method, and assuming $S$ states, we demonstrate theoretically that - provided (1) the number of cells $X$ in the state aggregation architecture is of order $\sqrt{S}\log_2{S}\ln{S}$ or greater, (2) the policy and transition function are close to deterministic, and (3) the prior for the transition function is uniformly distributed - our algorithm, used in conjunction with a suitable RL algorithm, can guarantee a score which is arbitrarily close to zero as $S$ becomes large. It is able to do this despite having only $O(X \log_2S)$ space complexity and negligible time complexity. The results take advantage of certain properties of the stationary distributions of Markov chains.
1903.00902
Wendong Wang
Wendong Wang, Jianjun Wang
Deterministic Analysis of Weighted BPDN With Partially Known Support Information
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, with the aid of the powerful Restricted Isometry Constant (RIC), a deterministic (or say non-stochastic) analysis, which includes a series of sufficient conditions (related to the RIC order) and their resultant error estimates, is established for the weighted Basis Pursuit De-Noising (BPDN) to guarantee the robust signal recovery when Partially Known Support Information (PKSI) of the signal is available. Specifically, the obtained conditions extend nontrivially the ones induced recently for the traditional constrained weighted $\ell_{1}$-minimization model to those for its unconstrained counterpart, i.e., the weighted BPDN. The obtained error estimates are also comparable to the analogous ones induced previously for the robust recovery of the signals with PKSI from some constrained models. Moreover, these results to some degree may well complement the recent investigation of the weighted BPDN which is based on the stochastic analysis.
[ { "created": "Sun, 3 Mar 2019 13:20:45 GMT", "version": "v1" } ]
2019-03-05
[ [ "Wang", "Wendong", "" ], [ "Wang", "Jianjun", "" ] ]
In this paper, with the aid of the powerful Restricted Isometry Constant (RIC), a deterministic (or say non-stochastic) analysis, which includes a series of sufficient conditions (related to the RIC order) and their resultant error estimates, is established for the weighted Basis Pursuit De-Noising (BPDN) to guarantee the robust signal recovery when Partially Known Support Information (PKSI) of the signal is available. Specifically, the obtained conditions extend nontrivially the ones induced recently for the traditional constrained weighted $\ell_{1}$-minimization model to those for its unconstrained counterpart, i.e., the weighted BPDN. The obtained error estimates are also comparable to the analogous ones induced previously for the robust recovery of the signals with PKSI from some constrained models. Moreover, these results to some degree may well complement the recent investigation of the weighted BPDN which is based on the stochastic analysis.
2310.15017
Zhongjian Qiao
Zhongjian Qiao and Jiafei Lyu and Xiu Li
Mind the Model, Not the Agent: The Primacy Bias in Model-based RL
Accepted by European Conference on Artificial Intelligence (ECAI) 2024
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primacy bias in model-free reinforcement learning (MFRL), which refers to the agent's tendency to overfit early data and lose the ability to learn from new data, can significantly decrease the performance of MFRL algorithms. Previous studies have shown that employing simple techniques, such as resetting the agent's parameters, can substantially alleviate the primacy bias in MFRL. However, the primacy bias in model-based reinforcement learning (MBRL) remains unexplored. In this work, we focus on investigating the primacy bias in MBRL. We begin by observing that resetting the agent's parameters harms its performance in the context of MBRL. We further find that the primacy bias in MBRL is more closely related to the primacy bias of the world model instead of the primacy bias of the agent. Based on this finding, we propose \textit{world model resetting}, a simple yet effective technique to alleviate the primacy bias in MBRL. We apply our method to two different MBRL algorithms, MBPO and DreamerV2. We validate the effectiveness of our method on multiple continuous control tasks on MuJoCo and DeepMind Control Suite, as well as discrete control tasks on Atari 100k benchmark. The experimental results show that \textit{world model resetting} can significantly alleviate the primacy bias in the model-based setting and improve the algorithm's performance. We also give a guide on how to perform \textit{world model resetting} effectively.
[ { "created": "Mon, 23 Oct 2023 15:12:20 GMT", "version": "v1" }, { "created": "Sun, 7 Jul 2024 14:32:02 GMT", "version": "v2" } ]
2024-07-09
[ [ "Qiao", "Zhongjian", "" ], [ "Lyu", "Jiafei", "" ], [ "Li", "Xiu", "" ] ]
The primacy bias in model-free reinforcement learning (MFRL), which refers to the agent's tendency to overfit early data and lose the ability to learn from new data, can significantly decrease the performance of MFRL algorithms. Previous studies have shown that employing simple techniques, such as resetting the agent's parameters, can substantially alleviate the primacy bias in MFRL. However, the primacy bias in model-based reinforcement learning (MBRL) remains unexplored. In this work, we focus on investigating the primacy bias in MBRL. We begin by observing that resetting the agent's parameters harms its performance in the context of MBRL. We further find that the primacy bias in MBRL is more closely related to the primacy bias of the world model instead of the primacy bias of the agent. Based on this finding, we propose \textit{world model resetting}, a simple yet effective technique to alleviate the primacy bias in MBRL. We apply our method to two different MBRL algorithms, MBPO and DreamerV2. We validate the effectiveness of our method on multiple continuous control tasks on MuJoCo and DeepMind Control Suite, as well as discrete control tasks on Atari 100k benchmark. The experimental results show that \textit{world model resetting} can significantly alleviate the primacy bias in the model-based setting and improve the algorithm's performance. We also give a guide on how to perform \textit{world model resetting} effectively.
2305.09123
Weizhao Tang
Weizhao Tang, Peiyao Sheng, Ronghao Ni, Pronoy Roy, Xuechao Wang, Giulia Fanti, and Pramod Viswanath
CFT-Forensics: High-Performance Byzantine Accountability for Crash Fault Tolerant Protocols
null
null
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Crash fault tolerant (CFT) consensus algorithms are commonly used in scenarios where system components are trusted -- e.g., enterprise settings and government infrastructure. However, CFT consensus can be broken by even a single corrupt node. A desirable property in the face of such potential Byzantine faults is \emph{accountability}: if a corrupt node breaks protocol and affects consensus safety, it should be possible to identify the culpable components with cryptographic integrity from the node states. Today, the best-known protocol for providing accountability to CFT protocols is called PeerReview; it essentially records a signed transcript of all messages sent during the CFT protocol. Because PeerReview is agnostic to the underlying CFT protocol, it incurs high communication and storage overhead. We propose CFT-Forensics, an accountability framework for CFT protocols. We show that for a special family of \emph{forensics-compliant} CFT protocols (which includes widely-used CFT protocols like Raft and multi-Paxos), CFT-Forensics gives provable accountability guarantees. Under realistic deployment settings, we show theoretically that CFT-Forensics operates at a fraction of the cost of PeerReview. We subsequently instantiate CFT-Forensics for Raft, and implement Raft-Forensics as an extension to the popular nuRaft library. In extensive experiments, we demonstrate that Raft-Forensics adds low overhead to vanilla Raft. With 256 byte messages, Raft-Forensics achieves a peak throughput 87.8\% of vanilla Raft at 46\% higher latency ($+44$ ms). We finally integrate Raft-Forensics into the open-source central bank digital currency OpenCBDC, and show that in wide-area network experiments, Raft-Forensics achieves 97.8\% of the throughput of Raft, with 14.5\% higher latency ($+326$ ms).
[ { "created": "Tue, 16 May 2023 03:09:26 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2023 20:59:49 GMT", "version": "v2" }, { "created": "Mon, 3 Jun 2024 14:20:12 GMT", "version": "v3" } ]
2024-06-04
[ [ "Tang", "Weizhao", "" ], [ "Sheng", "Peiyao", "" ], [ "Ni", "Ronghao", "" ], [ "Roy", "Pronoy", "" ], [ "Wang", "Xuechao", "" ], [ "Fanti", "Giulia", "" ], [ "Viswanath", "Pramod", "" ] ]
Crash fault tolerant (CFT) consensus algorithms are commonly used in scenarios where system components are trusted -- e.g., enterprise settings and government infrastructure. However, CFT consensus can be broken by even a single corrupt node. A desirable property in the face of such potential Byzantine faults is \emph{accountability}: if a corrupt node breaks protocol and affects consensus safety, it should be possible to identify the culpable components with cryptographic integrity from the node states. Today, the best-known protocol for providing accountability to CFT protocols is called PeerReview; it essentially records a signed transcript of all messages sent during the CFT protocol. Because PeerReview is agnostic to the underlying CFT protocol, it incurs high communication and storage overhead. We propose CFT-Forensics, an accountability framework for CFT protocols. We show that for a special family of \emph{forensics-compliant} CFT protocols (which includes widely-used CFT protocols like Raft and multi-Paxos), CFT-Forensics gives provable accountability guarantees. Under realistic deployment settings, we show theoretically that CFT-Forensics operates at a fraction of the cost of PeerReview. We subsequently instantiate CFT-Forensics for Raft, and implement Raft-Forensics as an extension to the popular nuRaft library. In extensive experiments, we demonstrate that Raft-Forensics adds low overhead to vanilla Raft. With 256 byte messages, Raft-Forensics achieves a peak throughput 87.8\% of vanilla Raft at 46\% higher latency ($+44$ ms). We finally integrate Raft-Forensics into the open-source central bank digital currency OpenCBDC, and show that in wide-area network experiments, Raft-Forensics achieves 97.8\% of the throughput of Raft, with 14.5\% higher latency ($+326$ ms).
2102.10757
Yaochen Xie
Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, Shuiwang Ji
Self-Supervised Learning of Graph Neural Networks: A Unified Review
Accepted by TPAMI. 26 pages, 7 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep models trained in supervised mode have achieved remarkable success on a variety of tasks. When labeled samples are limited, self-supervised learning (SSL) is emerging as a new paradigm for making use of large amounts of unlabeled samples. SSL has achieved promising performance on natural language and image learning tasks. Recently, there is a trend to extend such success to graph data using graph neural networks (GNNs). In this survey, we provide a unified review of different ways of training GNNs using SSL. Specifically, we categorize SSL methods into contrastive and predictive models. In either category, we provide a unified framework for methods as well as how these methods differ in each component under the framework. Our unified treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms. We also summarize different SSL settings and the corresponding datasets used in each setting. To facilitate methodological development and empirical comparison, we develop a standardized testbed for SSL in GNNs, including implementations of common baseline methods, datasets, and evaluation metrics.
[ { "created": "Mon, 22 Feb 2021 03:43:45 GMT", "version": "v1" }, { "created": "Tue, 23 Feb 2021 18:12:23 GMT", "version": "v2" }, { "created": "Tue, 23 Mar 2021 22:24:21 GMT", "version": "v3" }, { "created": "Tue, 15 Feb 2022 19:15:32 GMT", "version": "v4" }, { "created": "Mon, 25 Apr 2022 14:44:40 GMT", "version": "v5" } ]
2022-04-26
[ [ "Xie", "Yaochen", "" ], [ "Xu", "Zhao", "" ], [ "Zhang", "Jingtun", "" ], [ "Wang", "Zhengyang", "" ], [ "Ji", "Shuiwang", "" ] ]
Deep models trained in supervised mode have achieved remarkable success on a variety of tasks. When labeled samples are limited, self-supervised learning (SSL) is emerging as a new paradigm for making use of large amounts of unlabeled samples. SSL has achieved promising performance on natural language and image learning tasks. Recently, there is a trend to extend such success to graph data using graph neural networks (GNNs). In this survey, we provide a unified review of different ways of training GNNs using SSL. Specifically, we categorize SSL methods into contrastive and predictive models. In either category, we provide a unified framework for methods as well as how these methods differ in each component under the framework. Our unified treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms. We also summarize different SSL settings and the corresponding datasets used in each setting. To facilitate methodological development and empirical comparison, we develop a standardized testbed for SSL in GNNs, including implementations of common baseline methods, datasets, and evaluation metrics.
1409.0988
Matthias W\"ahlisch
Michael Frey, Mesut G\"unes
Attack of the Ants: Studying Ant Routing Algorithms in Simulation and Wireless Testbeds
Published in: A. F\"orster, C. Sommer, T. Steinbach, M. W\"ahlisch (Eds.), Proc. of 1st OMNeT++ Community Summit, Hamburg, Germany, September 2, 2014, arXiv:1409.0093, 2014
null
null
OMNET/2014/08
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless networks are becoming the key building block of our communications infrastructure. Examples range from cellular networks to ad hoc and sensor networks in wildlife monitoring and environmental scenarios. With the rise of the Internet of Things (IoT), millions of physical and virtual objects will communicate wirelessly and enhance daily life. The adaptivity and scalability of wireless networks in the IoT is one of the most challenging tasks. Bio-inspired networking algorithms are a way to tackle these issues. In this paper, we present a simulation framework based on OMNeT++ to implement ant routing algorithms to study and compare them on the algorithmic level and an approach to run large simulation studies in a comprehensive way.
[ { "created": "Wed, 3 Sep 2014 08:30:36 GMT", "version": "v1" } ]
2014-09-05
[ [ "Frey", "Michael", "" ], [ "Günes", "Mesut", "" ] ]
Wireless networks are becoming the key building block of our communications infrastructure. Examples range from cellular networks to ad hoc and sensor networks in wildlife monitoring and environmental scenarios. With the rise of the Internet of Things (IoT), millions of physical and virtual objects will communicate wirelessly and enhance daily life. The adaptivity and scalability of wireless networks in the IoT is one of the most challenging tasks. Bio-inspired networking algorithms are a way to tackle these issues. In this paper, we present a simulation framework based on OMNeT++ to implement ant routing algorithms to study and compare them on the algorithmic level and an approach to run large simulation studies in a comprehensive way.
1503.02427
Mingxuan Wang
Mingxuan Wang and Zhengdong Lu and Hang Li and Qun Liu
Syntax-based Deep Matching of Short Texts
Accepted by IJCAI-2015 as full paper
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts. We propose a new approach to the problem, called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The approach consists of two components, 1) a mining algorithm to discover patterns for matching two short texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network having a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [Wang et al., 2013], and show that DeepMatch$_{tree}$ can outperform a number of competitor models, including one without using dependency trees and one based on word-embedding, all with large margins.
[ { "created": "Mon, 9 Mar 2015 11:11:15 GMT", "version": "v1" }, { "created": "Tue, 10 Mar 2015 03:24:58 GMT", "version": "v2" }, { "created": "Thu, 12 Mar 2015 08:31:01 GMT", "version": "v3" }, { "created": "Fri, 24 Apr 2015 04:48:25 GMT", "version": "v4" }, { "created": "Mon, 18 May 2015 13:26:28 GMT", "version": "v5" }, { "created": "Fri, 12 Jun 2015 08:26:01 GMT", "version": "v6" } ]
2015-06-15
[ [ "Wang", "Mingxuan", "" ], [ "Lu", "Zhengdong", "" ], [ "Li", "Hang", "" ], [ "Liu", "Qun", "" ] ]
Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or, more generally, two short texts. We propose a new approach to the problem, called Deep Match Tree (DeepMatch$_{tree}$), under a general setting. The approach consists of two components: 1) a mining algorithm to discover patterns for matching two short texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network with a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [Wang et al., 2013], and show that DeepMatch$_{tree}$ can outperform a number of competitor models, including one without using dependency trees and one based on word embedding, all with large margins.
2110.11736
Wanchuang Zhu Dr.
Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Ke Deng
MANDERA: Malicious Node Detection in Federated Learning via Ranking
17 pages, 11 figures, ICML
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Byzantine attacks hinder the deployment of federated learning algorithms. Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its unique distribution, and (2) benign gradients and attacked gradients are always mixed (two-sample test methods cannot be applied directly). To address the above, for the first time, we propose MANDERA, which is theoretically guaranteed to efficiently detect all malicious gradients under Byzantine attacks with no prior knowledge or history about the number of attacked nodes. More specifically, we transform the original updating-gradient space into a ranking matrix. Through this operation, the scales of different dimensions of the gradients in the ranking space become identical, so the high-dimensional benign gradients and the malicious gradients can be easily separated. The effectiveness of MANDERA is further confirmed by experiments on four Byzantine attack implementations (Gaussian, Zero Gradient, Sign Flipping, Shifted Mean), compared with state-of-the-art defenses. The experiments cover both IID and non-IID datasets.
[ { "created": "Fri, 22 Oct 2021 12:14:16 GMT", "version": "v1" }, { "created": "Tue, 17 Jan 2023 04:24:03 GMT", "version": "v2" } ]
2023-01-18
[ [ "Zhu", "Wanchuang", "" ], [ "Zhao", "Benjamin Zi Hao", "" ], [ "Luo", "Simon", "" ], [ "Liu", "Tongliang", "" ], [ "Deng", "Ke", "" ] ]
Byzantine attacks hinder the deployment of federated learning algorithms. Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its unique distribution, and (2) benign gradients and attacked gradients are always mixed (two-sample test methods cannot be applied directly). To address the above, for the first time, we propose MANDERA, which is theoretically guaranteed to efficiently detect all malicious gradients under Byzantine attacks with no prior knowledge or history about the number of attacked nodes. More specifically, we transform the original updating-gradient space into a ranking matrix. Through this operation, the scales of different dimensions of the gradients in the ranking space become identical, so the high-dimensional benign gradients and the malicious gradients can be easily separated. The effectiveness of MANDERA is further confirmed by experiments on four Byzantine attack implementations (Gaussian, Zero Gradient, Sign Flipping, Shifted Mean), compared with state-of-the-art defenses. The experiments cover both IID and non-IID datasets.
2104.02392
Thomas Steiner
Thomas Steiner, Fran\c{c}ois Beaufort
Accessing HID Devices on the Web With the WebHID API: How to play the Chrome Dino Game by Jumping With a Nintendo Joy-Con Controller in One's Pocket
2 pages, accepted at the Developers Track of The Web Conference 2021
null
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
In this demonstration, we show how special hardware like Nintendo Joy-Con controllers can be made accessible from the Web through the new WebHID API. This novel technology proposal allows developers to write Web drivers in pure JavaScript that talk to Human Interface Device (HID) devices via the HID protocol. One such example of a driver has been realized in the project Joy-Con-WebHID, which allows for fun pastimes like playing the Google Chrome browser's offline dinosaur game by jumping. This works thanks to the accelerometers built into Joy-Con controllers whose signals are read out by the driver and used to control the game character in the browser. A video of the experience is available.
[ { "created": "Tue, 6 Apr 2021 09:49:53 GMT", "version": "v1" } ]
2021-04-07
[ [ "Steiner", "Thomas", "" ], [ "Beaufort", "François", "" ] ]
In this demonstration, we show how special hardware like Nintendo Joy-Con controllers can be made accessible from the Web through the new WebHID API. This novel technology proposal allows developers to write Web drivers in pure JavaScript that talk to Human Interface Device (HID) devices via the HID protocol. One such example of a driver has been realized in the project Joy-Con-WebHID, which allows for fun pastimes like playing the Google Chrome browser's offline dinosaur game by jumping. This works thanks to the accelerometers built into Joy-Con controllers whose signals are read out by the driver and used to control the game character in the browser. A video of the experience is available.
2203.15233
I-Chao Shen
I-Chao Shen, Yu Ju Chen, Oliver van Kaick, Takeo Igarashi
AutoPoly: Predicting a Polygonal Mesh Construction Sequence from a Silhouette Image
8 pages
null
null
null
cs.CV cs.CG cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polygonal modeling is a core task of content creation in Computer Graphics. The complexity of modeling, in terms of the number and order of operations and the time required to execute them, makes it challenging to learn and execute. Our goal is to automatically derive a polygonal modeling sequence for a given target. Then, one can learn polygonal modeling by observing the resulting sequence and also expedite the modeling process by starting from the auto-generated result. As a starting point for building a system for 3D modeling in the future, we tackle the 2D shape modeling problem and present AutoPoly, a hybrid method that generates a polygonal mesh construction sequence from a silhouette image. The key idea of our method is the use of the Monte Carlo tree search (MCTS) algorithm and differentiable rendering to separately predict sequential topological actions and geometric actions. Our hybrid method can alter topology, whereas the recently proposed inverse shape estimation methods using differentiable rendering can only handle a fixed topology. Our novel reward function encourages MCTS to select topological actions that lead to a simpler shape without self-intersection. We further designed two deep learning-based methods to improve the expansion and simulation steps in the MCTS search process: an $n$-step "future action prediction" network (nFAP-Net) to generate candidates for potential topological actions, and a shape warping network (WarpNet) to predict polygonal shapes given the predicted rendered images and topological actions. We demonstrate the efficiency of our method on 2D polygonal shapes of multiple man-made object categories.
[ { "created": "Tue, 29 Mar 2022 04:48:47 GMT", "version": "v1" } ]
2022-03-30
[ [ "Shen", "I-Chao", "" ], [ "Chen", "Yu Ju", "" ], [ "van Kaick", "Oliver", "" ], [ "Igarashi", "Takeo", "" ] ]
Polygonal modeling is a core task of content creation in Computer Graphics. The complexity of modeling, in terms of the number and order of operations and the time required to execute them, makes it challenging to learn and execute. Our goal is to automatically derive a polygonal modeling sequence for a given target. Then, one can learn polygonal modeling by observing the resulting sequence and also expedite the modeling process by starting from the auto-generated result. As a starting point for building a system for 3D modeling in the future, we tackle the 2D shape modeling problem and present AutoPoly, a hybrid method that generates a polygonal mesh construction sequence from a silhouette image. The key idea of our method is the use of the Monte Carlo tree search (MCTS) algorithm and differentiable rendering to separately predict sequential topological actions and geometric actions. Our hybrid method can alter topology, whereas the recently proposed inverse shape estimation methods using differentiable rendering can only handle a fixed topology. Our novel reward function encourages MCTS to select topological actions that lead to a simpler shape without self-intersection. We further designed two deep learning-based methods to improve the expansion and simulation steps in the MCTS search process: an $n$-step "future action prediction" network (nFAP-Net) to generate candidates for potential topological actions, and a shape warping network (WarpNet) to predict polygonal shapes given the predicted rendered images and topological actions. We demonstrate the efficiency of our method on 2D polygonal shapes of multiple man-made object categories.
1812.00898
Aishwarya Agrawal
Aishwarya Agrawal, Mateusz Malinowski, Felix Hill, Ali Eslami, Oriol Vinyals, Tejas Kulkarni
Generating Diverse Programs with Instruction Conditioned Reinforced Adversarial Learning
null
null
null
null
cs.LG cs.CL cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in Deep Reinforcement Learning have led to agents that perform well across a variety of sensory-motor domains. In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction. Final goals are specified to our agent via images of the scenes. A symbolic instruction consistent with the goal images is used as the conditioning input for our policies. Since a single instruction corresponds to a diverse set of different but still consistent end-goal images, the agent needs to learn to generate a distribution over programs given an instruction. We demonstrate that with simple changes to the reinforced adversarial learning objective, we can learn instruction conditioned policies to achieve the corresponding diverse set of goals. Most importantly, our agent's stochastic policy is shown to more accurately capture the diversity in the goal distribution than a fixed pixel-based reward function baseline. We demonstrate the efficacy of our approach on two domains: (1) drawing MNIST digits with a paint software conditioned on instructions and (2) constructing scenes in a 3D editor that satisfy a certain instruction.
[ { "created": "Mon, 3 Dec 2018 16:51:35 GMT", "version": "v1" } ]
2018-12-04
[ [ "Agrawal", "Aishwarya", "" ], [ "Malinowski", "Mateusz", "" ], [ "Hill", "Felix", "" ], [ "Eslami", "Ali", "" ], [ "Vinyals", "Oriol", "" ], [ "Kulkarni", "Tejas", "" ] ]
Advances in Deep Reinforcement Learning have led to agents that perform well across a variety of sensory-motor domains. In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction. Final goals are specified to our agent via images of the scenes. A symbolic instruction consistent with the goal images is used as the conditioning input for our policies. Since a single instruction corresponds to a diverse set of different but still consistent end-goal images, the agent needs to learn to generate a distribution over programs given an instruction. We demonstrate that with simple changes to the reinforced adversarial learning objective, we can learn instruction conditioned policies to achieve the corresponding diverse set of goals. Most importantly, our agent's stochastic policy is shown to more accurately capture the diversity in the goal distribution than a fixed pixel-based reward function baseline. We demonstrate the efficacy of our approach on two domains: (1) drawing MNIST digits with a paint software conditioned on instructions and (2) constructing scenes in a 3D editor that satisfy a certain instruction.
2311.15210
Pingyao Feng
Pingyao Feng, Siheng Yi, Qingrui Qu, Zhiwang Yu, Yifei Zhu
Topology combined machine learning for consonant recognition
null
null
null
null
cs.LG math.ST stat.TH
http://creativecommons.org/licenses/by/4.0/
In artificial-intelligence-aided signal processing, existing deep learning models often exhibit a black-box structure, and their validity and comprehensibility remain elusive. The integration of topological methods, despite its relatively nascent application, serves a dual purpose of making models more interpretable as well as extracting structural information from time-dependent data for smarter learning. Here, we provide a transparent and broadly applicable methodology, TopCap, to capture the most salient topological features inherent in time series for machine learning. Rooted in high-dimensional ambient spaces, TopCap is capable of capturing features rarely detected in datasets with low intrinsic dimensionality. Applying time-delay embedding and persistent homology, we obtain descriptors which encapsulate information such as the vibration of a time series, in terms of its variability of frequency, amplitude, and average line, demonstrated with simulated data. This information is then vectorised and fed into multiple machine learning algorithms such as k-nearest neighbours and support vector machines. Notably, in classifying voiced and voiceless consonants, TopCap achieves an accuracy exceeding 96% and is geared towards designing topological convolutional layers for deep learning of speech and audio signals.
[ { "created": "Sun, 26 Nov 2023 06:53:56 GMT", "version": "v1" } ]
2023-11-28
[ [ "Feng", "Pingyao", "" ], [ "Yi", "Siheng", "" ], [ "Qu", "Qingrui", "" ], [ "Yu", "Zhiwang", "" ], [ "Zhu", "Yifei", "" ] ]
In artificial-intelligence-aided signal processing, existing deep learning models often exhibit a black-box structure, and their validity and comprehensibility remain elusive. The integration of topological methods, despite its relatively nascent application, serves a dual purpose of making models more interpretable as well as extracting structural information from time-dependent data for smarter learning. Here, we provide a transparent and broadly applicable methodology, TopCap, to capture the most salient topological features inherent in time series for machine learning. Rooted in high-dimensional ambient spaces, TopCap is capable of capturing features rarely detected in datasets with low intrinsic dimensionality. Applying time-delay embedding and persistent homology, we obtain descriptors which encapsulate information such as the vibration of a time series, in terms of its variability of frequency, amplitude, and average line, demonstrated with simulated data. This information is then vectorised and fed into multiple machine learning algorithms such as k-nearest neighbours and support vector machines. Notably, in classifying voiced and voiceless consonants, TopCap achieves an accuracy exceeding 96% and is geared towards designing topological convolutional layers for deep learning of speech and audio signals.
2205.00479
Zhixian Yang
Zhixian Yang, Renliang Sun, Xiaojun Wan
Nearest Neighbor Knowledge Distillation for Neural Machine Translation
Accepted to NAACL 2022 Main Conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
k-nearest-neighbor machine translation (kNN-MT), proposed by Khandelwal et al. (2021), has achieved many state-of-the-art results in machine translation tasks. Although effective, kNN-MT requires conducting kNN searches through the large datastore for each decoding step during inference, prohibitively increasing the decoding cost and thus making deployment in real-world applications difficult. In this paper, we propose to move the time-consuming kNN search forward to the preprocessing phase, and then introduce Nearest Neighbor Knowledge Distillation (kNN-KD), which trains the base NMT model to directly learn the knowledge of kNN. Distilling knowledge retrieved by kNN can encourage the NMT model to take more reasonable target tokens into consideration, thus addressing the overcorrection problem. Extensive experimental results show that the proposed method achieves consistent improvement over state-of-the-art baselines including kNN-MT, while maintaining the same training and decoding speed as the standard NMT model.
[ { "created": "Sun, 1 May 2022 14:30:49 GMT", "version": "v1" } ]
2022-05-03
[ [ "Yang", "Zhixian", "" ], [ "Sun", "Renliang", "" ], [ "Wan", "Xiaojun", "" ] ]
k-nearest-neighbor machine translation (kNN-MT), proposed by Khandelwal et al. (2021), has achieved many state-of-the-art results in machine translation tasks. Although effective, kNN-MT requires conducting kNN searches through the large datastore for each decoding step during inference, prohibitively increasing the decoding cost and thus making deployment in real-world applications difficult. In this paper, we propose to move the time-consuming kNN search forward to the preprocessing phase, and then introduce Nearest Neighbor Knowledge Distillation (kNN-KD), which trains the base NMT model to directly learn the knowledge of kNN. Distilling knowledge retrieved by kNN can encourage the NMT model to take more reasonable target tokens into consideration, thus addressing the overcorrection problem. Extensive experimental results show that the proposed method achieves consistent improvement over state-of-the-art baselines including kNN-MT, while maintaining the same training and decoding speed as the standard NMT model.
1302.3721
Joseph Mellor
Joseph Mellor, Jonathan Shapiro
Thompson Sampling in Switching Environments with Bayesian Online Change Point Detection
A version will appear in the Sixteenth international conference on Artificial Intelligence and Statistics (AIStats 2013)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thompson Sampling has recently been shown to be optimal in the Bernoulli multi-armed bandit setting [Kaufmann et al., 2012]. This bandit problem assumes stationary reward distributions, yet it is often unrealistic to model the real world as stationary. In this paper we derive and evaluate algorithms using Thompson Sampling for a switching multi-armed bandit problem. We propose a Thompson Sampling strategy equipped with a Bayesian change-point mechanism to tackle this problem. We develop algorithms for a variety of cases with constant switching rate: when switching occurs all arms change (Global Switching), when switching occurs independently for each arm (Per-Arm Switching), when the switching rate is known, and when it must be inferred from data. This leads to a family of algorithms we collectively term Change-Point Thompson Sampling (CTS). We show empirical results for the algorithms in 4 artificial environments and 2 derived from real-world data (news click-through [Yahoo!, 2011] and foreign exchange data [Dukascopy, 2012]), comparing them to some other bandit algorithms. On real-world data, CTS is the most effective.
[ { "created": "Fri, 15 Feb 2013 10:48:57 GMT", "version": "v1" } ]
2013-02-18
[ [ "Mellor", "Joseph", "" ], [ "Shapiro", "Jonathan", "" ] ]
Thompson Sampling has recently been shown to be optimal in the Bernoulli multi-armed bandit setting [Kaufmann et al., 2012]. This bandit problem assumes stationary reward distributions, yet it is often unrealistic to model the real world as stationary. In this paper we derive and evaluate algorithms using Thompson Sampling for a switching multi-armed bandit problem. We propose a Thompson Sampling strategy equipped with a Bayesian change-point mechanism to tackle this problem. We develop algorithms for a variety of cases with constant switching rate: when switching occurs all arms change (Global Switching), when switching occurs independently for each arm (Per-Arm Switching), when the switching rate is known, and when it must be inferred from data. This leads to a family of algorithms we collectively term Change-Point Thompson Sampling (CTS). We show empirical results for the algorithms in 4 artificial environments and 2 derived from real-world data (news click-through [Yahoo!, 2011] and foreign exchange data [Dukascopy, 2012]), comparing them to some other bandit algorithms. On real-world data, CTS is the most effective.
2004.05773
Isabelle Augenstein
Pepa Atanasova and Jakob Grue Simonsen and Christina Lioma and Isabelle Augenstein
Generating Fact Checking Explanations
In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics (ACL 2020)
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process -- generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.
[ { "created": "Mon, 13 Apr 2020 05:23:25 GMT", "version": "v1" } ]
2020-04-14
[ [ "Atanasova", "Pepa", "" ], [ "Simonsen", "Jakob Grue", "" ], [ "Lioma", "Christina", "" ], [ "Augenstein", "Isabelle", "" ] ]
Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process -- generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.
1412.0751
John Wieting
John Wieting
Tiered Clustering to Improve Lexical Entailment
Paper for course project for Advanced NLP Spring 2013. 8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/3.0/
Many tasks in Natural Language Processing involve recognizing lexical entailment. Two different approaches to this problem have been proposed recently that are quite different from each other. The first is an asymmetric similarity measure designed to give high scores when the contexts of the narrower term in the entailment are a subset of those of the broader term. The second is a supervised approach where a classifier is learned to predict entailment given a concatenated latent vector representation of the word. Both of these approaches are vector space models that use a single context vector as a representation of the word. In this work, I study the effects of clustering words into senses and using these multiple context vectors to infer entailment using extensions of these two algorithms. I find that this approach offers some improvement to these entailment algorithms.
[ { "created": "Tue, 2 Dec 2014 00:53:35 GMT", "version": "v1" } ]
2014-12-03
[ [ "Wieting", "John", "" ] ]
Many tasks in Natural Language Processing involve recognizing lexical entailment. Two different approaches to this problem have been proposed recently that are quite different from each other. The first is an asymmetric similarity measure designed to give high scores when the contexts of the narrower term in the entailment are a subset of those of the broader term. The second is a supervised approach where a classifier is learned to predict entailment given a concatenated latent vector representation of the word. Both of these approaches are vector space models that use a single context vector as a representation of the word. In this work, I study the effects of clustering words into senses and using these multiple context vectors to infer entailment using extensions of these two algorithms. I find that this approach offers some improvement to these entailment algorithms.
2310.03668
Iker Garc\'ia-Ferrero
Oscar Sainz, Iker Garc\'ia-Ferrero, Rodrigo Agerri, Oier Lopez de Lacalle, German Rigau, Eneko Agirre
GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction
The Twelfth International Conference on Learning Representations - ICLR 2024
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE tasks are characterized by complex annotation guidelines that describe the task and give examples to humans. Previous attempts to leverage such information have failed, even with the largest models, as they are not able to follow the guidelines out of the box. In this paper, we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines. Comprehensive evaluation empirically demonstrates that GoLLIE is able to generalize to and follow unseen guidelines, outperforming previous attempts at zero-shot information extraction. The ablation study shows that detailed guidelines are key for good results.
[ { "created": "Thu, 5 Oct 2023 16:43:13 GMT", "version": "v1" }, { "created": "Fri, 6 Oct 2023 17:41:15 GMT", "version": "v2" }, { "created": "Mon, 11 Dec 2023 08:24:40 GMT", "version": "v3" }, { "created": "Wed, 21 Feb 2024 15:51:58 GMT", "version": "v4" }, { "created": "Wed, 6 Mar 2024 16:38:03 GMT", "version": "v5" } ]
2024-03-07
[ [ "Sainz", "Oscar", "" ], [ "García-Ferrero", "Iker", "" ], [ "Agerri", "Rodrigo", "" ], [ "de Lacalle", "Oier Lopez", "" ], [ "Rigau", "German", "" ], [ "Agirre", "Eneko", "" ] ]
Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE tasks are characterized by complex annotation guidelines that describe the task and give examples to humans. Previous attempts to leverage such information have failed, even with the largest models, as they are not able to follow the guidelines out of the box. In this paper, we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines. Comprehensive evaluation empirically demonstrates that GoLLIE is able to generalize to and follow unseen guidelines, outperforming previous attempts at zero-shot information extraction. The ablation study shows that detailed guidelines are key for good results.
2204.02684
Xinyue Huo
Xinyue Huo, Lingxi Xie, Hengtong Hu, Wengang Zhou, Houqiang Li, Qi Tian
Domain-Agnostic Prior for Transfer Semantic Segmentation
Accepted by CVPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community. The key difficulty lies in defining a common property between the source and target domains so that the source-domain features can align with the target-domain semantics. In this paper, we present a simple and effective mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP) that constrains the features extracted from source and target domains to align with a domain-agnostic space. In practice, this is easily implemented as an extra loss term that incurs little extra cost. In the standard evaluation protocol of transferring synthesized data to real data, we validate the effectiveness of different types of DAP, especially that borrowed from a text embedding model that shows favorable performance beyond the state-of-the-art UDA approaches in terms of segmentation accuracy. Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
[ { "created": "Wed, 6 Apr 2022 09:13:25 GMT", "version": "v1" }, { "created": "Wed, 20 Apr 2022 07:53:49 GMT", "version": "v2" } ]
2022-04-21
[ [ "Huo", "Xinyue", "" ], [ "Xie", "Lingxi", "" ], [ "Hu", "Hengtong", "" ], [ "Zhou", "Wengang", "" ], [ "Li", "Houqiang", "" ], [ "Tian", "Qi", "" ] ]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community. The key difficulty lies in defining a common property between the source and target domains so that the source-domain features can align with the target-domain semantics. In this paper, we present a simple and effective mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP) that constrains the features extracted from source and target domains to align with a domain-agnostic space. In practice, this is easily implemented as an extra loss term that incurs little extra cost. In the standard evaluation protocol of transferring synthesized data to real data, we validate the effectiveness of different types of DAP, especially that borrowed from a text embedding model that shows favorable performance beyond the state-of-the-art UDA approaches in terms of segmentation accuracy. Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
2310.04192
Daniel Weber
Daniel Weber, Fabian Thomas, Lukas Gerlach, Ruiyi Zhang, Michael Schwarz
Reviving Meltdown 3a
published at ESORICS 2023
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the initial discovery of Meltdown and Spectre in 2017, different variants of these attacks have been discovered. One often overlooked variant is Meltdown 3a, also known as Meltdown-CPL-REG. Even though Meltdown-CPL-REG was initially discovered in 2018, the available information regarding the vulnerability is still sparse. In this paper, we analyze Meltdown-CPL-REG on 19 different CPUs from different vendors using an automated tool. We observe that the impact is more diverse than documented and differs from CPU to CPU. Surprisingly, while the newest Intel CPUs do not seem affected by Meltdown-CPL-REG, the newest available AMD CPUs (Zen3+) are still affected by the vulnerability. Furthermore, given our attack primitive CounterLeak, we show that besides up-to-date patches, Meltdown-CPL-REG can still be exploited as we reenable performance-counter-based attacks on cryptographic algorithms, break KASLR, and mount Spectre attacks. Although Meltdown-CPL-REG is not as powerful as other transient-execution attacks, its attack surface should not be underestimated.
[ { "created": "Fri, 6 Oct 2023 12:11:46 GMT", "version": "v1" } ]
2023-10-09
[ [ "Weber", "Daniel", "" ], [ "Thomas", "Fabian", "" ], [ "Gerlach", "Lukas", "" ], [ "Zhang", "Ruiyi", "" ], [ "Schwarz", "Michael", "" ] ]
Since the initial discovery of Meltdown and Spectre in 2017, different variants of these attacks have been discovered. One often overlooked variant is Meltdown 3a, also known as Meltdown-CPL-REG. Even though Meltdown-CPL-REG was initially discovered in 2018, the available information regarding the vulnerability is still sparse. In this paper, we analyze Meltdown-CPL-REG on 19 different CPUs from different vendors using an automated tool. We observe that the impact is more diverse than documented and differs from CPU to CPU. Surprisingly, while the newest Intel CPUs do not seem affected by Meltdown-CPL-REG, the newest available AMD CPUs (Zen3+) are still affected by the vulnerability. Furthermore, given our attack primitive CounterLeak, we show that besides up-to-date patches, Meltdown-CPL-REG can still be exploited as we reenable performance-counter-based attacks on cryptographic algorithms, break KASLR, and mount Spectre attacks. Although Meltdown-CPL-REG is not as powerful as other transient-execution attacks, its attack surface should not be underestimated.
2103.11683
Qi Shen
Qi Shen, Shijun Wu, Yanzhen Zou, Bing Xie
Comprehensive Integration of API Usage Patterns
11 pages, Accepted to the 29th IEEE/ACM International Conference on Program Comprehension
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, developers often reuse existing APIs to implement their programming tasks. A lot of API usage patterns are mined to help developers learn API usage rules. However, there are still many missing variables to be synthesized when developers integrate the patterns into their programming context. To deal with this issue, we propose a comprehensive approach to integrate API usage patterns in this paper. We first perform an empirical study by analyzing how API usage patterns are integrated in real-world projects. We find that the expressions for variable synthesis are often non-trivial and can be divided into 5 syntax types. Based on this observation, we propose an approach to help developers interactively complete API usage patterns. Compared to the existing code completion techniques, our approach can recommend infrequent expressions accompanied with their real-world usage examples according to the user intent. The evaluation shows that our approach could assist users in integrating APIs more efficiently and completing the programming tasks faster than existing works.
[ { "created": "Mon, 22 Mar 2021 09:24:43 GMT", "version": "v1" } ]
2021-03-23
[ [ "Shen", "Qi", "" ], [ "Wu", "Shijun", "" ], [ "Zou", "Yanzhen", "" ], [ "Xie", "Bing", "" ] ]
Nowadays, developers often reuse existing APIs to implement their programming tasks. A lot of API usage patterns are mined to help developers learn API usage rules. However, there are still many missing variables to be synthesized when developers integrate the patterns into their programming context. To deal with this issue, we propose a comprehensive approach to integrate API usage patterns in this paper. We first perform an empirical study by analyzing how API usage patterns are integrated in real-world projects. We find that the expressions for variable synthesis are often non-trivial and can be divided into 5 syntax types. Based on this observation, we propose an approach to help developers interactively complete API usage patterns. Compared to the existing code completion techniques, our approach can recommend infrequent expressions accompanied with their real-world usage examples according to the user intent. The evaluation shows that our approach could assist users in integrating APIs more efficiently and completing the programming tasks faster than existing works.
2402.13597
Wang Liu
Wang Liu, Cunhua Pan, Hong Ren, Jiangzhou Wang, Robert Schober, and Lajos Hanzo
Near-Field Multiuser Beam-Training for Extremely Large-Scale MIMO Systems
submitted to IEEE
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extremely large-scale multiple-input multiple-output (XL-MIMO) systems are capable of improving spectral efficiency by employing far more antennas than conventional massive MIMO at the base station (BS). However, beam training in multiuser XL-MIMO systems is challenging. To tackle these issues, we conceive a three-phase graph neural network (GNN)-based beam training scheme for multiuser XL-MIMO systems. In the first phase, only far-field wide beams have to be tested for each user and the GNN is utilized to map the beamforming gain information of the far-field wide beams to the optimal near-field beam for each user. In addition, the proposed GNN-based scheme can exploit the position-correlation between adjacent users for further improvement of the accuracy of beam training. In the second phase, a beam allocation scheme based on the probability vectors produced at the outputs of GNNs is proposed to address the above beam-direction conflicts between users. In the third phase, the hybrid TBF is designed for further reducing the inter-user interference. Our simulation results show that the proposed scheme improves the beam training performance of the benchmarks. Moreover, the performance of the proposed beam training scheme approaches that of an exhaustive search, despite requiring only about 7% of the pilot overhead.
[ { "created": "Wed, 21 Feb 2024 07:59:44 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 00:34:49 GMT", "version": "v2" } ]
2024-03-27
[ [ "Liu", "Wang", "" ], [ "Pan", "Cunhua", "" ], [ "Ren", "Hong", "" ], [ "Wang", "Jiangzhou", "" ], [ "Schober", "Robert", "" ], [ "Hanzo", "Lajos", "" ] ]
Extremely large-scale multiple-input multiple-output (XL-MIMO) systems are capable of improving spectral efficiency by employing far more antennas than conventional massive MIMO at the base station (BS). However, beam training in multiuser XL-MIMO systems is challenging. To tackle these issues, we conceive a three-phase graph neural network (GNN)-based beam training scheme for multiuser XL-MIMO systems. In the first phase, only far-field wide beams have to be tested for each user and the GNN is utilized to map the beamforming gain information of the far-field wide beams to the optimal near-field beam for each user. In addition, the proposed GNN-based scheme can exploit the position-correlation between adjacent users for further improvement of the accuracy of beam training. In the second phase, a beam allocation scheme based on the probability vectors produced at the outputs of GNNs is proposed to address the above beam-direction conflicts between users. In the third phase, the hybrid TBF is designed for further reducing the inter-user interference. Our simulation results show that the proposed scheme improves the beam training performance of the benchmarks. Moreover, the performance of the proposed beam training scheme approaches that of an exhaustive search, despite requiring only about 7% of the pilot overhead.
2003.03220
Ozan \c{C}atal
Ozan \c{C}atal, Samuel Wauthier, Tim Verbelen, Cedric De Boom, Bart Dhoedt
Deep Active Inference for Autonomous Robot Navigation
workshop paper at BAICS at ICLR 2020
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Active inference is a theory that underpins the way biological agents perceive and act in the real world. At its core, active inference is based on the principle that the brain is an approximate Bayesian inference engine, building an internal generative model to drive agents towards minimal surprise. Although this theory has shown interesting results with grounding in cognitive neuroscience, its application remains limited to simulations with small, predefined sensor and state spaces. In this paper, we leverage recent advances in deep learning to build more complex generative models that can work without a predefined state space. State representations are learned end-to-end from real-world, high-dimensional sensory data such as camera frames. We also show that these generative models can be used to engage in active inference. To the best of our knowledge this is the first application of deep active inference for a real-world robot navigation task.
[ { "created": "Fri, 6 Mar 2020 14:01:01 GMT", "version": "v1" } ]
2020-03-09
[ [ "Çatal", "Ozan", "" ], [ "Wauthier", "Samuel", "" ], [ "Verbelen", "Tim", "" ], [ "De Boom", "Cedric", "" ], [ "Dhoedt", "Bart", "" ] ]
Active inference is a theory that underpins the way biological agents perceive and act in the real world. At its core, active inference is based on the principle that the brain is an approximate Bayesian inference engine, building an internal generative model to drive agents towards minimal surprise. Although this theory has shown interesting results with grounding in cognitive neuroscience, its application remains limited to simulations with small, predefined sensor and state spaces. In this paper, we leverage recent advances in deep learning to build more complex generative models that can work without a predefined state space. State representations are learned end-to-end from real-world, high-dimensional sensory data such as camera frames. We also show that these generative models can be used to engage in active inference. To the best of our knowledge this is the first application of deep active inference for a real-world robot navigation task.
2403.11051
Akrati Saxena
Mariana Macedo, Akrati Saxena
Gender differences in online communication: A case study of Soccer
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social media and digital platforms allow us to express our opinions freely and easily to a vast number of people. In this study, we examine whether there are gender-based differences in how communication happens via Twitter in regard to soccer. Soccer is one of the most popular sports, and therefore, on social media, it engages a diverse audience regardless of their technical knowledge. We collected Twitter data for three months (March-June) for English and Portuguese that contains 9.5 million Tweets related to soccer, and only 18.38% of the tweets were identified as belonging to women, highlighting a possible gender gap already in the number of people who participated actively in this topic. We then conduct a fine-grained text-level and network-level analysis to identify the gender differences that might exist while communicating on Twitter. Our results show that women express their emotions more intensely than men, regardless of the differences in volume. The network generated from Portuguese has lower homophily than English. However, this difference in homophily does not impact how females express their emotions and sentiments, suggesting that these aspects are inherent norms or characteristics of genders. Our study unveils more gaps through qualitative and quantitative analyses, highlighting the importance of examining and reporting gender gaps in online communication to create a more inclusive space where people can openly share their opinions.
[ { "created": "Sun, 17 Mar 2024 01:26:38 GMT", "version": "v1" } ]
2024-03-19
[ [ "Macedo", "Mariana", "" ], [ "Saxena", "Akrati", "" ] ]
Social media and digital platforms allow us to express our opinions freely and easily to a vast number of people. In this study, we examine whether there are gender-based differences in how communication happens via Twitter in regard to soccer. Soccer is one of the most popular sports, and therefore, on social media, it engages a diverse audience regardless of their technical knowledge. We collected Twitter data for three months (March-June) for English and Portuguese that contains 9.5 million Tweets related to soccer, and only 18.38% of the tweets were identified as belonging to women, highlighting a possible gender gap already in the number of people who participated actively in this topic. We then conduct a fine-grained text-level and network-level analysis to identify the gender differences that might exist while communicating on Twitter. Our results show that women express their emotions more intensely than men, regardless of the differences in volume. The network generated from Portuguese has lower homophily than English. However, this difference in homophily does not impact how females express their emotions and sentiments, suggesting that these aspects are inherent norms or characteristics of genders. Our study unveils more gaps through qualitative and quantitative analyses, highlighting the importance of examining and reporting gender gaps in online communication to create a more inclusive space where people can openly share their opinions.
1906.10002
Daniel Loureiro
Daniel Loureiro and Alipio Jorge
LIAAD at SemDeep-5 Challenge: Word-in-Context (WiC)
Accepted at the SemDeep-5 Workshop in IJCAI 2019. Code and data: https://github.com/danlou/LMMS
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the LIAAD system that was ranked second place in the Word-in-Context challenge (WiC) featured in SemDeep-5. Our solution is based on a novel system for Word Sense Disambiguation (WSD) using contextual embeddings and full-inventory sense embeddings. We adapt this WSD system, in a straightforward manner, for the present task of detecting whether the same sense occurs in a pair of sentences. Additionally, we show that our solution is able to achieve competitive performance even without using the provided training or development sets, mitigating potential concerns related to task overfitting.
[ { "created": "Mon, 24 Jun 2019 14:49:05 GMT", "version": "v1" } ]
2019-06-25
[ [ "Loureiro", "Daniel", "" ], [ "Jorge", "Alipio", "" ] ]
This paper describes the LIAAD system that was ranked second place in the Word-in-Context challenge (WiC) featured in SemDeep-5. Our solution is based on a novel system for Word Sense Disambiguation (WSD) using contextual embeddings and full-inventory sense embeddings. We adapt this WSD system, in a straightforward manner, for the present task of detecting whether the same sense occurs in a pair of sentences. Additionally, we show that our solution is able to achieve competitive performance even without using the provided training or development sets, mitigating potential concerns related to task overfitting.
2309.12673
Jerry Yao-Chieh Hu
Jerry Yao-Chieh Hu, Donglin Yang, Dennis Wu, Chenwei Xu, Bo-Yu Chen, Han Liu
On Sparse Modern Hopfield Model
37 pages, accepted at NeurIPS 2023. [v2] updated to match with camera-ready version. Code is available at https://github.com/MAGICS-LAB/SparseModernHopfield
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the sparse modern Hopfield model as a sparse extension of the modern Hopfield model. Like its dense counterpart, the sparse modern Hopfield model equips a memory-retrieval dynamics whose one-step approximation corresponds to the sparse attention mechanism. Theoretically, our key contribution is a principled derivation of a closed-form sparse Hopfield energy using the convex conjugate of the sparse entropic regularizer. Building upon this, we derive the sparse memory retrieval dynamics from the sparse energy function and show its one-step approximation is equivalent to the sparse-structured attention. Importantly, we provide a sparsity-dependent memory retrieval error bound which is provably tighter than its dense analog. The conditions for the benefits of sparsity to arise are therefore identified and discussed. In addition, we show that the sparse modern Hopfield model maintains the robust theoretical properties of its dense counterpart, including rapid fixed point convergence and exponential memory capacity. Empirically, we use both synthetic and real-world datasets to demonstrate that the sparse Hopfield model outperforms its dense counterpart in many situations.
[ { "created": "Fri, 22 Sep 2023 07:32:45 GMT", "version": "v1" }, { "created": "Wed, 29 Nov 2023 22:45:39 GMT", "version": "v2" } ]
2023-12-01
[ [ "Hu", "Jerry Yao-Chieh", "" ], [ "Yang", "Donglin", "" ], [ "Wu", "Dennis", "" ], [ "Xu", "Chenwei", "" ], [ "Chen", "Bo-Yu", "" ], [ "Liu", "Han", "" ] ]
We introduce the sparse modern Hopfield model as a sparse extension of the modern Hopfield model. Like its dense counterpart, the sparse modern Hopfield model equips a memory-retrieval dynamics whose one-step approximation corresponds to the sparse attention mechanism. Theoretically, our key contribution is a principled derivation of a closed-form sparse Hopfield energy using the convex conjugate of the sparse entropic regularizer. Building upon this, we derive the sparse memory retrieval dynamics from the sparse energy function and show its one-step approximation is equivalent to the sparse-structured attention. Importantly, we provide a sparsity-dependent memory retrieval error bound which is provably tighter than its dense analog. The conditions for the benefits of sparsity to arise are therefore identified and discussed. In addition, we show that the sparse modern Hopfield model maintains the robust theoretical properties of its dense counterpart, including rapid fixed point convergence and exponential memory capacity. Empirically, we use both synthetic and real-world datasets to demonstrate that the sparse Hopfield model outperforms its dense counterpart in many situations.
2203.00386
Zihao Wang
Zihao Wang, Wei Liu, Qian He, Xinglong Wu, Zili Yi
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training a text-to-image generator in the general domain (e.g., Dall.e, CogView) requires huge amounts of paired text-image data, which is too expensive to collect. In this paper, we propose a self-supervised scheme named CLIP-GEN for general text-to-image generation with the language-image priors extracted with a pre-trained CLIP model. In our approach, we only require a set of unlabeled images in the general domain to train a text-to-image generator. Specifically, given an image without text labels, we first extract the embedding of the image in the unified language-vision embedding space with the image encoder of CLIP. Next, we convert the image into a sequence of discrete tokens in the VQGAN codebook space (the VQGAN model can be trained with the unlabeled image dataset in hand). Finally, we train an autoregressive transformer that maps the image tokens from its unified language-vision representation. Once trained, the transformer can generate coherent image tokens based on the text embedding extracted from the text encoder of CLIP upon an input text. Such a strategy enables us to train a strong and general text-to-image generator with a large text-free image dataset such as ImageNet. Qualitative and quantitative evaluations verify that our method significantly outperforms optimization-based text-to-image methods in terms of image quality while not compromising the text-image matching. Our method can even achieve comparable performance as flagship supervised models like CogView.
[ { "created": "Tue, 1 Mar 2022 12:11:32 GMT", "version": "v1" } ]
2022-03-02
[ [ "Wang", "Zihao", "" ], [ "Liu", "Wei", "" ], [ "He", "Qian", "" ], [ "Wu", "Xinglong", "" ], [ "Yi", "Zili", "" ] ]
Training a text-to-image generator in the general domain (e.g., Dall.e, CogView) requires huge amounts of paired text-image data, which is too expensive to collect. In this paper, we propose a self-supervised scheme named CLIP-GEN for general text-to-image generation with the language-image priors extracted with a pre-trained CLIP model. In our approach, we only require a set of unlabeled images in the general domain to train a text-to-image generator. Specifically, given an image without text labels, we first extract the embedding of the image in the unified language-vision embedding space with the image encoder of CLIP. Next, we convert the image into a sequence of discrete tokens in the VQGAN codebook space (the VQGAN model can be trained with the unlabeled image dataset in hand). Finally, we train an autoregressive transformer that maps the image tokens from its unified language-vision representation. Once trained, the transformer can generate coherent image tokens based on the text embedding extracted from the text encoder of CLIP upon an input text. Such a strategy enables us to train a strong and general text-to-image generator with a large text-free image dataset such as ImageNet. Qualitative and quantitative evaluations verify that our method significantly outperforms optimization-based text-to-image methods in terms of image quality while not compromising the text-image matching. Our method can even achieve comparable performance as flagship supervised models like CogView.
1707.03886
Amit Dhurandhar
Amit Dhurandhar, Vijay Iyengar, Ronny Luss and Karthikeyan Shanmugam
A Formal Framework to Characterize Interpretability of Procedures
presented at 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017), Sydney, NSW, Australia
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework portraying its general applicability.
[ { "created": "Wed, 12 Jul 2017 19:42:08 GMT", "version": "v1" } ]
2017-07-14
[ [ "Dhurandhar", "Amit", "" ], [ "Iyengar", "Vijay", "" ], [ "Luss", "Ronny", "" ], [ "Shanmugam", "Karthikeyan", "" ] ]
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods in our framework portraying its general applicability.
1801.01612
Jaouhar Fattahi
Jaouhar Fattahi and Mohamed Mejri
Secrecy by Witness-Functions under Equational Theories
http://ieeexplore.ieee.org/document/7301205/
7th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 2015
10.1109/ECAI.2015.7301205
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we use the witness-functions to analyze cryptographic protocols for secrecy under nonempty equational theories. The witness-functions are safe metrics used to compute security. An analysis with a witness-function consists in making sure that the security of every atomic message does not decrease during its lifecycle in the protocol. The analysis gets more difficult under nonempty equational theories. Indeed, the intruder can take advantage of the algebraic properties of the cryptographic primitives to derive secrets. These properties arise from the use of mathematical functions, such as multiplication, addition, exclusive-or or modular exponentiation in the cryptosystems and the protocols. Here, we show how to use the witness-functions under nonempty equational theories and we run an analysis on the Needham-Schroeder-Lowe protocol under the cipher homomorphism. This analysis reveals that although this protocol is proved secure under the perfect encryption assumption, its security collapses under the homomorphic primitives. We show how the witness-functions help to illustrate an attack scenario on it and we propose an amended version to fix it.
[ { "created": "Fri, 5 Jan 2018 02:21:14 GMT", "version": "v1" } ]
2018-01-08
[ [ "Fattahi", "Jaouhar", "" ], [ "Mejri", "Mohamed", "" ] ]
In this paper, we use the witness-functions to analyze cryptographic protocols for secrecy under nonempty equational theories. The witness-functions are safe metrics used to compute security. An analysis with a witness-function consists in making sure that the security of every atomic message does not decrease during its lifecycle in the protocol. The analysis gets more difficult under nonempty equational theories. Indeed, the intruder can take advantage of the algebraic properties of the cryptographic primitives to derive secrets. These properties arise from the use of mathematical functions, such as multiplication, addition, exclusive-or or modular exponentiation in the cryptosystems and the protocols. Here, we show how to use the witness-functions under nonempty equational theories and we run an analysis on the Needham-Schroeder-Lowe protocol under the cipher homomorphism. This analysis reveals that although this protocol is proved secure under the perfect encryption assumption, its security collapses under the homomorphic primitives. We show how the witness-functions help to illustrate an attack scenario on it and we propose an amended version to fix it.
1402.7015
Fabian Pedregosa
Fabian Pedregosa (INRIA Saclay - Ile de France, INRIA Paris - Rocquencourt), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO), Philippe Ciuciu (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand Thirion (INRIA Saclay - Ile de France, NEUROSPIN), Alexandre Gramfort (LTCI)
Data-driven HRF estimation for encoding and decoding models
appears in NeuroImage (2015)
null
10.1016/j.neuroimage.2014.09.060
null
cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the common usage of a canonical, data-independent, hemodynamic response function (HRF), it is known that the shape of the HRF varies across brain regions and subjects. This suggests that a data-driven estimation of this function could lead to more statistical power when modeling BOLD fMRI data. However, unconstrained estimation of the HRF can yield highly unstable results when the number of free parameters is large. We develop a method for the joint estimation of activation and HRF using a rank constraint causing the estimated HRF to be equal across events/conditions, yet permitting it to be different across voxels. Model estimation leads to an optimization problem that we propose to solve with an efficient quasi-Newton method exploiting fast gradient computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be extended to the setting of GLM with separate designs which has been shown to improve decoding accuracy in brain activity decoding experiments. We compare 10 different HRF modeling methods in terms of encoding and decoding score in two different datasets. Our results show that the R1-GLM model significantly outperforms competing methods in both encoding and decoding settings, positioning it as an attractive method both from the points of view of accuracy and computational efficiency.
[ { "created": "Thu, 27 Feb 2014 18:50:58 GMT", "version": "v1" }, { "created": "Sun, 6 Apr 2014 06:11:17 GMT", "version": "v2" }, { "created": "Tue, 15 Jul 2014 11:14:00 GMT", "version": "v3" }, { "created": "Mon, 6 Oct 2014 16:39:55 GMT", "version": "v4" }, { "created": "Fri, 31 Oct 2014 13:47:01 GMT", "version": "v5" }, { "created": "Fri, 7 Nov 2014 11:27:19 GMT", "version": "v6" } ]
2014-11-10
[ [ "Pedregosa", "Fabian", "", "INRIA Saclay - Ile de France, INRIA Paris -\n Rocquencourt" ], [ "Eickenberg", "Michael", "", "INRIA Saclay - Ile de France, LNAO" ], [ "Ciuciu", "Philippe", "", "INRIA Saclay - Ile de France, NEUROSPIN" ], [ "Thirion", "Bertrand", "", "INRIA Saclay - Ile de France, NEUROSPIN" ], [ "Gramfort", "Alexandre", "", "LTCI" ] ]
Despite the common usage of a canonical, data-independent, hemodynamic response function (HRF), it is known that the shape of the HRF varies across brain regions and subjects. This suggests that a data-driven estimation of this function could lead to more statistical power when modeling BOLD fMRI data. However, unconstrained estimation of the HRF can yield highly unstable results when the number of free parameters is large. We develop a method for the joint estimation of activation and HRF using a rank constraint causing the estimated HRF to be equal across events/conditions, yet permitting it to be different across voxels. Model estimation leads to an optimization problem that we propose to solve with an efficient quasi-Newton method exploiting fast gradient computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be extended to the setting of GLM with separate designs which has been shown to improve decoding accuracy in brain activity decoding experiments. We compare 10 different HRF modeling methods in terms of encoding and decoding score in two different datasets. Our results show that the R1-GLM model significantly outperforms competing methods in both encoding and decoding settings, positioning it as an attractive method both from the points of view of accuracy and computational efficiency.
2312.12891
Steven James
William Hill, Ireton Liu, Anita De Mello Koch, Damion Harvey, Nishanth Kumar, George Konidaris, Steven James
MinePlanner: A Benchmark for Long-Horizon Planning in Large Minecraft Worlds
Accepted to the 6th ICAPS Workshop on the International Planning Competition (WIPC 2024)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new benchmark for planning tasks based on the Minecraft game. Our benchmark contains 45 tasks overall, but also provides support for creating both propositional and numeric instances of new Minecraft tasks automatically. We benchmark numeric and propositional planning systems on these tasks, with results demonstrating that state-of-the-art planners are currently incapable of dealing with many of the challenges advanced by our new benchmark, such as scaling to instances with thousands of objects. Based on these results, we identify areas of improvement for future planners. Our framework is made available at https://github.com/IretonLiu/mine-pddl/.
[ { "created": "Wed, 20 Dec 2023 10:04:39 GMT", "version": "v1" }, { "created": "Sun, 28 Apr 2024 11:22:36 GMT", "version": "v2" } ]
2024-04-30
[ [ "Hill", "William", "" ], [ "Liu", "Ireton", "" ], [ "Koch", "Anita De Mello", "" ], [ "Harvey", "Damion", "" ], [ "Kumar", "Nishanth", "" ], [ "Konidaris", "George", "" ], [ "James", "Steven", "" ] ]
We propose a new benchmark for planning tasks based on the Minecraft game. Our benchmark contains 45 tasks overall, but also provides support for creating both propositional and numeric instances of new Minecraft tasks automatically. We benchmark numeric and propositional planning systems on these tasks, with results demonstrating that state-of-the-art planners are currently incapable of dealing with many of the challenges advanced by our new benchmark, such as scaling to instances with thousands of objects. Based on these results, we identify areas of improvement for future planners. Our framework is made available at https://github.com/IretonLiu/mine-pddl/.
2212.08872
Salman Mohebi
Salman Mohebi, Andrea Zanella and Michele Zorzi
Pilot Reuse in Cell-Free Massive MIMO Systems: A Diverse Clustering Approach
29 pages, 9 figures, submitted to IEEE
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed or Cell-free (CF) massive Multiple-Input, Multiple-Output (mMIMO) has recently been proposed as an answer to the limitations of the current network-centric systems in providing high-rate ubiquitous transmission. The capability of providing a uniform service level makes CF mMIMO a potential technology for beyond-5G and 6G networks. The acquisition of accurate Channel State Information (CSI) is critical for different CF mMIMO operations. Hence, an uplink pilot training phase is used to efficiently estimate transmission channels. The number of available orthogonal pilot signals is limited, and reusing these pilots will increase co-pilot interference. This causes an undesirable effect known as pilot contamination that could reduce the system performance. Hence, a proper pilot reuse strategy is needed to mitigate the effects of pilot contamination. In this paper, we formulate pilot assignment in CF mMIMO as a diverse clustering problem and propose an iterative maxima search scheme to solve it. In this approach, we first form the clusters of User Equipments (UEs) so that the intra-cluster diversity maximizes and then assign the same pilots for all UEs in the same cluster. The numerical results show the proposed techniques' superiority over other methods concerning the achieved uplink and downlink average and per-user data rate.
[ { "created": "Sat, 17 Dec 2022 13:56:49 GMT", "version": "v1" } ]
2022-12-20
[ [ "Mohebi", "Salman", "" ], [ "Zanella", "Andrea", "" ], [ "Zorzi", "Michele", "" ] ]
Distributed or Cell-free (CF) massive Multiple-Input, Multiple-Output (mMIMO) has recently been proposed as an answer to the limitations of current network-centric systems in providing high-rate ubiquitous transmission. The capability of providing a uniform service level makes CF mMIMO a potential technology for beyond-5G and 6G networks. The acquisition of accurate Channel State Information (CSI) is critical for different CF mMIMO operations. Hence, an uplink pilot training phase is used to efficiently estimate transmission channels. The number of available orthogonal pilot signals is limited, and reusing these pilots increases co-pilot interference. This causes an undesirable effect known as pilot contamination that can reduce system performance. Hence, a proper pilot reuse strategy is needed to mitigate the effects of pilot contamination. In this paper, we formulate pilot assignment in CF mMIMO as a diverse clustering problem and propose an iterative maxima search scheme to solve it. In this approach, we first form clusters of User Equipments (UEs) so that the intra-cluster diversity is maximized, and then assign the same pilot to all UEs in the same cluster. The numerical results show the proposed technique's superiority over other methods in terms of the achieved uplink and downlink average and per-user data rates.
1807.03165
Jeremy Kepner
Jeremy Kepner, Vijay Gadepally, Hayden Jananthan, Lauren Milechin, Sid Samsi
Sparse Deep Neural Network Exact Solutions
8 pages, 10 figures, accepted to IEEE HPEC 2018. arXiv admin note: text overlap with arXiv:1708.02937
null
10.1109/HPEC.2018.8547742
null
cs.LG cs.CV cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) have emerged as key enablers of machine learning. Applying larger DNNs to more diverse applications is an important challenge. The computations performed during DNN training and inference are dominated by operations on the weight matrices describing the DNN. As DNNs incorporate more layers and more neurons per layers, these weight matrices may be required to be sparse because of memory limitations. Sparse DNNs are one possible approach, but the underlying theory is in the early stages of development and presents a number of challenges, including determining the accuracy of inference and selecting nonzero weights for training. Associative array algebra has been developed by the big data community to combine and extend database, matrix, and graph/network concepts for use in large, sparse data problems. Applying this mathematics to DNNs simplifies the formulation of DNN mathematics and reveals that DNNs are linear over oscillating semirings. This work uses associative array DNNs to construct exact solutions and corresponding perturbation models to the rectified linear unit (ReLU) DNN equations that can be used to construct test vectors for sparse DNN implementations over various precisions. These solutions can be used for DNN verification, theoretical explorations of DNN properties, and a starting point for the challenge of sparse training.
[ { "created": "Fri, 6 Jul 2018 00:47:12 GMT", "version": "v1" } ]
2018-12-17
[ [ "Kepner", "Jeremy", "" ], [ "Gadepally", "Vijay", "" ], [ "Jananthan", "Hayden", "" ], [ "Milechin", "Lauren", "" ], [ "Samsi", "Sid", "" ] ]
Deep neural networks (DNNs) have emerged as key enablers of machine learning. Applying larger DNNs to more diverse applications is an important challenge. The computations performed during DNN training and inference are dominated by operations on the weight matrices describing the DNN. As DNNs incorporate more layers and more neurons per layer, these weight matrices may be required to be sparse because of memory limitations. Sparse DNNs are one possible approach, but the underlying theory is in the early stages of development and presents a number of challenges, including determining the accuracy of inference and selecting nonzero weights for training. Associative array algebra has been developed by the big data community to combine and extend database, matrix, and graph/network concepts for use in large, sparse data problems. Applying this mathematics to DNNs simplifies the formulation of DNN mathematics and reveals that DNNs are linear over oscillating semirings. This work uses associative array DNNs to construct exact solutions and corresponding perturbation models to the rectified linear unit (ReLU) DNN equations that can be used to construct test vectors for sparse DNN implementations over various precisions. These solutions can be used for DNN verification, theoretical explorations of DNN properties, and a starting point for the challenge of sparse training.
2402.09164
Ruoyu Chen
Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao
Less is More: Fewer Interpretable Region via Submodular Subset Selection
Accepted to ICLR 2024 (Oral)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misleading the direction of correct attribution, and 2) the model cannot produce good attribution results for samples with wrong predictions. To address the above challenges, this paper re-models the above image attribution problem as a submodular subset selection problem, aiming to enhance model interpretability using fewer regions. To address the lack of attention to local regions, we construct a novel submodular function to discover more accurate small interpretation regions. To enhance the attribution effect for all samples, we also impose four different constraints on the selection of sub-regions, i.e., confidence, effectiveness, consistency, and collaboration scores, to assess the importance of various subsets. Moreover, our theoretical analysis substantiates that the proposed function is in fact submodular. Extensive experiments show that the proposed method outperforms SOTA methods on two face datasets (Celeb-A and VGG-Face2) and one fine-grained dataset (CUB-200-2011). For correctly predicted samples, the proposed method improves the Deletion and Insertion scores with an average of 4.9% and 2.5% gain relative to HSIC-Attribution. For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score respectively. The code is released at https://github.com/RuoyuChen10/SMDL-Attribution.
[ { "created": "Wed, 14 Feb 2024 13:30:02 GMT", "version": "v1" }, { "created": "Thu, 29 Feb 2024 03:29:41 GMT", "version": "v2" } ]
2024-03-01
[ [ "Chen", "Ruoyu", "" ], [ "Zhang", "Hua", "" ], [ "Liang", "Siyuan", "" ], [ "Li", "Jingzhi", "" ], [ "Cao", "Xiaochun", "" ] ]
Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misleading the direction of correct attribution, and 2) the model cannot produce good attribution results for samples with wrong predictions. To address the above challenges, this paper re-models the above image attribution problem as a submodular subset selection problem, aiming to enhance model interpretability using fewer regions. To address the lack of attention to local regions, we construct a novel submodular function to discover more accurate small interpretation regions. To enhance the attribution effect for all samples, we also impose four different constraints on the selection of sub-regions, i.e., confidence, effectiveness, consistency, and collaboration scores, to assess the importance of various subsets. Moreover, our theoretical analysis substantiates that the proposed function is in fact submodular. Extensive experiments show that the proposed method outperforms SOTA methods on two face datasets (Celeb-A and VGG-Face2) and one fine-grained dataset (CUB-200-2011). For correctly predicted samples, the proposed method improves the Deletion and Insertion scores with an average of 4.9% and 2.5% gain relative to HSIC-Attribution. For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score respectively. The code is released at https://github.com/RuoyuChen10/SMDL-Attribution.
2103.07241
Giovani Guizzo
Giovani Guizzo, Federica Sarro, Jens Krinke, Silvia Regina Vergilio
Sentinel: A Hyper-Heuristic for the Generation of Mutant Reduction Strategies
in IEEE Transactions on Software Engineering
null
10.1109/TSE.2020.3002496
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutation testing is an effective approach to evaluate and strengthen software test suites, but its adoption is currently limited by the mutants' execution computational cost. Several strategies have been proposed to reduce this cost (a.k.a. mutation cost reduction strategies), however none of them has proven to be effective for all scenarios since they often need an ad-hoc manual selection and configuration depending on the software under test (SUT). In this paper, we propose a novel multi-objective evolutionary hyper-heuristic approach, dubbed Sentinel, to automate the generation of optimal cost reduction strategies for every new SUT. We evaluate Sentinel by carrying out a thorough empirical study involving 40 releases of 10 open-source real-world software systems and both baseline and state-of-the-art strategies as a benchmark. We execute a total of 4,800 experiments, and evaluate their results with both quality indicators and statistical significance tests, following the most recent best practice in the literature. The results show that strategies generated by Sentinel outperform the baseline strategies in 95% of the cases always with large effect sizes. They also obtain statistically significantly better results than state-of-the-art strategies in 88% of the cases, with large effect sizes for 95% of them. Also, our study reveals that the mutation strategies generated by Sentinel for a given software version can be used without any loss in quality for subsequently developed versions in 95% of the cases. These results show that Sentinel is able to automatically generate mutation strategies that reduce mutation testing cost without affecting its testing effectiveness (i.e. mutation score), thus taking off from the tester's shoulders the burden of manually selecting and configuring strategies for each SUT.
[ { "created": "Fri, 12 Mar 2021 12:38:51 GMT", "version": "v1" } ]
2021-03-15
[ [ "Guizzo", "Giovani", "" ], [ "Sarro", "Federica", "" ], [ "Krinke", "Jens", "" ], [ "Vergilio", "Silvia Regina", "" ] ]
Mutation testing is an effective approach to evaluate and strengthen software test suites, but its adoption is currently limited by the computational cost of executing the mutants. Several strategies have been proposed to reduce this cost (a.k.a. mutation cost reduction strategies); however, none of them has proven to be effective for all scenarios, since they often need ad-hoc manual selection and configuration depending on the software under test (SUT). In this paper, we propose a novel multi-objective evolutionary hyper-heuristic approach, dubbed Sentinel, to automate the generation of optimal cost reduction strategies for every new SUT. We evaluate Sentinel by carrying out a thorough empirical study involving 40 releases of 10 open-source real-world software systems and both baseline and state-of-the-art strategies as a benchmark. We execute a total of 4,800 experiments, and evaluate their results with both quality indicators and statistical significance tests, following the most recent best practice in the literature. The results show that strategies generated by Sentinel outperform the baseline strategies in 95% of the cases, always with large effect sizes. They also obtain statistically significantly better results than state-of-the-art strategies in 88% of the cases, with large effect sizes for 95% of them. Also, our study reveals that the mutation strategies generated by Sentinel for a given software version can be used without any loss in quality for subsequently developed versions in 95% of the cases. These results show that Sentinel is able to automatically generate mutation strategies that reduce mutation testing cost without affecting its testing effectiveness (i.e. mutation score), thus taking the burden of manually selecting and configuring strategies for each SUT off the tester's shoulders.
1605.04344
Ludovic Righetti
Brahayam Ponton, Stefan Schaal, Ludovic Righetti
On the Effects of Measurement Uncertainty in Optimal Control of Contact Interactions
17 pages, 5 figures - this version is the one published at WAFR 2016 to fulfill the open access requirements of the EU commission, please refer to the previous version for the complete derivation of the algorithm
null
10.1007/978-3-030-43089-4_50
null
cs.SY cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic Optimal Control (SOC) typically considers noise only in the process model, i.e. unknown disturbances. However, in many robotic applications involving interaction with the environment, such as locomotion and manipulation, uncertainty also comes from lack of precise knowledge of the world, which is not an actual disturbance. We analyze the effects of also considering noise in the measurement model, by developing a SOC algorithm based on risk-sensitive control, that includes the dynamics of an observer in such a way that the control law explicitly depends on the current measurement uncertainty. In simulation results on a simple 2D manipulator, we have observed that measurement uncertainty leads to low impedance behaviors, a result in contrast with the effects of process noise that creates stiff behaviors. This suggests that taking into account measurement uncertainty could be a potentially very interesting way to approach problems involving uncertain contact interactions.
[ { "created": "Fri, 13 May 2016 22:12:10 GMT", "version": "v1" }, { "created": "Tue, 16 Jan 2018 18:01:59 GMT", "version": "v2" }, { "created": "Sat, 5 Jun 2021 19:48:03 GMT", "version": "v3" } ]
2021-06-08
[ [ "Ponton", "Brahayam", "" ], [ "Schaal", "Stefan", "" ], [ "Righetti", "Ludovic", "" ] ]
Stochastic Optimal Control (SOC) typically considers noise only in the process model, i.e. unknown disturbances. However, in many robotic applications involving interaction with the environment, such as locomotion and manipulation, uncertainty also comes from lack of precise knowledge of the world, which is not an actual disturbance. We analyze the effects of also considering noise in the measurement model, by developing a SOC algorithm based on risk-sensitive control, that includes the dynamics of an observer in such a way that the control law explicitly depends on the current measurement uncertainty. In simulation results on a simple 2D manipulator, we have observed that measurement uncertainty leads to low impedance behaviors, a result in contrast with the effects of process noise that creates stiff behaviors. This suggests that taking into account measurement uncertainty could be a potentially very interesting way to approach problems involving uncertain contact interactions.
1611.01148
Ted Alcorn
John W. Ayers (San Diego State University), Benjamin M. Althouse (Santa Fe Institute), Eric C. Leas (UC San Diego), Ted Alcorn (Everytown for Gun Safety), Mark Dredze (Johns Hopkins University)
Can Big Media Data Revolutionarize Gun Violence Prevention?
Presented at the Data For Good Exchange 2016
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scientific method drives improvements in public health, but a strategy of obstructionism has impeded scientists from gathering even a minimal amount of information to address America's gun violence epidemic. We argue that in spite of a lack of federal investment, large amounts of publicly available data offer scientists an opportunity to measure a range of firearm-related behaviors. Given the diversity of available data - including news coverage, social media, web forums, online advertisements, and Internet searches (to name a few) - there are ample opportunities for scientists to study everything from trends in particular types of gun violence to gun-related behaviors (such as purchases and safety practices) to public understanding of and sentiment towards various gun violence reduction measures. Science has been sidelined in the gun violence debate for too long. Scientists must tap the big media data stream and help resolve this crisis.
[ { "created": "Thu, 3 Nov 2016 19:52:00 GMT", "version": "v1" } ]
2016-11-04
[ [ "Ayers", "John W.", "", "San Diego State University" ], [ "Althouse", "Benjamin M.", "", "Santa Fe Institute" ], [ "Leas", "Eric C.", "", "UC San Diego" ], [ "Alcorn", "Ted", "", "Everytown for\n Gun Safety" ], [ "Dredze", "Mark", "", "Johns Hopkins University" ] ]
The scientific method drives improvements in public health, but a strategy of obstructionism has impeded scientists from gathering even a minimal amount of information to address America's gun violence epidemic. We argue that in spite of a lack of federal investment, large amounts of publicly available data offer scientists an opportunity to measure a range of firearm-related behaviors. Given the diversity of available data - including news coverage, social media, web forums, online advertisements, and Internet searches (to name a few) - there are ample opportunities for scientists to study everything from trends in particular types of gun violence to gun-related behaviors (such as purchases and safety practices) to public understanding of and sentiment towards various gun violence reduction measures. Science has been sidelined in the gun violence debate for too long. Scientists must tap the big media data stream and help resolve this crisis.
1904.00742
Renato Krohling
Giuliano L. Manso, Helder Knidel, Renato A. Krohling, Jose A. Ventura
A smartphone application to detection and classification of coffee leaf miner and coffee leaf rust
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generally, the identification and classification of plant diseases and/or pests are performed by an expert . One of the problems facing coffee farmers in Brazil is crop infestation, particularly by leaf rust Hemileia vastatrix and leaf miner Leucoptera coffeella. The progression of the diseases and or pests occurs spatially and temporarily. So, it is very important to automatically identify the degree of severity. The main goal of this article consists on the development of a method and its i implementation as an App that allow the detection of the foliar damages from images of coffee leaf that are captured using a smartphone, and identify whether it is rust or leaf miner, and in turn the calculation of its severity degree. The method consists of identifying a leaf from the image and separates it from the background with the use of a segmentation algorithm. In the segmentation process, various types of backgrounds for the image using the HSV and YCbCr color spaces are tested. In the segmentation of foliar damages, the Otsu algorithm and the iterative threshold algorithm, in the YCgCr color space, have been used and compared to k-means. Next, features of the segmented foliar damages are calculated. For the classification, artificial neural network trained with extreme learning machine have been used. The results obtained shows the feasibility and effectiveness of the approach to identify and classify foliar damages, and the automatic calculation of the severity. The results obtained are very promising according to experts.
[ { "created": "Tue, 19 Mar 2019 21:45:47 GMT", "version": "v1" } ]
2019-04-02
[ [ "Manso", "Giuliano L.", "" ], [ "Knidel", "Helder", "" ], [ "Krohling", "Renato A.", "" ], [ "Ventura", "Jose A.", "" ] ]
Generally, the identification and classification of plant diseases and/or pests are performed by an expert. One of the problems facing coffee farmers in Brazil is crop infestation, particularly by the leaf rust Hemileia vastatrix and the leaf miner Leucoptera coffeella. The progression of these diseases and/or pests occurs spatially and temporally, so it is very important to automatically identify the degree of severity. The main goal of this article is the development of a method, and its implementation as an app, that allows the detection of foliar damage from images of coffee leaves captured with a smartphone, identifies whether the damage is caused by rust or leaf miner, and, in turn, calculates its degree of severity. The method consists of identifying a leaf in the image and separating it from the background with the use of a segmentation algorithm. In the segmentation process, various types of image backgrounds are tested using the HSV and YCbCr color spaces. In the segmentation of foliar damage, the Otsu algorithm and the iterative threshold algorithm, in the YCgCr color space, have been used and compared to k-means. Next, features of the segmented foliar damage are calculated. For the classification, an artificial neural network trained with an extreme learning machine has been used. The results obtained show the feasibility and effectiveness of the approach in identifying and classifying foliar damage and in automatically calculating severity; according to experts, the results are very promising.
1502.07979
Anastasios Noulas Anastasios Noulas
Anastasios Noulas, Blake Shaw, Renaud Lambiotte, Cecilia Mascolo
Topological Properties and Temporal Dynamics of Place Networks in Urban Environments
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the spatial networks formed by the trajectories of mobile users can be beneficial to applications ranging from epidemiology to local search. Despite the potential for impact in a number of fields, several aspects of human mobility networks remain largely unexplored due to the lack of large-scale data at a fine spatiotemporal resolution. Using a longitudinal dataset from the location-based service Foursquare, we perform an empirical analysis of the topological properties of place networks and note their resemblance to online social networks in terms of heavy-tailed degree distributions, triadic closure mechanisms and the small world property. Unlike social networks however, place networks present a mixture of connectivity trends in terms of assortativity that are surprisingly similar to those of the web graph. We take advantage of additional semantic information to interpret how nodes that take on functional roles such as `travel hub', or `food spot' behave in these networks. Finally, motivated by the large volume of new links appearing in place networks over time, we formulate the classic link prediction problem in this new domain. We propose a novel variant of gravity models that brings together three essential elements of inter-place connectivity in urban environments: network-level interactions, human mobility dynamics, and geographic distance. We evaluate this model and find it outperforms a number of baseline predictors and supervised learning algorithms on a task of predicting new links in a sample of one hundred popular cities.
[ { "created": "Fri, 27 Feb 2015 17:30:16 GMT", "version": "v1" }, { "created": "Tue, 17 Mar 2015 14:03:02 GMT", "version": "v2" } ]
2015-03-18
[ [ "Noulas", "Anastasios", "" ], [ "Shaw", "Blake", "" ], [ "Lambiotte", "Renaud", "" ], [ "Mascolo", "Cecilia", "" ] ]
Understanding the spatial networks formed by the trajectories of mobile users can be beneficial to applications ranging from epidemiology to local search. Despite the potential for impact in a number of fields, several aspects of human mobility networks remain largely unexplored due to the lack of large-scale data at a fine spatiotemporal resolution. Using a longitudinal dataset from the location-based service Foursquare, we perform an empirical analysis of the topological properties of place networks and note their resemblance to online social networks in terms of heavy-tailed degree distributions, triadic closure mechanisms and the small world property. Unlike social networks however, place networks present a mixture of connectivity trends in terms of assortativity that are surprisingly similar to those of the web graph. We take advantage of additional semantic information to interpret how nodes that take on functional roles such as `travel hub', or `food spot' behave in these networks. Finally, motivated by the large volume of new links appearing in place networks over time, we formulate the classic link prediction problem in this new domain. We propose a novel variant of gravity models that brings together three essential elements of inter-place connectivity in urban environments: network-level interactions, human mobility dynamics, and geographic distance. We evaluate this model and find it outperforms a number of baseline predictors and supervised learning algorithms on a task of predicting new links in a sample of one hundred popular cities.