id: string, length 9–10
submitter: string, length 1–64
authors: string, length 4–20.7k
title: string, length 4–246
comments: string, length 1–523
journal-ref: string, length 4–404
doi: string, length 11–153
report-no: string, length 2–254
categories: string, length 5–98
license: string class, 9 values
orig_abstract: string, length 14–3.35k
versions: list, length 1–60
update_date: string, length 10–10
authors_parsed: list, length 1–1.35k
abstract: string, length 11–3.34k

id: 2207.11511
submitter: Ho Man Kwan
authors: Ho Man Kwan and Shenghui Song
title: SSBNet: Improving Visual Recognition Efficiency by Adaptive Sampling
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract:
Downsampling is widely adopted to achieve a good trade-off between accuracy and latency for visual recognition. Unfortunately, the commonly used pooling layers are not learned, and thus cannot preserve important information. As another dimension-reduction method, adaptive sampling weights and processes regions that are relevant to the task, and is thus better able to preserve useful information. However, the use of adaptive sampling has been limited to certain layers. In this paper, we show that using adaptive sampling in the building blocks of a deep neural network can improve its efficiency. In particular, we propose SSBNet, which is built by repeatedly inserting sampling layers into existing networks such as ResNet. Experimental results show that the proposed SSBNet achieves competitive image classification and object detection performance on the ImageNet and COCO datasets. For example, SSB-ResNet-RS-200 achieved 82.6% accuracy on the ImageNet dataset, 0.6% higher than the baseline ResNet-RS-152 at similar complexity. Visualization shows the advantage of SSBNet in allowing different layers to focus on different positions, and ablation studies further validate the advantage of adaptive sampling over uniform methods.
versions: [ { "created": "Sat, 23 Jul 2022 13:01:55 GMT", "version": "v1" } ]
update_date: 2022-07-26
authors_parsed: [ [ "Kwan", "Ho Man", "" ], [ "Song", "Shenghui", "" ] ]
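The pooling-versus-adaptive-sampling contrast drawn in this abstract can be illustrated with a stdlib-only toy sketch. The functions and the hand-set relevance scores below are illustrative inventions, not SSBNet's actual layers; the point is only that uniform pooling dilutes a salient activation while relevance-driven sampling keeps it.

```python
# Toy contrast between uniform average pooling and adaptive,
# relevance-weighted downsampling of a 1-D feature map. This is a
# didactic sketch of the general idea, not the SSBNet architecture.

def avg_pool(x, out_len):
    # uniform pooling: average fixed, equal-width windows
    w = len(x) // out_len
    return [sum(x[i * w:(i + 1) * w]) / w for i in range(out_len)]

def adaptive_sample(x, scores, out_len):
    # adaptive sampling: keep the out_len positions ranked highest by
    # the (here hand-set, in practice learned) relevance scores
    top = sorted(range(len(x)), key=lambda i: scores[i], reverse=True)[:out_len]
    return [x[i] for i in sorted(top)]

features = [0.1, 0.1, 0.1, 0.1, 9.0, 0.1, 0.1, 0.1]
scores   = [0.0, 0.0, 0.0, 0.0, 1.0, 0.2, 0.1, 0.0]  # toy relevance map

pooled  = avg_pool(features, 2)                  # the 9.0 peak is diluted
sampled = adaptive_sample(features, scores, 2)   # the peak survives
```

Running this, `pooled` averages the peak away while `sampled` retains it, which is the information-preservation argument the abstract makes.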

id: 2406.14979
submitter: Zihan Niu
authors: Yuanjie Lyu, Zihan Niu, Zheyong Xie, Chao Zhang, Tong Xu, Yang Wang, Enhong Chen
title: Retrieve-Plan-Generation: An Iterative Planning and Answering Framework for Knowledge-Intensive LLM Generation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Despite the significant progress of large language models (LLMs) in various tasks, they often produce factual errors due to their limited internal knowledge. Retrieval-Augmented Generation (RAG), which enhances LLMs with external knowledge sources, offers a promising solution. However, these methods can be misled by irrelevant paragraphs in retrieved documents. Due to the inherent uncertainty in LLM generation, inputting the entire document may introduce off-topic information, causing the model to deviate from the central topic and affecting the relevance of the generated content. To address these issues, we propose the Retrieve-Plan-Generation (RPG) framework. RPG generates plan tokens to guide subsequent generation in the plan stage. In the answer stage, the model selects relevant fine-grained paragraphs based on the plan and uses them for further answer generation. This plan-answer process is repeated iteratively until completion, enhancing generation relevance by focusing on specific topics. To implement this framework efficiently, we utilize a simple but effective multi-task prompt-tuning method, enabling the existing LLMs to handle both planning and answering. We comprehensively compare RPG with baselines across 5 knowledge-intensive generation tasks, demonstrating the effectiveness of our approach.
versions: [ { "created": "Fri, 21 Jun 2024 08:45:52 GMT", "version": "v1" } ]
update_date: 2024-06-24
authors_parsed: [ [ "Lyu", "Yuanjie", "" ], [ "Niu", "Zihan", "" ], [ "Xie", "Zheyong", "" ], [ "Zhang", "Chao", "" ], [ "Xu", "Tong", "" ], [ "Wang", "Yang", "" ], [ "Chen", "Enhong", "" ] ]
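The iterative plan-answer loop this abstract describes can be sketched schematically. The functions below are stand-in stubs (the real system uses a single prompt-tuned LLM for both planning and answering); only the control flow, plan then select fine-grained paragraphs then answer, repeated until the plan is exhausted, is the point.

```python
# Schematic sketch of the RPG plan-answer loop with stub components.
# All function bodies are illustrative placeholders, not the paper's models.

def select_paragraphs(plan_token, document):
    # stub retrieval: keep only the fine-grained paragraphs matching the plan
    return [p for p in document if plan_token in p]

def answer_step(plan_token, paragraphs):
    # stub generation conditioned on the current plan token and its paragraphs
    return f"[{plan_token}: {len(paragraphs)} para(s) used]"

def rpg(document, plan_tokens):
    # plan -> select -> answer, repeated until the plan is exhausted
    parts = []
    for token in plan_tokens:
        paras = select_paragraphs(token, document)
        parts.append(answer_step(token, paras))
    return " ".join(parts)

doc = ["history of topic A", "details of topic B", "more on topic A"]
out = rpg(doc, plan_tokens=["topic A", "topic B"])
```

Each plan token narrows the context to on-topic paragraphs before generating, which is how the framework avoids being misled by irrelevant retrieved text.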

id: 1902.07420
submitter: Yitao Han
authors: Yitao Han, Lingjie Duan, Rui Zhang
title: Jamming-assisted Eavesdropping over Parallel Fading Channels
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
This paper advances proactive eavesdropping research by considering a practical half-duplex mode for the legitimate monitor and dealing with the challenging case in which the suspicious link opportunistically communicates over parallel fading channels. To increase the eavesdropping success probability, we propose cognitive jamming for the monitor to change the suspicious link's long-term belief about the parallel channels' distributions, and thereby induce it to transmit more often over a smaller subset of unjammed channels at a lower transmission rate. As the half-duplex monitor cannot eavesdrop on the channel it is simultaneously jamming, our jamming design should also control the probability of an "own goal", which occurs when the suspicious link chooses one of the jammed (uneavesdroppable) channels to transmit. We formulate the optimal jamming design problem as a mixed-integer nonlinear program (MINLP) and show that it is non-convex. Nevertheless, we prove that the monitor should optimally use the maximum jamming power if it decides to jam, both to maximally reduce the suspicious link's communication rate and to drive the suspicious link out of the jammed channels. We then simplify the MINLP to an integer program and reveal a fundamental trade-off in deciding the number of jammed channels: jamming more channels helps reduce the suspicious link's communication rate for overhearing more clearly, but increases the own-goal probability and thus decreases the eavesdropping success probability. Finally, we extend our study to the two-way suspicious communication scenario, and show that there is another interesting trade-off in deciding the common jammed channels for balancing bidirectional eavesdropping performance. Numerical results show that our optimized jamming-assisted eavesdropping scheme greatly increases the eavesdropping success probability compared with conventional passive eavesdropping.
versions: [ { "created": "Wed, 20 Feb 2019 05:51:30 GMT", "version": "v1" } ]
update_date: 2019-02-21
authors_parsed: [ [ "Han", "Yitao", "" ], [ "Duan", "Lingjie", "" ], [ "Zhang", "Rui", "" ] ]
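The trade-off in the number of jammed channels described in this abstract can be illustrated numerically with a deliberately toy model: jamming more channels lowers the link's rate (making overhearing easier), but raises the own-goal probability. The probability functions below are invented for illustration and are not the paper's MINLP formulation.

```python
# Toy illustration of the jammed-channel trade-off: an interior optimum
# emerges because jamming everything maximizes own goals while jamming
# nothing leaves the link's rate too high. All numbers are invented.

N = 10  # number of parallel channels

def own_goal_prob(k, deterrence=0.6):
    # chance the suspicious link still picks one of the k jammed
    # (uneavesdroppable) channels, despite being deterred by jamming
    return (k / N) * (1.0 - deterrence)

def overhear_prob(k, base=0.3, gain=0.08):
    # chance overhearing succeeds given the link used an unjammed
    # channel; grows with k because the link's rate drops
    return min(1.0, base + gain * k)

def success_prob(k):
    # eavesdropping succeeds only on an unjammed channel
    return (1.0 - own_goal_prob(k)) * overhear_prob(k)

best_k = max(range(N + 1), key=success_prob)
```

In this toy setting the optimum jams most, but not all, channels: the marginal rate reduction eventually loses to the growing own-goal risk, mirroring the qualitative trade-off in the abstract.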

id: 2210.00960
submitter: Jiancong Xiao
authors: Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Jue Wang, Zhi-Quan Luo
title: Stability Analysis and Generalization Bounds of Adversarial Training
comments: Published as a conference paper at NeurIPS 2022
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
In adversarial machine learning, deep neural networks can fit adversarial examples on the training set but have poor generalization ability on the test set. This phenomenon is called robust overfitting, and it can be observed when adversarially training neural nets on common datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet. In this paper, we study the robust overfitting issue of adversarial training by using tools from uniform stability. One major challenge is that the outer function (as a maximization of the inner function) is nonsmooth, so the standard technique (e.g., Hardt et al., 2016) cannot be applied. Our approach is to consider $\eta$-approximate smoothness: we show that the outer function satisfies this modified smoothness assumption with $\eta$ being a constant related to the adversarial perturbation $\epsilon$. Based on this, we derive stability-based generalization bounds for stochastic gradient descent (SGD) on the general class of $\eta$-approximate smooth functions, which covers the adversarial loss. Our results suggest that robust test accuracy decreases in $\epsilon$ when $T$ is large, with a speed between $\Omega(\epsilon\sqrt{T})$ and $\mathcal{O}(\epsilon T)$. This phenomenon is also observed in practice. Additionally, we show that a few popular techniques for adversarial training (e.g., early stopping, cyclic learning rate, and stochastic weight averaging) are stability-promoting in theory.
versions: [ { "created": "Mon, 3 Oct 2022 14:21:46 GMT", "version": "v1" }, { "created": "Mon, 31 Oct 2022 09:39:54 GMT", "version": "v2" } ]
update_date: 2022-11-01
authors_parsed: [ [ "Xiao", "Jiancong", "" ], [ "Fan", "Yanbo", "" ], [ "Sun", "Ruoyu", "" ], [ "Wang", "Jue", "" ], [ "Luo", "Zhi-Quan", "" ] ]

id: 1806.00810
submitter: William Farmer
authors: William M. Farmer
title: A New Style of Proof for Mathematics Organized as a Network of Axiomatic Theories
comments: 14 pages. This is a longer, revised version with a modified title
journal-ref: null
doi: null
report-no: null
categories: cs.LO math.LO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
A theory graph is a network of axiomatic theories connected with meaning-preserving mappings called theory morphisms. Theory graphs are well suited for organizing large bodies of mathematical knowledge. Traditional and formal proofs do not adequately fulfill all the purposes that mathematical proofs have, and they do not exploit the structure inherent in a theory graph. We propose a new style of proof that fulfills the principal purposes of a mathematical proof as well as capitalizes on the connections provided by the theory morphisms in a theory graph. This new style of proof combines the strengths of traditional proofs with the strengths of formal proofs.
versions: [ { "created": "Sun, 3 Jun 2018 15:18:01 GMT", "version": "v1" }, { "created": "Sat, 1 Dec 2018 12:25:07 GMT", "version": "v2" } ]
update_date: 2018-12-04
authors_parsed: [ [ "Farmer", "William M.", "" ] ]

id: 1206.6487
submitter: Csaba Szepesvari
authors: Gabor Bartok (University of Alberta), Navid Zolghadr (University of Alberta), Csaba Szepesvari (University of Alberta)
title: An Adaptive Algorithm for Finite Stochastic Partial Monitoring
comments: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012)
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.GT stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
We present a new anytime algorithm that achieves near-optimal regret for any instance of finite stochastic partial monitoring. In particular, the new algorithm achieves the minimax regret, within logarithmic factors, for both "easy" and "hard" problems. For easy problems, it additionally achieves logarithmic individual regret. Most importantly, the algorithm is adaptive in the sense that if the opponent strategy is in an "easy region" of the strategy space then the regret grows as if the problem was easy. As an implication, we show that under some reasonable additional assumptions, the algorithm enjoys an O(\sqrt{T}) regret in Dynamic Pricing, proven to be hard by Bartok et al. (2011).
versions: [ { "created": "Wed, 27 Jun 2012 19:59:59 GMT", "version": "v1" } ]
update_date: 2012-07-03
authors_parsed: [ [ "Bartok", "Gabor", "", "University of Alberta" ], [ "Zolghadr", "Navid", "", "University of Alberta" ], [ "Szepesvari", "Csaba", "", "University of Alberta" ] ]

id: 2204.04937
submitter: Piotr Gramacki
authors: Krzysztof Rajda, Łukasz Augustyniak, Piotr Gramacki, Marcin Gruza, Szymon Woźniak, Tomasz Kajdanowicz
title: Assessment of Massively Multilingual Sentiment Classifiers
comments: Accepted for WASSA at ACL 2022
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract:
Models are increasing in size and complexity in the hunt for SOTA. But what if that 2% increase in performance makes no difference in a production use case? Perhaps the benefits of a smaller, faster model outweigh such slight performance gains. Moreover, in multilingual tasks, equally good performance across languages matters more than SOTA results on a single one. We present the largest unified multilingual collection of sentiment analysis datasets. We use it to assess 11 models on 80 high-quality sentiment datasets (out of 342 raw datasets collected) in 27 languages, and include results on internally annotated datasets. We evaluate multiple setups in depth, including fine-tuning transformer-based models, to measure performance. We compare results along numerous dimensions, addressing the imbalance in both language coverage and dataset sizes. Finally, we present some best practices for working with such a massive collection of datasets and models from a multilingual perspective.
versions: [ { "created": "Mon, 11 Apr 2022 08:22:05 GMT", "version": "v1" } ]
update_date: 2022-04-12
authors_parsed: [ [ "Rajda", "Krzysztof", "" ], [ "Augustyniak", "Łukasz", "" ], [ "Gramacki", "Piotr", "" ], [ "Gruza", "Marcin", "" ], [ "Woźniak", "Szymon", "" ], [ "Kajdanowicz", "Tomasz", "" ] ]

id: 2209.02370
submitter: Quang Pham
authors: Quang Pham, Chenghao Liu, Steven C. H. Hoi
title: Continual Learning, Fast and Slow
comments: arXiv admin note: substantial text overlap with arXiv:2110.00175
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.CV cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract:
According to the Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans do effective \emph{continual learning} through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of the specifics of individual experiences; and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic general representations via Self-Supervised Learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Via extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL~\cite{veniat2020efficient} benchmark, which has unrelated tasks with vastly different visual images, DualNets can achieve competitive performance with existing state-of-the-art dynamic architecture strategies~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive ablation studies to validate DualNets' efficacy, robustness, and scalability. Code will be made available at \url{https://github.com/phquang/DualNet}.
versions: [ { "created": "Tue, 6 Sep 2022 10:48:45 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2022 12:27:25 GMT", "version": "v2" }, { "created": "Sun, 9 Jul 2023 10:02:41 GMT", "version": "v3" } ]
update_date: 2023-07-11
authors_parsed: [ [ "Pham", "Quang", "" ], [ "Liu", "Chenghao", "" ], [ "Hoi", "Steven C. H.", "" ] ]

id: 2002.11497
submitter: Sanghyun Hong
authors: Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
title: On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Machine learning algorithms are vulnerable to data poisoning attacks. Prior taxonomies that focus on specific scenarios, e.g., indiscriminate or targeted, have enabled defenses for the corresponding subset of known attacks. Yet, this introduces an inevitable arms race between adversaries and defenders. In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks. Specifically, we focus on a common element between all attacks: they modify gradients computed to train the model. We identify two main artifacts of gradients computed in the presence of poison: (1) their $\ell_2$ norms have significantly higher magnitudes than those of clean gradients, and (2) their orientation differs from clean gradients. Based on these observations, we propose the prerequisite for a generic poisoning defense: it must bound gradient magnitudes and minimize differences in orientation. We call this gradient shaping. As an exemplar tool to evaluate the feasibility of gradient shaping, we use differentially private stochastic gradient descent (DP-SGD), which clips and perturbs individual gradients during training to obtain privacy guarantees. We find that DP-SGD, even in configurations that do not result in meaningful privacy guarantees, increases the model's robustness to indiscriminate attacks. It also mitigates worst-case targeted attacks and increases the adversary's cost in multi-poison scenarios. The only attack we find DP-SGD to be ineffective against is a strong, yet unrealistic, indiscriminate attack. Our results suggest that, while we currently lack a generic poisoning defense, gradient shaping is a promising direction for future research.
versions: [ { "created": "Wed, 26 Feb 2020 14:04:16 GMT", "version": "v1" }, { "created": "Thu, 27 Feb 2020 19:00:01 GMT", "version": "v2" } ]
update_date: 2020-03-02
authors_parsed: [ [ "Hong", "Sanghyun", "" ], [ "Chandrasekaran", "Varun", "" ], [ "Kaya", "Yiğitcan", "" ], [ "Dumitraş", "Tudor", "" ], [ "Papernot", "Nicolas", "" ] ]
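The two gradient-shaping requirements named in this abstract, bounding gradient magnitudes and limiting orientation differences, can be sketched with a DP-SGD-style clip-and-noise step. This is a stdlib-only toy (the function names, clip norm, and noise level are illustrative choices, not the paper's configuration):

```python
# Toy sketch of "gradient shaping" via DP-SGD-style per-example clipping
# and noising. A poisoned gradient with an outsized L2 norm is bounded
# before aggregation, limiting its influence on the update.
import math
import random

def l2_norm(v):
    return math.sqrt(sum(x * x for x in v))

def shape_gradients(per_example_grads, clip_norm=1.0, noise_std=0.1, seed=0):
    """Clip each per-example gradient to clip_norm, add Gaussian noise to
    the sum, and return the averaged (shaped) gradient."""
    rng = random.Random(seed)
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        scale = min(1.0, clip_norm / max(l2_norm(g), 1e-12))  # magnitude bound
        for i, x in enumerate(g):
            summed[i] += scale * x
    return [(s + rng.gauss(0.0, noise_std * clip_norm)) / n for s in summed]

# The last gradient mimics a poison: 100x the norm of the clean ones.
grads = [[0.3, 0.4], [0.1, -0.2], [30.0, 40.0]]
shaped = shape_gradients(grads)
```

With clipping, the poisoned example contributes at most a unit-norm direction, so the averaged update stays close to what the clean gradients dictate.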

id: 1703.09400
submitter: Naeemul Hassan
authors: Md Main Uddin Rony, Naeemul Hassan, Mohammad Yousuf
title: Diving Deep into Clickbaits: Who Use Them to What Extents in Which Topics with What Effects?
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SI cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
The use of alluring headlines (clickbait) to tempt readers has become a widespread practice. To survive in the highly competitive media industry, most online media outlets, including mainstream ones, have adopted it. Although the widespread practice of clickbait undermines readers' trust in media, a large-scale analysis documenting this has been absent. In this paper, we analyze 1.67 million Facebook posts created by 153 media organizations to understand the extent of clickbait practice, its impact, and user engagement, using our own clickbait detection model. The model uses distributed sub-word embeddings learned from a large corpus, and its accuracy is 98.3%. Powered by this model, we further study the distribution of topics in clickbait and non-clickbait content.
versions: [ { "created": "Tue, 28 Mar 2017 05:07:38 GMT", "version": "v1" } ]
update_date: 2017-03-29
authors_parsed: [ [ "Rony", "Md Main Uddin", "" ], [ "Hassan", "Naeemul", "" ], [ "Yousuf", "Mohammad", "" ] ]

id: 2405.14977
submitter: Robert Alexander Marsden
authors: Mario Döbler, Robert A. Marsden, Tobias Raichle, Bin Yang
title: A Lost Opportunity for Vision-Language Models: A Comparative Study of Online Test-time Adaptation for Vision-Language Models
comments: Accepted at CVPR 2024 MAT Workshop Community Track
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
In the realm of deep learning, maintaining model robustness against distribution shifts is critical. This paper investigates test-time adaptation strategies for vision-language models, with a specific focus on CLIP and its variants. Through a systematic exploration of prompt-based techniques and existing test-time adaptation methods, the study aims to enhance the adaptability and robustness of vision-language models in diverse real-world scenarios. The investigation includes an analysis of prompt engineering strategies, such as hand-crafted prompts, prompt ensembles, and prompt learning techniques. We introduce a vision-text-space ensemble that significantly boosts the average performance compared to a text-space-only ensemble. Additionally, our comparative study delves into leveraging existing test-time adaptation methods originally designed for image classification tasks. Experimental evaluations conducted across various datasets and model architectures demonstrate the efficacy of different adaptation strategies. We further provide insights into the importance of updating the vision encoder and whether it is beneficial to update the text encoder. Code is available at https://github.com/mariodoebler/test-time-adaptation
[ { "created": "Thu, 23 May 2024 18:27:07 GMT", "version": "v1" } ]
2024-05-27
[ [ "Döbler", "Mario", "" ], [ "Marsden", "Robert A.", "" ], [ "Raichle", "Tobias", "" ], [ "Yang", "Bin", "" ] ]
In the realm of deep learning, maintaining model robustness against distribution shifts is critical. This paper investigates test-time adaptation strategies for vision-language models, with a specific focus on CLIP and its variants. Through a systematic exploration of prompt-based techniques and existing test-time adaptation methods, the study aims to enhance the adaptability and robustness of vision-language models in diverse real-world scenarios. The investigation includes an analysis of prompt engineering strategies, such as hand-crafted prompts, prompt ensembles, and prompt learning techniques. We introduce a vision-text-space ensemble that significantly boosts the average performance compared to a text-space-only ensemble. Additionally, our comparative study delves into leveraging existing test-time adaptation methods originally designed for image classification tasks. Experimental evaluations conducted across various datasets and model architectures demonstrate the efficacy of different adaptation strategies. We further give insights into the importance of updating the vision encoder and whether it is beneficial to update the text encoder. Code is available at https://github.com/mariodoebler/test-time-adaptation
1712.00368
Adrien Lagrange
Adrien Lagrange, Mathieu Fauvel, St\'ephane May and Nicolas Dobigeon
Hierarchical Bayesian image analysis: from low-level modeling to robust supervised learning
null
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within a supervised classification framework, labeled data are used to learn classifier parameters. Prior to that, it is generally required to perform dimensionality reduction via feature extraction. These preprocessing steps have motivated numerous research works aiming at recovering latent variables in an unsupervised context. This paper proposes a unified framework to perform classification and low-level modeling jointly. The main objective is to use the estimated latent variables as features for classification and to incorporate simultaneously supervised information to help latent variable extraction. The proposed hierarchical Bayesian model is divided into three stages: a first low-level modeling stage to estimate latent variables, a second stage clustering these features into statistically homogeneous groups and a last classification stage exploiting the (possibly badly) labeled data. Performance of the model is assessed in the specific context of hyperspectral image interpretation, unifying two standard analysis techniques, namely unmixing and classification.
[ { "created": "Fri, 1 Dec 2017 15:32:58 GMT", "version": "v1" } ]
2017-12-04
[ [ "Lagrange", "Adrien", "" ], [ "Fauvel", "Mathieu", "" ], [ "May", "Stéphane", "" ], [ "Dobigeon", "Nicolas", "" ] ]
Within a supervised classification framework, labeled data are used to learn classifier parameters. Prior to that, it is generally required to perform dimensionality reduction via feature extraction. These preprocessing steps have motivated numerous research works aiming at recovering latent variables in an unsupervised context. This paper proposes a unified framework to perform classification and low-level modeling jointly. The main objective is to use the estimated latent variables as features for classification and to incorporate simultaneously supervised information to help latent variable extraction. The proposed hierarchical Bayesian model is divided into three stages: a first low-level modeling stage to estimate latent variables, a second stage clustering these features into statistically homogeneous groups and a last classification stage exploiting the (possibly badly) labeled data. Performance of the model is assessed in the specific context of hyperspectral image interpretation, unifying two standard analysis techniques, namely unmixing and classification.
0710.3779
Sumanth Gangasani
Sumanth Kumar Reddy Gangasani
Testing D-Sequences for their Randomness
8 pages, 5 figures
null
null
null
cs.CR
null
This paper examines the randomness of d-sequences, which are decimal sequences to an arbitrary base. Our motivation is to check their suitability for application to cryptography, spread-spectrum systems, and use as pseudorandom sequences.
[ { "created": "Fri, 19 Oct 2007 20:18:42 GMT", "version": "v1" } ]
2007-10-23
[ [ "Gangasani", "Sumanth Kumar Reddy", "" ] ]
This paper examines the randomness of d-sequences, which are decimal sequences to an arbitrary base. Our motivation is to check their suitability for application to cryptography, spread-spectrum systems, and use as pseudorandom sequences.
1409.3696
Peter Bezd\u{e}k
Peter Bezd\v{e}k and Nikola Bene\v{s} and Ji\v{r}\'i Barnat and Ivana \v{C}ern\'a
LTL Parameter Synthesis of Parametric Timed Automata
23 pages, extended version
null
null
null
cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The parameter synthesis problem for parametric timed automata is undecidable in general even for very simple reachability properties. In this paper we introduce restrictions on parameter valuations under which the parameter synthesis problem is decidable for LTL properties. The investigated bounded integer parameter synthesis problem could be solved using an explicit enumeration of all possible parameter valuations. We propose an alternative symbolic zone-based method for this problem which results in a faster computation. Our technique extends the ideas of the automata-based approach to LTL model checking of timed automata. To justify the usefulness of our approach, we provide an experimental evaluation and compare our method with the explicit enumeration technique.
[ { "created": "Fri, 12 Sep 2014 10:53:32 GMT", "version": "v1" }, { "created": "Fri, 4 Mar 2016 17:03:49 GMT", "version": "v2" } ]
2016-03-07
[ [ "Bezděk", "Peter", "" ], [ "Beneš", "Nikola", "" ], [ "Barnat", "Jiří", "" ], [ "Černá", "Ivana", "" ] ]
The parameter synthesis problem for parametric timed automata is undecidable in general even for very simple reachability properties. In this paper we introduce restrictions on parameter valuations under which the parameter synthesis problem is decidable for LTL properties. The investigated bounded integer parameter synthesis problem could be solved using an explicit enumeration of all possible parameter valuations. We propose an alternative symbolic zone-based method for this problem which results in a faster computation. Our technique extends the ideas of the automata-based approach to LTL model checking of timed automata. To justify the usefulness of our approach, we provide an experimental evaluation and compare our method with the explicit enumeration technique.
2009.01465
Damla Cay
Damla \c{C}ay, Till Nagel, As{\i}m Evren Yanta\c{c}
Understanding User Experience of COVID-19 Maps through Remote Elicitation Interviews
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the coronavirus pandemic, visualizations gained a new level of popularity and meaning for a wider audience. People were bombarded with a wide set of public health visualizations ranging from simple graphs to complex interactive dashboards. In a pandemic setting, where large amounts of the world population are socially distancing themselves, it becomes an urgent need to refine existing user experience evaluation methods for remote settings to understand how people make sense out of COVID-19 related visualizations. When evaluating visualizations aimed towards the general public with vastly different socio-demographic backgrounds and varying levels of technical savviness and data literacy, it is important to understand user feedback beyond aspects such as speed, task accuracy, or usability problems. As a part of this wider evaluation perspective, micro-phenomenology has been used to evaluate static and narrative visualizations to reveal the lived experience in a detailed way. Building upon these studies, we conducted a user study to understand how to employ Elicitation (aka Micro-phenomenological) interviews in remote settings. In a case study, we investigated what experiences the participants had with map-based interactive visualizations. Our findings reveal positive and negative aspects of conducting Elicitation interviews remotely. Our results can inform the process of planning and executing remote Elicitation interviews to evaluate interactive visualizations. In addition, we share recommendations regarding visualization techniques and interaction design about public health data.
[ { "created": "Thu, 3 Sep 2020 06:12:09 GMT", "version": "v1" } ]
2020-09-04
[ [ "Çay", "Damla", "" ], [ "Nagel", "Till", "" ], [ "Yantaç", "Asım Evren", "" ] ]
During the coronavirus pandemic, visualizations gained a new level of popularity and meaning for a wider audience. People were bombarded with a wide set of public health visualizations ranging from simple graphs to complex interactive dashboards. In a pandemic setting, where large amounts of the world population are socially distancing themselves, it becomes an urgent need to refine existing user experience evaluation methods for remote settings to understand how people make sense out of COVID-19 related visualizations. When evaluating visualizations aimed towards the general public with vastly different socio-demographic backgrounds and varying levels of technical savviness and data literacy, it is important to understand user feedback beyond aspects such as speed, task accuracy, or usability problems. As a part of this wider evaluation perspective, micro-phenomenology has been used to evaluate static and narrative visualizations to reveal the lived experience in a detailed way. Building upon these studies, we conducted a user study to understand how to employ Elicitation (aka Micro-phenomenological) interviews in remote settings. In a case study, we investigated what experiences the participants had with map-based interactive visualizations. Our findings reveal positive and negative aspects of conducting Elicitation interviews remotely. Our results can inform the process of planning and executing remote Elicitation interviews to evaluate interactive visualizations. In addition, we share recommendations regarding visualization techniques and interaction design about public health data.
1401.3476
Piero A. Bonatti
Piero A. Bonatti, Carsten Lutz, Frank Wolter
The Complexity of Circumscription in DLs
null
Journal Of Artificial Intelligence Research, Volume 35, pages 717-773, 2009
10.1613/jair.2763
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As fragments of first-order logic, Description logics (DLs) do not provide nonmonotonic features such as defeasible inheritance and default rules. Since many applications would benefit from the availability of such features, several families of nonmonotonic DLs have been developed that are mostly based on default logic and autoepistemic logic. In this paper, we consider circumscription as an interesting alternative approach to nonmonotonic DLs that, in particular, supports defeasible inheritance in a natural way. We study DLs extended with circumscription under different language restrictions and under different constraints on the sets of minimized, fixed, and varying predicates, and pinpoint the exact computational complexity of reasoning for DLs ranging from ALC to ALCIO and ALCQO. When the minimized and fixed predicates include only concept names but no role names, then reasoning is complete for NExpTime^NP. It becomes complete for NP^NExpTime when the number of minimized and fixed predicates is bounded by a constant. If roles can be minimized or fixed, then complexity ranges from NExpTime^NP to undecidability.
[ { "created": "Wed, 15 Jan 2014 05:32:08 GMT", "version": "v1" } ]
2014-01-16
[ [ "Bonatti", "Piero A.", "" ], [ "Lutz", "Carsten", "" ], [ "Wolter", "Frank", "" ] ]
As fragments of first-order logic, Description logics (DLs) do not provide nonmonotonic features such as defeasible inheritance and default rules. Since many applications would benefit from the availability of such features, several families of nonmonotonic DLs have been developed that are mostly based on default logic and autoepistemic logic. In this paper, we consider circumscription as an interesting alternative approach to nonmonotonic DLs that, in particular, supports defeasible inheritance in a natural way. We study DLs extended with circumscription under different language restrictions and under different constraints on the sets of minimized, fixed, and varying predicates, and pinpoint the exact computational complexity of reasoning for DLs ranging from ALC to ALCIO and ALCQO. When the minimized and fixed predicates include only concept names but no role names, then reasoning is complete for NExpTime^NP. It becomes complete for NP^NExpTime when the number of minimized and fixed predicates is bounded by a constant. If roles can be minimized or fixed, then complexity ranges from NExpTime^NP to undecidability.
1907.00719
Sun Chunlong
Junyong Eom, Manabu Machida, Gen Nakamura, Goro Nishimura, and Chunlong Sun
Expression of the peak time for time-domain boundary measurements in diffuse light
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Light propagation through diffusive media can be described by the diffusion equation in a space-time domain. Further, fluorescence can be described by a system of coupled diffusion equations. This paper analyzes time-domain measurements, which measure the temporal point-spread function (TPSF), at a boundary of such diffusive media with a given source and detector. We focus on the temporal position of the TPSF maximum, which we refer to as the peak time. Although some unique properties of solutions of this system have been numerically studied, we give a mathematical analysis of peak time, providing proof of the existence, uniqueness, and the explicit expression of the peak time. We clearly show the relationship between the peak time and the object position in a medium.
[ { "created": "Thu, 27 Jun 2019 04:03:54 GMT", "version": "v1" }, { "created": "Wed, 8 Dec 2021 04:39:04 GMT", "version": "v2" } ]
2021-12-09
[ [ "Eom", "Junyong", "" ], [ "Machida", "Manabu", "" ], [ "Nakamura", "Gen", "" ], [ "Nishimura", "Goro", "" ], [ "Sun", "Chunlong", "" ] ]
Light propagation through diffusive media can be described by the diffusion equation in a space-time domain. Further, fluorescence can be described by a system of coupled diffusion equations. This paper analyzes time-domain measurements, which measure the temporal point-spread function (TPSF), at a boundary of such diffusive media with a given source and detector. We focus on the temporal position of the TPSF maximum, which we refer to as the peak time. Although some unique properties of solutions of this system have been numerically studied, we give a mathematical analysis of peak time, providing proof of the existence, uniqueness, and the explicit expression of the peak time. We clearly show the relationship between the peak time and the object position in a medium.
1909.03934
Jan Karwowski
Jan Karwowski and Jacek Ma\'ndziuk
Double-oracle sampling method for Stackelberg Equilibrium approximation in general-sum extensive-form games
null
Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, 2054-2061
10.1609/aaai.v34i02.5578
null
cs.GT cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a new method for approximating Strong Stackelberg Equilibrium in general-sum sequential games with imperfect information and perfect recall. The proposed approach is generic as it does not rely on any specific properties of a particular game model. The method is based on iterative interleaving of the two following phases: (1) guided Monte Carlo Tree Search sampling of the Follower's strategy space and (2) building the Leader's behavior strategy tree for which the sampled Follower's strategy is an optimal response. The above solution scheme is evaluated with respect to expected Leader's utility and time requirements on three sets of interception games with variable characteristics, played on graphs. A comparison with three state-of-the-art MILP/LP-based methods shows that in the vast majority of test cases the proposed simulation-based approach leads to optimal Leader's strategies, while surpassing the competitive methods in terms of time scalability and memory requirements.
[ { "created": "Mon, 9 Sep 2019 15:34:04 GMT", "version": "v1" } ]
2022-08-16
[ [ "Karwowski", "Jan", "" ], [ "Mańdziuk", "Jacek", "" ] ]
The paper presents a new method for approximating Strong Stackelberg Equilibrium in general-sum sequential games with imperfect information and perfect recall. The proposed approach is generic as it does not rely on any specific properties of a particular game model. The method is based on iterative interleaving of the two following phases: (1) guided Monte Carlo Tree Search sampling of the Follower's strategy space and (2) building the Leader's behavior strategy tree for which the sampled Follower's strategy is an optimal response. The above solution scheme is evaluated with respect to expected Leader's utility and time requirements on three sets of interception games with variable characteristics, played on graphs. A comparison with three state-of-the-art MILP/LP-based methods shows that in the vast majority of test cases the proposed simulation-based approach leads to optimal Leader's strategies, while surpassing the competitive methods in terms of time scalability and memory requirements.
2204.11337
Anastassia Kornilova
Anastassia Kornilova, Daniel Argyle, Vladimir Eidelman
An Item Response Theory Framework for Persuasion
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language. We empirically evaluate the model's performance on three datasets, including a novel dataset in the area of political advocacy. We show the advantages of separating these components under several style and content representations, including evaluating the ability of the speaker embeddings generated by the model to parallel real-world observations about persuadability.
[ { "created": "Sun, 24 Apr 2022 19:14:11 GMT", "version": "v1" } ]
2022-04-26
[ [ "Kornilova", "Anastassia", "" ], [ "Argyle", "Daniel", "" ], [ "Eidelman", "Vladimir", "" ] ]
In this paper, we apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language. We empirically evaluate the model's performance on three datasets, including a novel dataset in the area of political advocacy. We show the advantages of separating these components under several style and content representations, including evaluating the ability of the speaker embeddings generated by the model to parallel real-world observations about persuadability.
2306.11719
Ayush Tewari
Ayush Tewari, Tianwei Yin, George Cazenavette, Semon Rezchikov, Joshua B. Tenenbaum, Fr\'edo Durand, William T. Freeman, Vincent Sitzmann
Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision
Project page: https://diffusion-with-forward-models.github.io/
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Denoising diffusion models are a powerful type of generative models used to capture complex distributions of real-world signals. However, their applicability is limited to scenarios where training samples are readily available, which is not always the case in real-world applications. For example, in inverse graphics, the goal is to generate samples from a distribution of 3D scenes that align with a given image, but ground-truth 3D scenes are unavailable and only 2D images are accessible. To address this limitation, we propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed. Instead, these signals are measured indirectly through a known differentiable forward model, which produces partial observations of the unknown signal. Our approach involves integrating the forward model directly into the denoising process. This integration effectively connects the generative modeling of observations with the generative modeling of the underlying signals, allowing for end-to-end training of a conditional generative model over signals. During inference, our approach enables sampling from the distribution of underlying signals that are consistent with a given partial observation. We demonstrate the effectiveness of our method on three challenging computer vision tasks. For instance, in the context of inverse graphics, our model enables direct sampling from the distribution of 3D scenes that align with a single 2D input image.
[ { "created": "Tue, 20 Jun 2023 17:53:00 GMT", "version": "v1" }, { "created": "Fri, 17 Nov 2023 04:17:34 GMT", "version": "v2" } ]
2023-11-20
[ [ "Tewari", "Ayush", "" ], [ "Yin", "Tianwei", "" ], [ "Cazenavette", "George", "" ], [ "Rezchikov", "Semon", "" ], [ "Tenenbaum", "Joshua B.", "" ], [ "Durand", "Frédo", "" ], [ "Freeman", "William T.", "" ], [ "Sitzmann", "Vincent", "" ] ]
Denoising diffusion models are a powerful type of generative models used to capture complex distributions of real-world signals. However, their applicability is limited to scenarios where training samples are readily available, which is not always the case in real-world applications. For example, in inverse graphics, the goal is to generate samples from a distribution of 3D scenes that align with a given image, but ground-truth 3D scenes are unavailable and only 2D images are accessible. To address this limitation, we propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed. Instead, these signals are measured indirectly through a known differentiable forward model, which produces partial observations of the unknown signal. Our approach involves integrating the forward model directly into the denoising process. This integration effectively connects the generative modeling of observations with the generative modeling of the underlying signals, allowing for end-to-end training of a conditional generative model over signals. During inference, our approach enables sampling from the distribution of underlying signals that are consistent with a given partial observation. We demonstrate the effectiveness of our method on three challenging computer vision tasks. For instance, in the context of inverse graphics, our model enables direct sampling from the distribution of 3D scenes that align with a single 2D input image.
2011.08130
Thomas Zimmermann
Paige Rodeghero, Thomas Zimmermann, Brian Houck, Denae Ford
Please Turn Your Cameras On: Remote Onboarding of Software Developers during a Pandemic
10 pages. Final version of the paper accepted at ICSE 2021 in the SEIP track
null
null
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic has impacted the way that software development teams onboard new hires. Previously, most software developers worked in physical offices and new hires onboarded to their teams in the physical office, following a standard onboarding process. However, when companies transitioned employees to work from home due to the pandemic, there was little to no time to develop new onboarding procedures. In this paper, we present a survey of 267 new hires at Microsoft that onboarded to software development teams during the pandemic. We explored their remote onboarding process, including the challenges that the new hires encountered and their social connectedness with their teams. We found that most developers onboarded remotely and never had an opportunity to meet their teammates in person. This leads to one of the biggest challenges faced by these new hires, building a strong social connection with their team. We use these results to provide recommendations for onboarding remote hires.
[ { "created": "Mon, 16 Nov 2020 17:52:03 GMT", "version": "v1" }, { "created": "Sun, 7 Mar 2021 03:33:28 GMT", "version": "v2" } ]
2021-03-09
[ [ "Rodeghero", "Paige", "" ], [ "Zimmermann", "Thomas", "" ], [ "Houck", "Brian", "" ], [ "Ford", "Denae", "" ] ]
The COVID-19 pandemic has impacted the way that software development teams onboard new hires. Previously, most software developers worked in physical offices and new hires onboarded to their teams in the physical office, following a standard onboarding process. However, when companies transitioned employees to work from home due to the pandemic, there was little to no time to develop new onboarding procedures. In this paper, we present a survey of 267 new hires at Microsoft that onboarded to software development teams during the pandemic. We explored their remote onboarding process, including the challenges that the new hires encountered and their social connectedness with their teams. We found that most developers onboarded remotely and never had an opportunity to meet their teammates in person. This leads to one of the biggest challenges faced by these new hires, building a strong social connection with their team. We use these results to provide recommendations for onboarding remote hires.
2109.01727
Yiqing Hua
Yiqing Hua, Armin Namavari, Kaishuo Cheng, Mor Naaman, Thomas Ristenpart
Increasing Adversarial Uncertainty to Scale Private Similarity Testing
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Social media and other platforms rely on automated detection of abusive content to help combat disinformation, harassment, and abuse. One common approach is to check user content for similarity against a server-side database of problematic items. However, this method fundamentally endangers user privacy. Instead, we target client-side detection, notifying only the users when such matches occur to warn them against abusive content. Our solution is based on privacy-preserving similarity testing. Existing approaches rely on expensive cryptographic protocols that do not scale well to large databases and may sacrifice the correctness of the matching. To contend with this challenge, we propose and formalize the concept of similarity-based bucketization~(SBB). With SBB, a client reveals a small amount of information to a database-holding server so that it can generate a bucket of potentially similar items. The bucket is small enough for efficient application of privacy-preserving protocols for similarity. To analyze the privacy risk of the revealed information, we introduce a framework for measuring an adversary's confidence in inferring a predicate about the client input correctly. We develop a practical SBB protocol for image content, and evaluate its client privacy guarantee with real-world social media data. We then combine SBB with various similarity protocols, showing that the combination with SBB provides a speedup of at least 29x on large-scale databases compared to that without, while retaining correctness of over 95%.
[ { "created": "Fri, 3 Sep 2021 20:54:34 GMT", "version": "v1" }, { "created": "Tue, 7 Sep 2021 19:54:51 GMT", "version": "v2" }, { "created": "Wed, 29 Sep 2021 22:02:14 GMT", "version": "v3" }, { "created": "Mon, 4 Oct 2021 20:14:17 GMT", "version": "v4" } ]
2021-10-06
[ [ "Hua", "Yiqing", "" ], [ "Namavari", "Armin", "" ], [ "Cheng", "Kaishuo", "" ], [ "Naaman", "Mor", "" ], [ "Ristenpart", "Thomas", "" ] ]
Social media and other platforms rely on automated detection of abusive content to help combat disinformation, harassment, and abuse. One common approach is to check user content for similarity against a server-side database of problematic items. However, this method fundamentally endangers user privacy. Instead, we target client-side detection, notifying only the users when such matches occur to warn them against abusive content. Our solution is based on privacy-preserving similarity testing. Existing approaches rely on expensive cryptographic protocols that do not scale well to large databases and may sacrifice the correctness of the matching. To contend with this challenge, we propose and formalize the concept of similarity-based bucketization~(SBB). With SBB, a client reveals a small amount of information to a database-holding server so that it can generate a bucket of potentially similar items. The bucket is small enough for efficient application of privacy-preserving protocols for similarity. To analyze the privacy risk of the revealed information, we introduce a framework for measuring an adversary's confidence in inferring a predicate about the client input correctly. We develop a practical SBB protocol for image content, and evaluate its client privacy guarantee with real-world social media data. We then combine SBB with various similarity protocols, showing that the combination with SBB provides a speedup of at least 29x on large-scale databases compared to that without, while retaining correctness of over 95%.
2104.09993
Lo\"ic J\'ez\'equel
Loic Jezequel, Ngoc-Son Vu, Jean Beaudet, Aymeric Histace
Fine-grained Anomaly Detection via Multi-task Self-Supervision
null
L. J\'ez\'equel, N. -S. Vu, J. Beaudet and A. Histace, "Fine-grained anomaly detection via multi-task self-supervision," 2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2021, pp. 1-8
10.1109/AVSS52988.2021.9663783
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Detecting anomalies using deep learning has become a major challenge over recent years, and is becoming increasingly promising in several fields. The introduction of self-supervised learning has greatly helped many methods, including anomaly detection, where simple geometric transformation recognition tasks are used. However, these methods do not perform well on fine-grained problems since they lack finer features. By combining, in a multi-task framework, a high-scale task oriented toward shape features with a low-scale task oriented toward fine features, our method greatly improves fine-grained anomaly detection. It outperforms the state of the art with up to 31% relative error reduction measured with AUROC on various anomaly detection problems.
[ { "created": "Tue, 20 Apr 2021 14:19:08 GMT", "version": "v1" }, { "created": "Thu, 17 Mar 2022 09:53:56 GMT", "version": "v2" } ]
2022-03-18
[ [ "Jezequel", "Loic", "" ], [ "Vu", "Ngoc-Son", "" ], [ "Beaudet", "Jean", "" ], [ "Histace", "Aymeric", "" ] ]
Detecting anomalies using deep learning has become a major challenge over recent years, and is becoming increasingly promising in several fields. The introduction of self-supervised learning has greatly helped many methods, including anomaly detection, where simple geometric transformation recognition tasks are used. However, these methods do not perform well on fine-grained problems since they lack finer features. By combining, in a multi-task framework, a high-scale task oriented toward shape features with a low-scale task oriented toward fine features, our method greatly improves fine-grained anomaly detection. It outperforms the state of the art with up to 31% relative error reduction measured with AUROC on various anomaly detection problems.
2401.15865
Sifan Zhou
Sifan Zhou, Liang Li, Xinyu Zhang, Bo Zhang, Shipeng Bai, Miao Sun, Ziyu Zhao, Xiaobo Lu, Xiangxiang Chu
LiDAR-PTQ: Post-Training Quantization for Point Cloud 3D Object Detection
Accepted in ICLR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Due to highly constrained computing power and memory, deploying 3D lidar-based detectors on edge devices equipped in autonomous vehicles and robots poses a crucial challenge. Being a convenient and straightforward model compression approach, Post-Training Quantization (PTQ) has been widely adopted in 2D vision tasks. However, applying it directly to 3D lidar-based tasks inevitably leads to performance degradation. As a remedy, we propose an effective PTQ method called LiDAR-PTQ, which is particularly curated for 3D lidar detection (both SPConv-based and SPConv-free). Our LiDAR-PTQ features three main components, \textbf{(1)} a sparsity-based calibration method to determine the initialization of quantization parameters, \textbf{(2)} a Task-guided Global Positive Loss (TGPL) to reduce the disparity between the final predictions before and after quantization, \textbf{(3)} an adaptive rounding-to-nearest operation to minimize the layerwise reconstruction error. Extensive experiments demonstrate that our LiDAR-PTQ can achieve state-of-the-art quantization performance when applied to CenterPoint (both Pillar-based and Voxel-based). To our knowledge, for the very first time in lidar-based 3D detection tasks, the PTQ INT8 model's accuracy is almost the same as the FP32 model while enjoying $3\times$ inference speedup. Moreover, our LiDAR-PTQ is cost-effective being $30\times$ faster than the quantization-aware training method. Code will be released at \url{https://github.com/StiphyJay/LiDAR-PTQ}.
[ { "created": "Mon, 29 Jan 2024 03:35:55 GMT", "version": "v1" } ]
2024-01-30
[ [ "Zhou", "Sifan", "" ], [ "Li", "Liang", "" ], [ "Zhang", "Xinyu", "" ], [ "Zhang", "Bo", "" ], [ "Bai", "Shipeng", "" ], [ "Sun", "Miao", "" ], [ "Zhao", "Ziyu", "" ], [ "Lu", "Xiaobo", "" ], [ "Chu", "Xiangxiang", "" ] ]
Due to highly constrained computing power and memory, deploying 3D lidar-based detectors on edge devices equipped in autonomous vehicles and robots poses a crucial challenge. Being a convenient and straightforward model compression approach, Post-Training Quantization (PTQ) has been widely adopted in 2D vision tasks. However, applying it directly to 3D lidar-based tasks inevitably leads to performance degradation. As a remedy, we propose an effective PTQ method called LiDAR-PTQ, which is particularly curated for 3D lidar detection (both SPConv-based and SPConv-free). Our LiDAR-PTQ features three main components, \textbf{(1)} a sparsity-based calibration method to determine the initialization of quantization parameters, \textbf{(2)} a Task-guided Global Positive Loss (TGPL) to reduce the disparity between the final predictions before and after quantization, \textbf{(3)} an adaptive rounding-to-nearest operation to minimize the layerwise reconstruction error. Extensive experiments demonstrate that our LiDAR-PTQ can achieve state-of-the-art quantization performance when applied to CenterPoint (both Pillar-based and Voxel-based). To our knowledge, for the very first time in lidar-based 3D detection tasks, the PTQ INT8 model's accuracy is almost the same as the FP32 model while enjoying $3\times$ inference speedup. Moreover, our LiDAR-PTQ is cost-effective being $30\times$ faster than the quantization-aware training method. Code will be released at \url{https://github.com/StiphyJay/LiDAR-PTQ}.
2401.02582
Daoan Zhang
Daoan Zhang, Junming Yang, Hanjia Lyu, Zijian Jin, Yuan Yao, Mingkai Chen, Jiebo Luo
CoCoT: Contrastive Chain-of-Thought Prompting for Large Multimodal Models with Multiple Image Inputs
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When exploring the development of Artificial General Intelligence (AGI), a critical task for these models involves interpreting and processing information from multiple image inputs. However, Large Multimodal Models (LMMs) encounter two issues in such scenarios: (1) a lack of fine-grained perception, and (2) a tendency to blend information across multiple images. We first extensively investigate the capability of LMMs to perceive fine-grained visual details when dealing with multiple input images. The research focuses on two aspects: first, image-to-image matching (to evaluate whether LMMs can effectively reason and pair relevant images), and second, multi-image-to-text matching (to assess whether LMMs can accurately capture and summarize detailed image information). We conduct evaluations on a range of both open-source and closed-source large models, including GPT-4V, Gemini, OpenFlamingo, and MMICL. To enhance model performance, we further develop a Contrastive Chain-of-Thought (CoCoT) prompting approach based on multi-input multimodal models. This method requires LMMs to compare the similarities and differences among multiple image inputs, and then guide the models to answer detailed questions about multi-image inputs based on the identified similarities and differences. Our experimental results showcase CoCoT's proficiency in enhancing the multi-image comprehension capabilities of large multimodal models.
[ { "created": "Fri, 5 Jan 2024 00:26:07 GMT", "version": "v1" } ]
2024-01-08
[ [ "Zhang", "Daoan", "" ], [ "Yang", "Junming", "" ], [ "Lyu", "Hanjia", "" ], [ "Jin", "Zijian", "" ], [ "Yao", "Yuan", "" ], [ "Chen", "Mingkai", "" ], [ "Luo", "Jiebo", "" ] ]
When exploring the development of Artificial General Intelligence (AGI), a critical task for these models involves interpreting and processing information from multiple image inputs. However, Large Multimodal Models (LMMs) encounter two issues in such scenarios: (1) a lack of fine-grained perception, and (2) a tendency to blend information across multiple images. We first extensively investigate the capability of LMMs to perceive fine-grained visual details when dealing with multiple input images. The research focuses on two aspects: first, image-to-image matching (to evaluate whether LMMs can effectively reason and pair relevant images), and second, multi-image-to-text matching (to assess whether LMMs can accurately capture and summarize detailed image information). We conduct evaluations on a range of both open-source and closed-source large models, including GPT-4V, Gemini, OpenFlamingo, and MMICL. To enhance model performance, we further develop a Contrastive Chain-of-Thought (CoCoT) prompting approach based on multi-input multimodal models. This method requires LMMs to compare the similarities and differences among multiple image inputs, and then guide the models to answer detailed questions about multi-image inputs based on the identified similarities and differences. Our experimental results showcase CoCoT's proficiency in enhancing the multi-image comprehension capabilities of large multimodal models.
1704.04205
Maxim Buzdalov
Margarita Markina and Maxim Buzdalov
Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort
A two-page abstract of this paper will appear in the proceedings companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017)
null
10.1145/3067695.3076074
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many production-grade algorithms benefit from combining an asymptotically efficient algorithm for solving big problem instances, by splitting them into smaller ones, and an asymptotically inefficient algorithm with a very small implementation constant for solving small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of $O(N (\log N)^{M - 1})$, with the Best Order Sort algorithm, which has the runtime of $O(N^2 M)$ but demonstrates the best practical performance out of quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically not worse than of both original algorithms, while for large numbers of points it outperforms them by at least 20%. For smaller numbers of objectives, the speedup can be as large as four times.
[ { "created": "Thu, 13 Apr 2017 16:36:44 GMT", "version": "v1" } ]
2017-04-14
[ [ "Markina", "Margarita", "" ], [ "Buzdalov", "Maxim", "" ] ]
Many production-grade algorithms benefit from combining an asymptotically efficient algorithm for solving big problem instances, by splitting them into smaller ones, and an asymptotically inefficient algorithm with a very small implementation constant for solving small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of $O(N (\log N)^{M - 1})$, with the Best Order Sort algorithm, which has the runtime of $O(N^2 M)$ but demonstrates the best practical performance out of quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically not worse than of both original algorithms, while for large numbers of points it outperforms them by at least 20%. For smaller numbers of objectives, the speedup can be as large as four times.
2402.06782
Akbir M Khan Mr
Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R. Bowman, Tim Rockt\"aschel and Ethan Perez
Debating with More Persuasive LLMs Leads to More Truthful Answers
For code please check: https://github.com/ucl-dark/llm_debate
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Common methods for aligning large language models (LLMs) with desired behaviour heavily rely on human-labelled data. However, as models grow increasingly sophisticated, they will surpass human expertise, and the role of human evaluation will evolve into non-experts overseeing experts. In anticipation of this, we ask: can weaker models assess the correctness of stronger models? We investigate this question in an analogous setting, where stronger models (experts) possess the necessary information to answer questions and weaker models (non-experts) lack this information. The method we evaluate is debate, where two LLM experts each argue for a different answer, and a non-expert selects the answer. We find that debate consistently helps both non-expert models and humans answer questions, achieving 76% and 88% accuracy respectively (naive baselines obtain 48% and 60%). Furthermore, optimising expert debaters for persuasiveness in an unsupervised manner improves non-expert ability to identify the truth in debates. Our results provide encouraging empirical evidence for the viability of aligning models with debate in the absence of ground truth.
[ { "created": "Fri, 9 Feb 2024 21:05:01 GMT", "version": "v1" }, { "created": "Thu, 15 Feb 2024 22:09:52 GMT", "version": "v2" }, { "created": "Thu, 30 May 2024 13:59:34 GMT", "version": "v3" }, { "created": "Thu, 25 Jul 2024 23:32:21 GMT", "version": "v4" } ]
2024-07-29
[ [ "Khan", "Akbir", "" ], [ "Hughes", "John", "" ], [ "Valentine", "Dan", "" ], [ "Ruis", "Laura", "" ], [ "Sachan", "Kshitij", "" ], [ "Radhakrishnan", "Ansh", "" ], [ "Grefenstette", "Edward", "" ], [ "Bowman", "Samuel R.", "" ], [ "Rocktäschel", "Tim", "" ], [ "Perez", "Ethan", "" ] ]
Common methods for aligning large language models (LLMs) with desired behaviour heavily rely on human-labelled data. However, as models grow increasingly sophisticated, they will surpass human expertise, and the role of human evaluation will evolve into non-experts overseeing experts. In anticipation of this, we ask: can weaker models assess the correctness of stronger models? We investigate this question in an analogous setting, where stronger models (experts) possess the necessary information to answer questions and weaker models (non-experts) lack this information. The method we evaluate is debate, where two LLM experts each argue for a different answer, and a non-expert selects the answer. We find that debate consistently helps both non-expert models and humans answer questions, achieving 76% and 88% accuracy respectively (naive baselines obtain 48% and 60%). Furthermore, optimising expert debaters for persuasiveness in an unsupervised manner improves non-expert ability to identify the truth in debates. Our results provide encouraging empirical evidence for the viability of aligning models with debate in the absence of ground truth.
2011.03841
Jean Pablo Vieira de Mello
Jean Pablo Vieira de Mello, Lucas Tabelini, Rodrigo F. Berriel, Thiago M. Paix\~ao, Alberto F. de Souza, Claudine Badue, Nicu Sebe, Thiago Oliveira-Santos
Deep traffic light detection by overlaying synthetic context on arbitrary natural images
null
Computers & Graphics (2020)
10.1016/j.cag.2020.09.012
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Deep neural networks come as an effective solution to many problems associated with autonomous driving. By providing real image samples with traffic context to the network, the model learns to detect and classify elements of interest, such as pedestrians, traffic signs, and traffic lights. However, acquiring and annotating real data can be extremely costly in terms of time and effort. In this context, we propose a method to generate artificial traffic-related training data for deep traffic light detectors. This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds that are not related to the traffic domain. Thus, a large amount of training data can be generated without annotation efforts. Furthermore, it also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state. Experiments show that it is possible to achieve results comparable to those obtained with real training data from the problem domain, yielding an average mAP and an average F1-score which are each nearly 4 p.p. higher than the respective metrics obtained with a real-world reference model.
[ { "created": "Sat, 7 Nov 2020 19:57:22 GMT", "version": "v1" }, { "created": "Tue, 10 Nov 2020 02:30:51 GMT", "version": "v2" }, { "created": "Thu, 10 Dec 2020 22:44:41 GMT", "version": "v3" } ]
2020-12-14
[ [ "de Mello", "Jean Pablo Vieira", "" ], [ "Tabelini", "Lucas", "" ], [ "Berriel", "Rodrigo F.", "" ], [ "Paixão", "Thiago M.", "" ], [ "de Souza", "Alberto F.", "" ], [ "Badue", "Claudine", "" ], [ "Sebe", "Nicu", "" ], [ "Oliveira-Santos", "Thiago", "" ] ]
Deep neural networks come as an effective solution to many problems associated with autonomous driving. By providing real image samples with traffic context to the network, the model learns to detect and classify elements of interest, such as pedestrians, traffic signs, and traffic lights. However, acquiring and annotating real data can be extremely costly in terms of time and effort. In this context, we propose a method to generate artificial traffic-related training data for deep traffic light detectors. This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds that are not related to the traffic domain. Thus, a large amount of training data can be generated without annotation efforts. Furthermore, it also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state. Experiments show that it is possible to achieve results comparable to those obtained with real training data from the problem domain, yielding an average mAP and an average F1-score which are each nearly 4 p.p. higher than the respective metrics obtained with a real-world reference model.
2309.15417
Tobias Weinzierl
Peter Noble, Tobias Weinzierl
Parallel local time stepping for rigid bodies represented by triangulated meshes
null
null
null
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete Element Methods (DEM), i.e., the simulation of many rigid particles, suffer from very stiff differential equations plus multiscale challenges in space and time. The particles move smoothly through space until they interact almost instantaneously due to collisions. Dense particle packings hence require tiny time step sizes, while free particles can advance with large time steps. Admissible time step sizes can span multiple orders of magnitude. We propose an adaptive local time stepping algorithm which identifies clusters of particles that can be updated independently, advances them optimistically and independently in time, determines collision time stamps in space-time such that we maximise the time step sizes used, and resolves the momentum exchange implicitly. It is combined with various acceleration techniques which exploit multiscale geometry representations and multiscale behaviour in time. The collision time stamp detection in space-time in combination with the implicit solve of the actual collision equations avoids that particles get locked into tiny time step sizes, the clustering yields a high concurrency level, and the acceleration techniques plus local time stepping avoid unnecessary computations. This brings a scaling, adaptive time stepping for DEM for real-world challenges into reach.
[ { "created": "Wed, 27 Sep 2023 05:46:57 GMT", "version": "v1" } ]
2023-09-28
[ [ "Noble", "Peter", "" ], [ "Weinzierl", "Tobias", "" ] ]
Discrete Element Methods (DEM), i.e., the simulation of many rigid particles, suffer from very stiff differential equations plus multiscale challenges in space and time. The particles move smoothly through space until they interact almost instantaneously due to collisions. Dense particle packings hence require tiny time step sizes, while free particles can advance with large time steps. Admissible time step sizes can span multiple orders of magnitude. We propose an adaptive local time stepping algorithm which identifies clusters of particles that can be updated independently, advances them optimistically and independently in time, determines collision time stamps in space-time such that we maximise the time step sizes used, and resolves the momentum exchange implicitly. It is combined with various acceleration techniques which exploit multiscale geometry representations and multiscale behaviour in time. The collision time stamp detection in space-time in combination with the implicit solve of the actual collision equations avoids that particles get locked into tiny time step sizes, the clustering yields a high concurrency level, and the acceleration techniques plus local time stepping avoid unnecessary computations. This brings a scaling, adaptive time stepping for DEM for real-world challenges into reach.
2404.09593
Zepeng Ding
Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction
Accepted at LREC-COLING 2024 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relation triple extraction, which outputs a set of triples from long sentences, plays a vital role in knowledge acquisition. Large language models can accurately extract triples from simple sentences through few-shot learning or fine-tuning when given appropriate instructions. However, they often miss out when extracting from complex sentences. In this paper, we design an evaluation-filtering framework that integrates large language models with small models for relational triple extraction tasks. The framework includes an evaluation model that can extract related entity pairs with high precision. We propose a simple labeling principle and a deep neural network to build the model, embedding the outputs as prompts into the extraction process of the large model. We conduct extensive experiments to demonstrate that the proposed method can assist large language models in obtaining more accurate extraction results, especially from complex sentences containing multiple relational triples. Our evaluation model can also be embedded into traditional extraction models to enhance their extraction precision from complex sentences.
[ { "created": "Mon, 15 Apr 2024 09:03:05 GMT", "version": "v1" } ]
2024-04-16
[ [ "Ding", "Zepeng", "" ], [ "Huang", "Wenhao", "" ], [ "Liang", "Jiaqing", "" ], [ "Yang", "Deqing", "" ], [ "Xiao", "Yanghua", "" ] ]
Relation triple extraction, which outputs a set of triples from long sentences, plays a vital role in knowledge acquisition. Large language models can accurately extract triples from simple sentences through few-shot learning or fine-tuning when given appropriate instructions. However, they often miss out when extracting from complex sentences. In this paper, we design an evaluation-filtering framework that integrates large language models with small models for relational triple extraction tasks. The framework includes an evaluation model that can extract related entity pairs with high precision. We propose a simple labeling principle and a deep neural network to build the model, embedding the outputs as prompts into the extraction process of the large model. We conduct extensive experiments to demonstrate that the proposed method can assist large language models in obtaining more accurate extraction results, especially from complex sentences containing multiple relational triples. Our evaluation model can also be embedded into traditional extraction models to enhance their extraction precision from complex sentences.
1711.06837
Iqbal H. Sarker
Iqbal H. Sarker, Muhammad Ashad Kabir, Alan Colman, Jun Han
Identifying Recent Behavioral Data Length in Mobile Phone Log
14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017), Melbourne, Australia
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile phone log data (e.g., phone call log) is not static as it is progressively added to day-by-day according to individual's diverse behaviors with mobile phones. Since human behavior changes over time, the most recent pattern is more interesting and significant than older ones for predicting individual's behavior. The goal of this poster paper is to identify the recent behavioral data length dynamically from the entire phone log for recency-based behavior modeling. To the best of our knowledge, this is the first dynamic recent log-based study that takes into account individual's recent behavioral patterns for modeling their phone call behaviors.
[ { "created": "Sat, 18 Nov 2017 10:10:51 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2017 06:32:43 GMT", "version": "v2" } ]
2017-12-19
[ [ "Sarker", "Iqbal H.", "" ], [ "Kabir", "Muhammad Ashad", "" ], [ "Colman", "Alan", "" ], [ "Han", "Jun", "" ] ]
Mobile phone log data (e.g., phone call log) is not static as it is progressively added to day-by-day according to individual's diverse behaviors with mobile phones. Since human behavior changes over time, the most recent pattern is more interesting and significant than older ones for predicting individual's behavior. The goal of this poster paper is to identify the recent behavioral data length dynamically from the entire phone log for recency-based behavior modeling. To the best of our knowledge, this is the first dynamic recent log-based study that takes into account individual's recent behavioral patterns for modeling their phone call behaviors.
2209.15029
Richard Brath
Richard Brath
Multimodal analogs to infer humanities visualization requirements
6 pages, 11 figures. Visualization for Digital Humanities 2022
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaps and requirements for multi-modal interfaces for humanities can be explored by observing the configuration of real-world environments and the tasks of visitors within them compared to digital environments. Examples include stores, museums, galleries, and stages with tasks similar to visualization tasks such as overview, zoom and detail; multi-dimensional reduction; collaboration; and comparison; with real-world environments offering much richer interactions. Some of these capabilities exist within the technology and visualization research, but are not routinely available in implementations.
[ { "created": "Thu, 29 Sep 2022 18:09:16 GMT", "version": "v1" } ]
2022-10-03
[ [ "Brath", "Richard", "" ] ]
Gaps and requirements for multi-modal interfaces for humanities can be explored by observing the configuration of real-world environments and the tasks of visitors within them compared to digital environments. Examples include stores, museums, galleries, and stages with tasks similar to visualization tasks such as overview, zoom and detail; multi-dimensional reduction; collaboration; and comparison; with real-world environments offering much richer interactions. Some of these capabilities exist within the technology and visualization research, but are not routinely available in implementations.
2301.09567
Mathieu Marquis Bolduc
Mathieu Marquis Bolduc, Hau Nghiep Phan
Rig Inversion by Training a Differentiable Rig Function
Presented at Siggraph Asia '22 in Daegu, South Korea
SA '22: SIGGRAPH Asia 2022 Technical Communications, December 2022, Article No.: 15
10.1145/3550340.3564218
null
cs.GR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rig inversion is the problem of creating a method that can find the rig parameter vector that best approximates a given input mesh. In this paper we propose to solve this problem by first obtaining a differentiable rig function by training a multi-layer perceptron to approximate the rig function. This differentiable rig function can then be used to train a deep learning model of rig inversion.
[ { "created": "Wed, 11 Jan 2023 20:21:58 GMT", "version": "v1" } ]
2023-01-24
[ [ "Bolduc", "Mathieu Marquis", "" ], [ "Phan", "Hau Nghiep", "" ] ]
Rig inversion is the problem of creating a method that can find the rig parameter vector that best approximates a given input mesh. In this paper we propose to solve this problem by first obtaining a differentiable rig function by training a multi-layer perceptron to approximate the rig function. This differentiable rig function can then be used to train a deep learning model of rig inversion.
2407.09984
Yu Zhang
Yu Zhang, Haoyu Zhang, Yongxiang Zou, Houcheng Li and Long Cheng
Stabilizing Dynamic Systems through Neural Network Learning: A Robust Approach
arXiv admin note: text overlap with arXiv:2309.08849
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point-to-point and periodic motions are ubiquitous in the world of robotics. To master these motions, Autonomous Dynamic System (ADS) based algorithms are fundamental in the domain of Learning from Demonstration (LfD). However, these algorithms face the significant challenge of balancing precision in learning with the maintenance of system stability. This paper addresses this challenge by presenting a novel ADS algorithm that leverages neural network technology. The proposed algorithm is designed to distill essential knowledge from demonstration data, ensuring stability during the learning of both point-to-point and periodic motions. For point-to-point motions, a neural Lyapunov function is proposed to align with the provided demonstrations. In the case of periodic motions, the neural Lyapunov function is used with the transversal contraction to ensure that all generated motions converge to a stable limit cycle. The model utilizes a streamlined neural network architecture, adept at achieving dual objectives: optimizing learning accuracy while maintaining global stability. To thoroughly assess the efficacy of the proposed algorithm, rigorous evaluations are conducted using the LASA dataset and a manually designed dataset. These assessments are complemented by empirical validation through robotic experiments, providing robust evidence of the algorithm's performance.
[ { "created": "Sat, 13 Jul 2024 19:13:43 GMT", "version": "v1" } ]
2024-07-16
[ [ "Zhang", "Yu", "" ], [ "Zhang", "Haoyu", "" ], [ "Zou", "Yongxiang", "" ], [ "Li", "Houcheng", "" ], [ "Cheng", "Long", "" ] ]
Point-to-point and periodic motions are ubiquitous in the world of robotics. To master these motions, Autonomous Dynamic System (ADS) based algorithms are fundamental in the domain of Learning from Demonstration (LfD). However, these algorithms face the significant challenge of balancing precision in learning with the maintenance of system stability. This paper addresses this challenge by presenting a novel ADS algorithm that leverages neural network technology. The proposed algorithm is designed to distill essential knowledge from demonstration data, ensuring stability during the learning of both point-to-point and periodic motions. For point-to-point motions, a neural Lyapunov function is proposed to align with the provided demonstrations. In the case of periodic motions, the neural Lyapunov function is used with the transversal contraction to ensure that all generated motions converge to a stable limit cycle. The model utilizes a streamlined neural network architecture, adept at achieving dual objectives: optimizing learning accuracy while maintaining global stability. To thoroughly assess the efficacy of the proposed algorithm, rigorous evaluations are conducted using the LASA dataset and a manually designed dataset. These assessments are complemented by empirical validation through robotic experiments, providing robust evidence of the algorithm's performance.
2102.09009
Louis Tiao
Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos
BORE: Bayesian Optimization by Density-Ratio Estimation
preprint, under review
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Bayesian optimization (BO) is among the most effective and widely-used blackbox optimization methods. BO proposes solutions according to an explore-exploit trade-off criterion encoded in an acquisition function, many of which are computed from the posterior predictive of a probabilistic surrogate model. Prevalent among these is the expected improvement (EI) function. The need to ensure analytical tractability of the predictive often poses limitations that can hinder the efficiency and applicability of BO. In this paper, we cast the computation of EI as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation, and the lesser-known link between density-ratios and EI. By circumventing the tractability constraints, this reformulation provides numerous advantages, not least in terms of expressiveness, versatility, and scalability.
[ { "created": "Wed, 17 Feb 2021 20:04:11 GMT", "version": "v1" } ]
2021-02-19
[ [ "Tiao", "Louis C.", "" ], [ "Klein", "Aaron", "" ], [ "Seeger", "Matthias", "" ], [ "Bonilla", "Edwin V.", "" ], [ "Archambeau", "Cedric", "" ], [ "Ramos", "Fabio", "" ] ]
Bayesian optimization (BO) is among the most effective and widely-used blackbox optimization methods. BO proposes solutions according to an explore-exploit trade-off criterion encoded in an acquisition function, many of which are computed from the posterior predictive of a probabilistic surrogate model. Prevalent among these is the expected improvement (EI) function. The need to ensure analytical tractability of the predictive often poses limitations that can hinder the efficiency and applicability of BO. In this paper, we cast the computation of EI as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation, and the lesser-known link between density-ratios and EI. By circumventing the tractability constraints, this reformulation provides numerous advantages, not least in terms of expressiveness, versatility, and scalability.
1611.01761
Konstantin Turitsyn
Petr Vorobev, Po-Hsu Huang, Mohamed Al Hosani, James L. Kirtley, Konstantin Turitsyn
High-Fidelity Model Order Reduction for Microgrids Stability Assessment
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. While detailed models are available, they are both computationally expensive and cannot provide the insight into the instability mechanisms and factors. In this paper, a computationally efficient and accurate reduced-order model is proposed for modeling the inverter-based microgrids. The main factors affecting microgrid stability are analyzed using the developed reduced-order model and are shown to be unique for the microgrid-based network, which has no direct analogy to large-scale power systems. Particularly, it has been discovered that the stability limits for the conventional droop-based system (omega - P/V - Q) are determined by the ratio of inverter rating to network capacity, leading to a smaller stability region for microgrids with shorter lines. The theoretical derivation has been provided to verify the above investigation based on both the simplified and generalized network configurations. More importantly, the proposed reduced-order model not only maintains the modeling accuracy but also enhances the computation efficiency. Finally, the results are verified with the detailed model via both frequency and time domain analyses.
[ { "created": "Sun, 6 Nov 2016 11:50:06 GMT", "version": "v1" } ]
2016-11-08
[ [ "Vorobev", "Petr", "" ], [ "Huang", "Po-Hsu", "" ], [ "Hosani", "Mohamed Al", "" ], [ "Kirtley", "James L.", "" ], [ "Turitsyn", "Konstantin", "" ] ]
Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. While detailed models are available, they are both computationally expensive and cannot provide insight into the instability mechanisms and factors. In this paper, a computationally efficient and accurate reduced-order model is proposed for modeling inverter-based microgrids. The main factors affecting microgrid stability are analyzed using the developed reduced-order model and are shown to be unique to the microgrid-based network, with no direct analogy in large-scale power systems. In particular, it has been discovered that the stability limits for the conventional droop-based system (omega - P/V - Q) are determined by the ratio of inverter rating to network capacity, leading to a smaller stability region for microgrids with shorter lines. A theoretical derivation is provided to verify this finding for both simplified and generalized network configurations. More importantly, the proposed reduced-order model not only maintains modeling accuracy but also enhances computational efficiency. Finally, the results are verified against the detailed model via both frequency- and time-domain analyses.
1202.5482
Richard McClatchey
Hanene Boussi Rahmouni, Kamran Munir, Mohammed Odeh and Richard McClatchey
Risk-Driven Compliant Access Controls for Clouds
9 pages, 3 figures. International Arab Conference on Information Technology (ACIT 2011) / Riyadh, Saudi Arabia. December 2012
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is widespread agreement that cloud computing has proven cost-cutting and agility benefits. However, security and regulatory compliance issues continue to challenge the wide acceptance of such technology by both social and commercial stakeholders. An important factor behind this is the fact that clouds, and in particular public clouds, are usually deployed and used within broad geographical or even international domains. This implies that the exchange of private and other protected data within the cloud environment would be governed by multiple jurisdictions. These jurisdictions have a great degree of harmonisation; however, they present possible conflicts that are hard to negotiate at run time. So far, significant efforts have been made to deal with regulatory compliance management for large distributed systems. However, measurable solutions are required for the cloud context. In this position paper, we suggest an approach that starts with a conceptual model of explicit regulatory requirements for exchanging private data in a multijurisdictional environment and builds on it to define metrics for non-compliance or, in other terms, risks to compliance. These metrics will be integrated within usual data access-control policies and will be checked at policy analysis time before a decision to allow/deny the data access is made.
[ { "created": "Fri, 24 Feb 2012 15:49:39 GMT", "version": "v1" }, { "created": "Tue, 13 Nov 2012 08:55:44 GMT", "version": "v2" } ]
2012-11-14
[ [ "Rahmouni", "Hanene Boussi", "" ], [ "Munir", "Kamran", "" ], [ "Odeh", "Mohammed", "" ], [ "McClatchey", "Richard", "" ] ]
There is widespread agreement that cloud computing has proven cost-cutting and agility benefits. However, security and regulatory compliance issues continue to challenge the wide acceptance of such technology by both social and commercial stakeholders. An important factor behind this is the fact that clouds, and in particular public clouds, are usually deployed and used within broad geographical or even international domains. This implies that the exchange of private and other protected data within the cloud environment would be governed by multiple jurisdictions. These jurisdictions have a great degree of harmonisation; however, they present possible conflicts that are hard to negotiate at run time. So far, significant efforts have been made to deal with regulatory compliance management for large distributed systems. However, measurable solutions are required for the cloud context. In this position paper, we suggest an approach that starts with a conceptual model of explicit regulatory requirements for exchanging private data in a multijurisdictional environment and builds on it to define metrics for non-compliance or, in other terms, risks to compliance. These metrics will be integrated within usual data access-control policies and will be checked at policy analysis time before a decision to allow/deny the data access is made.
2109.04954
Gobinda Saha
Gobinda Saha and Kaushik Roy
Saliency Guided Experience Packing for Replay in Continual Learning
To appear in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial learning systems aspire to mimic human intelligence by continually learning from a stream of tasks without forgetting past knowledge. One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks. However, the performance of such methods suffers as the size of the memory becomes smaller. In this paper, we propose a new approach for experience replay, where we select the past experiences by looking at the saliency maps which provide visual explanations for the model's decision. Guided by these saliency maps, we pack the memory with only the parts or patches of the input images important for the model's prediction. While learning a new task, we replay these memory patches with appropriate zero-padding to remind the model about its past decisions. We evaluate our algorithm on CIFAR-100, miniImageNet and CUB datasets and report better performance than the state-of-the-art approaches. With qualitative and quantitative analyses, we show that our method captures richer summaries of past experiences without any memory increase, and hence performs well with small episodic memory.
[ { "created": "Fri, 10 Sep 2021 15:54:58 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2022 05:17:55 GMT", "version": "v2" } ]
2022-10-13
[ [ "Saha", "Gobinda", "" ], [ "Roy", "Kaushik", "" ] ]
Artificial learning systems aspire to mimic human intelligence by continually learning from a stream of tasks without forgetting past knowledge. One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks. However, the performance of such methods suffers as the size of the memory becomes smaller. In this paper, we propose a new approach for experience replay, where we select the past experiences by looking at the saliency maps which provide visual explanations for the model's decision. Guided by these saliency maps, we pack the memory with only the parts or patches of the input images important for the model's prediction. While learning a new task, we replay these memory patches with appropriate zero-padding to remind the model about its past decisions. We evaluate our algorithm on CIFAR-100, miniImageNet and CUB datasets and report better performance than the state-of-the-art approaches. With qualitative and quantitative analyses, we show that our method captures richer summaries of past experiences without any memory increase, and hence performs well with small episodic memory.
1506.08907
Sidharth Kashyap N
Sidharth N. Kashyap, Ade J. Fewings, Jay Davies, Ian Morris, Andrew Thomas Thomas Green, Martyn F. Guest
Big Data at HPC Wales
Accepted for publication at the 'Big Data Analytics Workshop' - 2014 http://web.ornl.gov/sci/knowledgediscovery/CloudComputing/PDAC-SC14/BDAC-14-Agenda.htm
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes an automated approach to handling Big Data workloads on HPC systems. We describe a solution that dynamically creates a unified cluster based on YARN in an HPC Environment, without the need to configure and allocate a dedicated Hadoop cluster. The end user can choose to write the solution in any combination of supported frameworks, a solution that scales seamlessly from a few cores to thousands of cores. This coupling of environments creates a platform for applications to utilize the native HPC solutions along with the Big Data Frameworks. The user will be provided with HPC Wales APIs in multiple languages that will let them integrate this flow into their environment, thereby ensuring that the traditional means of HPC access do not become a bottleneck. We describe the behavior of the cluster creation and performance results on Terasort.
[ { "created": "Tue, 30 Jun 2015 00:18:11 GMT", "version": "v1" } ]
2015-07-01
[ [ "Kashyap", "Sidharth N.", "" ], [ "Fewings", "Ade J.", "" ], [ "Davies", "Jay", "" ], [ "Morris", "Ian", "" ], [ "Green", "Andrew Thomas Thomas", "" ], [ "Guest", "Martyn F.", "" ] ]
This paper describes an automated approach to handling Big Data workloads on HPC systems. We describe a solution that dynamically creates a unified cluster based on YARN in an HPC Environment, without the need to configure and allocate a dedicated Hadoop cluster. The end user can choose to write the solution in any combination of supported frameworks, a solution that scales seamlessly from a few cores to thousands of cores. This coupling of environments creates a platform for applications to utilize the native HPC solutions along with the Big Data Frameworks. The user will be provided with HPC Wales APIs in multiple languages that will let them integrate this flow into their environment, thereby ensuring that the traditional means of HPC access do not become a bottleneck. We describe the behavior of the cluster creation and performance results on Terasort.
2205.11710
Davide Modolo
Michael Dorkenwald, Fanyi Xiao, Biagio Brattoli, Joseph Tighe, Davide Modolo
SCVRL: Shuffled Contrastive Video Representation Learning
CVPR 2022 - L3DIVU workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose SCVRL, a novel contrastive-based framework for self-supervised learning for videos. Unlike previous contrastive-learning-based methods that mostly focus on learning visual semantics (e.g., CVRL), SCVRL is capable of learning both semantic and motion patterns. To that end, we reformulate the popular shuffling pretext task within a modern contrastive learning paradigm. We show that our transformer-based network has a natural capacity to learn motion in self-supervised settings and achieves strong performance, outperforming CVRL on four benchmarks.
[ { "created": "Tue, 24 May 2022 01:24:47 GMT", "version": "v1" } ]
2022-05-25
[ [ "Dorkenwald", "Michael", "" ], [ "Xiao", "Fanyi", "" ], [ "Brattoli", "Biagio", "" ], [ "Tighe", "Joseph", "" ], [ "Modolo", "Davide", "" ] ]
We propose SCVRL, a novel contrastive-based framework for self-supervised learning for videos. Unlike previous contrastive-learning-based methods that mostly focus on learning visual semantics (e.g., CVRL), SCVRL is capable of learning both semantic and motion patterns. To that end, we reformulate the popular shuffling pretext task within a modern contrastive learning paradigm. We show that our transformer-based network has a natural capacity to learn motion in self-supervised settings and achieves strong performance, outperforming CVRL on four benchmarks.
2406.05815
Yinan Huang
Yinan Huang, Siqi Miao, Pan Li
What Can We Learn from State Space Models for Machine Learning on Graphs?
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning on graphs has recently found extensive applications across domains. However, the commonly used Message Passing Neural Networks (MPNNs) suffer from limited expressive power and struggle to capture long-range dependencies. Graph transformers offer a strong alternative due to their global attention mechanism, but they come with great computational overheads, especially for large graphs. In recent years, State Space Models (SSMs) have emerged as a compelling approach to replace full attention in transformers for modeling sequential data. They blend the strengths of RNNs and CNNs, offering a) efficient computation, b) the ability to capture long-range dependencies, and c) good generalization across sequences of various lengths. However, extending SSMs to graph-structured data presents unique challenges due to the lack of canonical node ordering in graphs. In this work, we propose Graph State Space Convolution (GSSC) as a principled extension of SSMs to graph-structured data. By leveraging global permutation-equivariant set aggregation and factorizable graph kernels that rely on relative node distances as the convolution kernels, GSSC preserves all three advantages of SSMs. We demonstrate the provably stronger expressiveness of GSSC than MPNNs in counting graph substructures and show its effectiveness across 10 real-world, widely used benchmark datasets, where GSSC achieves the best results on 7 out of 10 datasets, all with significant improvements over the state-of-the-art baselines, and second-best results on the other 3 datasets. Our findings highlight the potential of GSSC as a powerful and scalable model for graph machine learning. Our code is available at https://github.com/Graph-COM/GSSC.
[ { "created": "Sun, 9 Jun 2024 15:03:36 GMT", "version": "v1" } ]
2024-06-11
[ [ "Huang", "Yinan", "" ], [ "Miao", "Siqi", "" ], [ "Li", "Pan", "" ] ]
Machine learning on graphs has recently found extensive applications across domains. However, the commonly used Message Passing Neural Networks (MPNNs) suffer from limited expressive power and struggle to capture long-range dependencies. Graph transformers offer a strong alternative due to their global attention mechanism, but they come with great computational overheads, especially for large graphs. In recent years, State Space Models (SSMs) have emerged as a compelling approach to replace full attention in transformers for modeling sequential data. They blend the strengths of RNNs and CNNs, offering a) efficient computation, b) the ability to capture long-range dependencies, and c) good generalization across sequences of various lengths. However, extending SSMs to graph-structured data presents unique challenges due to the lack of canonical node ordering in graphs. In this work, we propose Graph State Space Convolution (GSSC) as a principled extension of SSMs to graph-structured data. By leveraging global permutation-equivariant set aggregation and factorizable graph kernels that rely on relative node distances as the convolution kernels, GSSC preserves all three advantages of SSMs. We demonstrate the provably stronger expressiveness of GSSC than MPNNs in counting graph substructures and show its effectiveness across 10 real-world, widely used benchmark datasets, where GSSC achieves the best results on 7 out of 10 datasets, all with significant improvements over the state-of-the-art baselines, and second-best results on the other 3 datasets. Our findings highlight the potential of GSSC as a powerful and scalable model for graph machine learning. Our code is available at https://github.com/Graph-COM/GSSC.
1807.01748
Pablo Fernandez Carmona
Pablo Fernandez Carmona, Michael Eichin, Alexandre Mayor, Harald Regele, Martin Grossmann, Damien Charles Weber
Significant acceleration of development by automating quality assurance of a medical particle accelerator safety system using a formal language driven test stand
6 pages, 9 figures, 21st IEEE Real Time Conference, 9-15 June 2018 Colonial Williamsburg, USA
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the Centre for Proton Therapy at the Paul Scherrer Institute, cancer patients are treated with a fixed beamline and in two gantries for ocular and non-ocular malignancies, respectively. For the installation of a third gantry, a new patient safety system (PaSS) was developed and is sequentially being rolled out to update the existing areas. The aim of PaSS is to interrupt the treatment whenever any sub-system detects a hazardous condition. To ensure correct treatment delivery, this system needs to be thoroughly tested as part of the regular quality assurance (QA) protocols as well as after any upgrade. In the legacy safety systems, unit testing required extensive resources: two weeks of work per area in the laboratory in addition to QA beam time. To significantly reduce this time, an automated PaSS test stand for unit testing was developed based on a PXI chassis with virtually unlimited IOs that are synchronously stimulated or sampled at 1 MHz. It can emulate the rest of the facility using adapters to connect each type of interface. With it, PaSS can be tested under arbitrary conditions. A VHDL-based formal language was developed to describe stimuli, expected behaviour and specific measurements, interpreted by a LabVIEW runtime environment. This article describes the tools and methodology being applied for unit testing and QA release tests for the new PaSS. It shows how automation and formalization made possible an increase in test coverage while significantly cutting down the laboratory testing time and the facility's beam usage.
[ { "created": "Sat, 23 Jun 2018 12:43:04 GMT", "version": "v1" } ]
2018-07-06
[ [ "Carmona", "Pablo Fernandez", "" ], [ "Eichin", "Michael", "" ], [ "Mayor", "Alexandre", "" ], [ "Regele", "Harald", "" ], [ "Grossmann", "Martin", "" ], [ "Weber", "Damien Charles", "" ] ]
At the Centre for Proton Therapy at the Paul Scherrer Institute, cancer patients are treated with a fixed beamline and in two gantries for ocular and non-ocular malignancies, respectively. For the installation of a third gantry, a new patient safety system (PaSS) was developed and is sequentially being rolled out to update the existing areas. The aim of PaSS is to interrupt the treatment whenever any sub-system detects a hazardous condition. To ensure correct treatment delivery, this system needs to be thoroughly tested as part of the regular quality assurance (QA) protocols as well as after any upgrade. In the legacy safety systems, unit testing required extensive resources: two weeks of work per area in the laboratory in addition to QA beam time. To significantly reduce this time, an automated PaSS test stand for unit testing was developed based on a PXI chassis with virtually unlimited IOs that are synchronously stimulated or sampled at 1 MHz. It can emulate the rest of the facility using adapters to connect each type of interface. With it, PaSS can be tested under arbitrary conditions. A VHDL-based formal language was developed to describe stimuli, expected behaviour and specific measurements, interpreted by a LabVIEW runtime environment. This article describes the tools and methodology being applied for unit testing and QA release tests for the new PaSS. It shows how automation and formalization made possible an increase in test coverage while significantly cutting down the laboratory testing time and the facility's beam usage.
2210.10618
Chen Tang
Henglin Huang, Chen Tang, Tyler Loakman, Frank Guerin and Chenghua Lin
Improving Chinese Story Generation via Awareness of Syntactic Dependencies and Semantics
null
AACL 2022
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Story generation aims to generate a long narrative conditioned on a given input. In spite of the success of prior works with the application of pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives. We hypothesise that this stems from ambiguity in syntactically parsing the Chinese language, which does not have explicit delimiters for word segmentation. Consequently, neural models suffer from the inefficient capturing of features in Chinese narratives. In this paper, we present a new generation framework that enhances the feature capturing mechanism by informing the generation model of dependencies between words and additionally augmenting the semantic representation learning through synonym denoising training. We conduct a range of experiments, and the results demonstrate that our framework outperforms the state-of-the-art Chinese generation models on all evaluation metrics, demonstrating the benefits of enhanced dependency and semantic representation learning.
[ { "created": "Wed, 19 Oct 2022 15:01:52 GMT", "version": "v1" } ]
2022-10-20
[ [ "Huang", "Henglin", "" ], [ "Tang", "Chen", "" ], [ "Loakman", "Tyler", "" ], [ "Guerin", "Frank", "" ], [ "Lin", "Chenghua", "" ] ]
Story generation aims to generate a long narrative conditioned on a given input. In spite of the success of prior works with the application of pre-trained models, current neural models for Chinese stories still struggle to generate high-quality long text narratives. We hypothesise that this stems from ambiguity in syntactically parsing the Chinese language, which does not have explicit delimiters for word segmentation. Consequently, neural models suffer from the inefficient capturing of features in Chinese narratives. In this paper, we present a new generation framework that enhances the feature capturing mechanism by informing the generation model of dependencies between words and additionally augmenting the semantic representation learning through synonym denoising training. We conduct a range of experiments, and the results demonstrate that our framework outperforms the state-of-the-art Chinese generation models on all evaluation metrics, demonstrating the benefits of enhanced dependency and semantic representation learning.
1905.08388
Angel Beltre
Pankaj Saha, Angel Beltre, Madhusudhan Govindaraju
Exploring the Fairness and Resource Distribution in an Apache Mesos Environment
null
2018 IEEE 11th International Conference on Cloud Computing (CLOUD)
10.1109/CLOUD.2018.00061
null
cs.PF cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apache Mesos, a cluster-wide resource manager, is widely deployed at massive scale in several Clouds and Data Centers. Mesos aims to provide high cluster utilization via fine-grained resource co-scheduling and resource fairness among multiple users through Dominant Resource Fairness (DRF) based allocation. DRF takes into account different resource types (CPU, Memory, Disk I/O) requested by each application and determines the share of each cluster resource that could be allocated to the applications. Mesos has adopted a two-level scheduling policy: (1) DRF to allocate resources to competing frameworks and (2) task-level scheduling by each framework for the resources allocated during the previous step. We have conducted experiments in a local Mesos cluster when used with frameworks such as Apache Aurora, Marathon, and our own framework Scylla, to study resource fairness and cluster utilization. Experimental results show how informed decisions regarding the second-level scheduling policy of frameworks and attributes like offer holding period, offer refusal cycle and task arrival rate can reduce unfair resource distribution. The Bin-Packing scheduling policy on Scylla with Marathon can reduce unfair allocation from 38\% to 3\%. By reducing unused free resources in offers, we bring down the unfairness from 90\% to 28\%. We also show the effect of task arrival rate in reducing the unfairness from 23\% to 7\%.
[ { "created": "Tue, 21 May 2019 00:00:07 GMT", "version": "v1" } ]
2019-05-22
[ [ "Saha", "Pankaj", "" ], [ "Beltre", "Angel", "" ], [ "Govindaraju", "Madhusudhan", "" ] ]
Apache Mesos, a cluster-wide resource manager, is widely deployed at massive scale in several Clouds and Data Centers. Mesos aims to provide high cluster utilization via fine-grained resource co-scheduling and resource fairness among multiple users through Dominant Resource Fairness (DRF) based allocation. DRF takes into account different resource types (CPU, Memory, Disk I/O) requested by each application and determines the share of each cluster resource that could be allocated to the applications. Mesos has adopted a two-level scheduling policy: (1) DRF to allocate resources to competing frameworks and (2) task-level scheduling by each framework for the resources allocated during the previous step. We have conducted experiments in a local Mesos cluster when used with frameworks such as Apache Aurora, Marathon, and our own framework Scylla, to study resource fairness and cluster utilization. Experimental results show how informed decisions regarding the second-level scheduling policy of frameworks and attributes like offer holding period, offer refusal cycle and task arrival rate can reduce unfair resource distribution. The Bin-Packing scheduling policy on Scylla with Marathon can reduce unfair allocation from 38\% to 3\%. By reducing unused free resources in offers, we bring down the unfairness from 90\% to 28\%. We also show the effect of task arrival rate in reducing the unfairness from 23\% to 7\%.
2112.10332
Limeng Dong
Limeng Dong, Hui-Ming Wang, Jiale Bai
Active Reconfigurable Intelligent Surface Aided Secure Transmission
Accepted by IEEE TVT
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconfigurable Intelligent Surface (RIS) has drawn great attention in academia and industry due to its passive and low-power-consumption nature, and has recently been used in physical layer security to enhance secure transmission. However, due to the double fading effect on the reflecting channel link between transmitter and user, RIS achieves only a limited secrecy performance gain compared with the case without RIS. In this correspondence, we propose a novel active RIS design to enhance secure wireless transmission, where the reflecting elements in the RIS not only adjust the phase shift but also amplify the amplitude of the signals. To solve the non-convex secrecy rate optimization based on this design, an efficient alternating optimization algorithm is proposed to jointly optimize the beamformer at the transmitter and the reflecting coefficient matrix at the RIS. Simulation results show that with the aid of the active RIS design, the impact of the double fading effect can be effectively relieved, resulting in a significantly higher secrecy performance gain compared with existing solutions with passive RIS and without RIS.
[ { "created": "Mon, 20 Dec 2021 04:34:26 GMT", "version": "v1" } ]
2021-12-21
[ [ "Dong", "Limeng", "" ], [ "Wang", "Hui-Ming", "" ], [ "Bai", "Jiale", "" ] ]
Reconfigurable Intelligent Surface (RIS) has drawn great attention in academia and industry due to its passive and low-power-consumption nature, and has recently been used in physical layer security to enhance secure transmission. However, due to the double fading effect on the reflecting channel link between transmitter and user, RIS achieves only a limited secrecy performance gain compared with the case without RIS. In this correspondence, we propose a novel active RIS design to enhance secure wireless transmission, where the reflecting elements in the RIS not only adjust the phase shift but also amplify the amplitude of the signals. To solve the non-convex secrecy rate optimization based on this design, an efficient alternating optimization algorithm is proposed to jointly optimize the beamformer at the transmitter and the reflecting coefficient matrix at the RIS. Simulation results show that with the aid of the active RIS design, the impact of the double fading effect can be effectively relieved, resulting in a significantly higher secrecy performance gain compared with existing solutions with passive RIS and without RIS.
2101.04799
Ananda Samajdar
Ananda Samajdar, Michael Pellauer, Tushar Krishna
Self-Adaptive Reconfigurable Arrays (SARA): Using ML to Assist Scaling GEMM Acceleration
null
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
With increasing diversity in Deep Neural Network (DNN) models in terms of layer shapes and sizes, the research community has been investigating flexible/reconfigurable accelerator substrates. This line of research has opened up two challenges. The first is to determine the appropriate amount of flexibility within an accelerator array that can trade off the performance benefits against the area overheads of the reconfigurability. The second is being able to determine the right configuration of the array for the current DNN model and/or layer and reconfigure the accelerator at runtime. This work introduces a new class of accelerators that we call Self-Adaptive Reconfigurable Array (SARA). SARA architectures comprise both a reconfigurable array and a hardware unit capable of determining an optimized configuration for the array at runtime. We demonstrate an instance of SARA with an accelerator we call SAGAR, which introduces a novel reconfigurable systolic array that can be configured to work as a distributed collection of smaller arrays of various sizes or as a single array with flexible aspect ratios. We also develop a novel recommendation neural network called ADAPTNET which recommends an array configuration and dataflow for the current layer parameters. ADAPTNET runs on integrated custom hardware, ADAPTNETX, which executes ADAPTNET at runtime and reconfigures the array, making the entire accelerator self-sufficient. SAGAR is capable of providing the same mapping flexibility as a collection of 1024 4x4 arrays working as a distributed system while achieving 3.5x more power efficiency and 3.2x higher compute density. Furthermore, the runtime achieved on the recommended parameters from ADAPTNET is 99.93% of the best achievable runtime.
[ { "created": "Tue, 12 Jan 2021 23:20:23 GMT", "version": "v1" }, { "created": "Sat, 23 Apr 2022 18:33:06 GMT", "version": "v2" } ]
2022-04-26
[ [ "Samajdar", "Ananda", "" ], [ "Pellauer", "Michael", "" ], [ "Krishna", "Tushar", "" ] ]
With increasing diversity in Deep Neural Network (DNN) models in terms of layer shapes and sizes, the research community has been investigating flexible/reconfigurable accelerator substrates. This line of research has opened up two challenges. The first is to determine the appropriate amount of flexibility within an accelerator array that can trade off the performance benefits against the area overheads of the reconfigurability. The second is being able to determine the right configuration of the array for the current DNN model and/or layer and reconfigure the accelerator at runtime. This work introduces a new class of accelerators that we call Self-Adaptive Reconfigurable Array (SARA). SARA architectures comprise both a reconfigurable array and a hardware unit capable of determining an optimized configuration for the array at runtime. We demonstrate an instance of SARA with an accelerator we call SAGAR, which introduces a novel reconfigurable systolic array that can be configured to work as a distributed collection of smaller arrays of various sizes or as a single array with flexible aspect ratios. We also develop a novel recommendation neural network called ADAPTNET which recommends an array configuration and dataflow for the current layer parameters. ADAPTNET runs on integrated custom hardware, ADAPTNETX, which executes ADAPTNET at runtime and reconfigures the array, making the entire accelerator self-sufficient. SAGAR is capable of providing the same mapping flexibility as a collection of 1024 4x4 arrays working as a distributed system while achieving 3.5x more power efficiency and 3.2x higher compute density. Furthermore, the runtime achieved on the recommended parameters from ADAPTNET is 99.93% of the best achievable runtime.
2406.00723
Hao Wu
Hao Wu
Throughput and Link Utilization Improvement in Satellite Networks: A Learning-Enabled Approach
5 pages, 6 figures
null
null
null
cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Satellite networks provide communication services to global users with an uneven geographical distribution. In densely populated regions, inter-satellite links (ISLs) often experience congestion, blocking traffic from other links and leading to low link utilization and throughput. In such cases, delay-tolerant traffic can be withheld by moving satellites and carried to navigate congested areas, thereby mitigating link congestion in densely populated regions. Through rational store-and-forward decision-making, link utilization and throughput can be improved. Building on this foundation, this letter focuses on learning-based decision-making for satellite traffic. First, a link load prediction method based on topology isomorphism is proposed. Then, a Markov decision process (MDP) is formulated to model store-and-forward decision-making. To generate store-and-forward policies, we propose reinforcement learning algorithms based on value iteration and Q-Learning. Simulation results demonstrate that the proposed method improves throughput and link utilization while consuming less than 20$\%$ of the time required by constraint-based routing.
[ { "created": "Sun, 2 Jun 2024 12:16:08 GMT", "version": "v1" } ]
2024-06-04
[ [ "Wu", "Hao", "" ] ]
Satellite networks provide communication services to global users with an uneven geographical distribution. In densely populated regions, inter-satellite links (ISLs) often experience congestion, blocking traffic from other links and leading to low link utilization and throughput. In such cases, delay-tolerant traffic can be withheld by moving satellites and carried to navigate congested areas, thereby mitigating link congestion in densely populated regions. Through rational store-and-forward decision-making, link utilization and throughput can be improved. Building on this foundation, this letter focuses on learning-based decision-making for satellite traffic. First, a link load prediction method based on topology isomorphism is proposed. Then, a Markov decision process (MDP) is formulated to model store-and-forward decision-making. To generate store-and-forward policies, we propose reinforcement learning algorithms based on value iteration and Q-Learning. Simulation results demonstrate that the proposed method improves throughput and link utilization while consuming less than 20$\%$ of the time required by constraint-based routing.
2302.12250
Dayal Singh Kalra
Dayal Singh Kalra and Maissam Barkeshli
Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width
Accepted at NeurIPS 2023 (camera-ready version): Additional results added for cross-entropy loss and effect on network output at initialization; 10+32 pages, 8+35 figures
null
null
null
cs.LG cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
We systematically analyze optimization dynamics in deep neural networks (DNNs) trained with stochastic gradient descent (SGD) and study the effect of learning rate $\eta$, depth $d$, and width $w$ of the neural network. By analyzing the maximum eigenvalue $\lambda^H_t$ of the Hessian of the loss, which is a measure of sharpness of the loss landscape, we find that the dynamics can show four distinct regimes: (i) an early time transient regime, (ii) an intermediate saturation regime, (iii) a progressive sharpening regime, and (iv) a late time ``edge of stability" regime. The early and intermediate regimes (i) and (ii) exhibit a rich phase diagram depending on $\eta \equiv c / \lambda_0^H $, $d$, and $w$. We identify several critical values of $c$, which separate qualitatively distinct phenomena in the early time dynamics of training loss and sharpness. Notably, we discover the opening up of a ``sharpness reduction" phase, where sharpness decreases at early times, as $d$ and $1/w$ are increased.
[ { "created": "Thu, 23 Feb 2023 18:59:30 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 17:59:46 GMT", "version": "v2" } ]
2023-10-25
[ [ "Kalra", "Dayal Singh", "" ], [ "Barkeshli", "Maissam", "" ] ]
We systematically analyze optimization dynamics in deep neural networks (DNNs) trained with stochastic gradient descent (SGD) and study the effect of learning rate $\eta$, depth $d$, and width $w$ of the neural network. By analyzing the maximum eigenvalue $\lambda^H_t$ of the Hessian of the loss, which is a measure of sharpness of the loss landscape, we find that the dynamics can show four distinct regimes: (i) an early time transient regime, (ii) an intermediate saturation regime, (iii) a progressive sharpening regime, and (iv) a late time ``edge of stability" regime. The early and intermediate regimes (i) and (ii) exhibit a rich phase diagram depending on $\eta \equiv c / \lambda_0^H $, $d$, and $w$. We identify several critical values of $c$, which separate qualitatively distinct phenomena in the early time dynamics of training loss and sharpness. Notably, we discover the opening up of a ``sharpness reduction" phase, where sharpness decreases at early times, as $d$ and $1/w$ are increased.
1908.00310
Buddhika Nettasinghe
Buddhika Nettasinghe and Vikram Krishnamurthy
Maximum Likelihood Estimation of Power-law Degree Distributions via Friendship Paradox based Sampling
Accepted to ACM Transactions on Knowledge Discovery from Data (2021)
null
null
null
cs.SI physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the problem of estimating a power-law degree distribution of an undirected network using sampled data. Although power-law degree distributions are ubiquitous in nature, the widely used parametric methods for estimating them (e.g. linear regression on double-logarithmic axes, maximum likelihood estimation with uniformly sampled nodes) suffer from the large variance introduced by the lack of data-points from the tail portion of the power-law degree distribution. As a solution, we present a novel maximum likelihood estimation approach that exploits the friendship paradox to sample more efficiently from the tail of the degree distribution. We analytically show that the proposed method results in a smaller bias, variance and a Cramer-Rao lower bound compared to the vanilla maximum-likelihood estimate obtained with uniformly sampled nodes (which is the most commonly used method in literature). Detailed numerical and empirical results are presented to illustrate the performance of the proposed method under different conditions and how it compares with alternative methods. We also show that the proposed method and its desirable properties (i.e. smaller bias, variance and Cramer-Rao lower bound compared to vanilla method based on uniform samples) extend to parametric degree distributions other than the power-law such as exponential degree distributions as well. All the numerical and empirical results are reproducible and the code is publicly available on Github.
[ { "created": "Thu, 1 Aug 2019 10:29:14 GMT", "version": "v1" }, { "created": "Wed, 4 Sep 2019 01:00:12 GMT", "version": "v2" }, { "created": "Mon, 28 Dec 2020 18:50:43 GMT", "version": "v3" }, { "created": "Sun, 7 Mar 2021 19:42:49 GMT", "version": "v4" } ]
2021-03-09
[ [ "Nettasinghe", "Buddhika", "" ], [ "Krishnamurthy", "Vikram", "" ] ]
This paper considers the problem of estimating a power-law degree distribution of an undirected network using sampled data. Although power-law degree distributions are ubiquitous in nature, the widely used parametric methods for estimating them (e.g. linear regression on double-logarithmic axes, maximum likelihood estimation with uniformly sampled nodes) suffer from the large variance introduced by the lack of data-points from the tail portion of the power-law degree distribution. As a solution, we present a novel maximum likelihood estimation approach that exploits the friendship paradox to sample more efficiently from the tail of the degree distribution. We analytically show that the proposed method results in a smaller bias, variance and a Cramer-Rao lower bound compared to the vanilla maximum-likelihood estimate obtained with uniformly sampled nodes (which is the most commonly used method in literature). Detailed numerical and empirical results are presented to illustrate the performance of the proposed method under different conditions and how it compares with alternative methods. We also show that the proposed method and its desirable properties (i.e. smaller bias, variance and Cramer-Rao lower bound compared to vanilla method based on uniform samples) extend to parametric degree distributions other than the power-law such as exponential degree distributions as well. All the numerical and empirical results are reproducible and the code is publicly available on Github.
2209.09580
Pierre-Louis Roman
Martina Camaioni, Rachid Guerraoui, Jovan Komatovic, Matteo Monti, Pierre-Louis Roman, Manuel Vidigueira, Gauthier Voron
Carbon: Scaling Trusted Payments with Untrusted Machines
This is an extended version of the paper appearing at IEEE TDSC 2024 under DOI 10.1109/TDSC.2024.3428617 with formal definitions, pseudocode, and proofs added in appendices; these appendices correspond to the previous version of this paper on arXiv (arXiv:2209.09580v2)
null
10.1109/TDSC.2024.3428617
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
This paper introduces Carbon, a high-throughput system enabling asynchronous (safe) and consensus-free (efficient) payments and votes within a dynamic set of clients. Carbon is operated by a dynamic set of validators that may be reconfigured asynchronously, offering its clients eclipse resistance as well as lightweight bootstrap. Carbon offers clients the ability to select validators by voting them in and out of the system thanks to its novel asynchronous and stake-less voting mechanism. Carbon relies on an asynchronous and deterministic implementation of Byzantine reliable broadcast that uniquely leverages a permissionless set of untrusted servers, brokers, to slash the cost of client authentication inherent to Byzantine fault tolerant systems. Carbon is able to sustain a throughput of one million payments per second in a geo-distributed environment, outperforming the state of the art by three orders of magnitude with equivalent latencies.
[ { "created": "Tue, 20 Sep 2022 09:50:44 GMT", "version": "v1" }, { "created": "Fri, 30 Sep 2022 09:26:59 GMT", "version": "v2" }, { "created": "Thu, 15 Aug 2024 16:12:53 GMT", "version": "v3" } ]
2024-08-16
[ [ "Camaioni", "Martina", "" ], [ "Guerraoui", "Rachid", "" ], [ "Komatovic", "Jovan", "" ], [ "Monti", "Matteo", "" ], [ "Roman", "Pierre-Louis", "" ], [ "Vidigueira", "Manuel", "" ], [ "Voron", "Gauthier", "" ] ]
This paper introduces Carbon, a high-throughput system enabling asynchronous (safe) and consensus-free (efficient) payments and votes within a dynamic set of clients. Carbon is operated by a dynamic set of validators that may be reconfigured asynchronously, offering its clients eclipse resistance as well as lightweight bootstrap. Carbon offers clients the ability to select validators by voting them in and out of the system thanks to its novel asynchronous and stake-less voting mechanism. Carbon relies on an asynchronous and deterministic implementation of Byzantine reliable broadcast that uniquely leverages a permissionless set of untrusted servers, brokers, to slash the cost of client authentication inherent to Byzantine fault tolerant systems. Carbon is able to sustain a throughput of one million payments per second in a geo-distributed environment, outperforming the state of the art by three orders of magnitude with equivalent latencies.
1305.5228
Richard Mayr
Parosh Aziz Abdulla, Lorenzo Clemente, Richard Mayr, Sven Sandberg
Stochastic Parity Games on Lossy Channel Systems
19 pages
null
null
EDI-INF-RR-1416
cs.GT cs.LO
http://creativecommons.org/licenses/by/3.0/
We give an algorithm for solving stochastic parity games with almost-sure winning conditions on lossy channel systems, for the case where the players are restricted to finite-memory strategies. First, we describe a general framework, where we consider the class of 2.5-player games with almost-sure parity winning conditions on possibly infinite game graphs, assuming that the game contains a finite attractor. An attractor is a set of states (not necessarily absorbing) that is almost surely re-visited regardless of the players' decisions. We present a scheme that characterizes the set of winning states for each player. Then, we instantiate this scheme to obtain an algorithm for stochastic game lossy channel systems.
[ { "created": "Wed, 22 May 2013 18:43:54 GMT", "version": "v1" }, { "created": "Thu, 13 Jun 2013 10:17:22 GMT", "version": "v2" } ]
2013-06-14
[ [ "Abdulla", "Parosh Aziz", "" ], [ "Clemente", "Lorenzo", "" ], [ "Mayr", "Richard", "" ], [ "Sandberg", "Sven", "" ] ]
We give an algorithm for solving stochastic parity games with almost-sure winning conditions on lossy channel systems, for the case where the players are restricted to finite-memory strategies. First, we describe a general framework, where we consider the class of 2.5-player games with almost-sure parity winning conditions on possibly infinite game graphs, assuming that the game contains a finite attractor. An attractor is a set of states (not necessarily absorbing) that is almost surely re-visited regardless of the players' decisions. We present a scheme that characterizes the set of winning states for each player. Then, we instantiate this scheme to obtain an algorithm for stochastic game lossy channel systems.
1012.2524
Jaydip Sen
Jaydip Sen, Munir Sayyad, and Basavaraj Hooli
Convergence and Next Generation Networks
67 pages, 11 figures, 4 tables. Book chapter published in the book "Future Trends and Challenges for ICT Standardization", pp. 107 - 192
Book: Future Trends and Challenges for ICT Standardization. Editor: Ramjee Prasad, River Publishers, Aalborg, Denmark, 2010
10.13140/RG.2.1.2986.2640
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The communications sector is undergoing significant changes, with the emergence of a number of platforms available to provide a different range of services. Some of these platforms are complementary to each other, while others are competitive, or can provide a valid substitute for some of the services provided. Up till now, the most important communications platform in most of the developing countries has been the public switched telecommunication network (PSTN) which provides access to all households and buildings. This universality in providing access has also meant that the network has generally been designated as one for universal service. This chapter focuses on the area where the most significant changes are taking place in the communication sector. The objective of this chapter is neither to give an overview of all communication platforms, nor is it aimed to assess the relative extent to which different platforms complement or compete with each other. The central theme of this chapter is to examine the developments in what is commonly referred to as next generation access networks and next generation core networks and their role in convergence.
[ { "created": "Sun, 12 Dec 2010 08:28:25 GMT", "version": "v1" } ]
2021-09-07
[ [ "Sen", "Jaydip", "" ], [ "Sayyad", "Munir", "" ], [ "Hooli", "Basavaraj", "" ] ]
The communications sector is undergoing significant changes, with the emergence of a number of platforms available to provide a different range of services. Some of these platforms are complementary to each other, while others are competitive, or can provide a valid substitute for some of the services provided. Up till now, the most important communications platform in most of the developing countries has been the public switched telecommunication network (PSTN) which provides access to all households and buildings. This universality in providing access has also meant that the network has generally been designated as one for universal service. This chapter focuses on the area where the most significant changes are taking place in the communication sector. The objective of this chapter is neither to give an overview of all communication platforms, nor is it aimed to assess the relative extent to which different platforms complement or compete with each other. The central theme of this chapter is to examine the developments in what is commonly referred to as next generation access networks and next generation core networks and their role in convergence.
1705.03529
Andre Puschmann
Andr\'e Puschmann, Paul Sutton, Ismael Gomez
Implementing NB-IoT in Software - Experiences Using the srsLTE Library
Appears in the proceedings of the Wireless Innovation Forum Europe 2017
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
NB-IoT is the 3GPP standard for machine-to-machine communications, recently finalized within LTE release 13. This article gives a brief overview of this new LTE-based radio access technology and presents an implementation developed using the srsLTE software radio suite. We also carry out a performance study in which we compare a theoretical analysis with experimental results obtained in our testbed. Furthermore, we provide some interesting details and share our experience in exploring one of the first commercial NB-IoT deployments worldwide.
[ { "created": "Tue, 9 May 2017 20:28:30 GMT", "version": "v1" } ]
2017-05-11
[ [ "Puschmann", "André", "" ], [ "Sutton", "Paul", "" ], [ "Gomez", "Ismael", "" ] ]
NB-IoT is the 3GPP standard for machine-to-machine communications, recently finalized within LTE release 13. This article gives a brief overview of this new LTE-based radio access technology and presents an implementation developed using the srsLTE software radio suite. We also carry out a performance study in which we compare a theoretical analysis with experimental results obtained in our testbed. Furthermore, we provide some interesting details and share our experience in exploring one of the first commercial NB-IoT deployments worldwide.
2011.11635
Sebastian Friedemann
Sebastian Friedemann (DATAMOVE), Bruno Raffin (DATAMOVE)
An elastic framework for ensemble-based large-scale data assimilation
null
null
null
null
cs.CE cs.DC physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prediction of chaotic systems relies on a floating fusion of sensor data (observations) with a numerical model to decide on a good system trajectory and to compensate nonlinear feedback effects. Ensemble-based data assimilation (DA) is a major method for this concern depending on propagating an ensemble of perturbed model realizations. In this paper we develop an elastic, online, fault-tolerant and modular framework called Melissa-DA for large-scale ensemble-based DA. Melissa-DA allows elastic addition or removal of compute resources for state propagation at runtime. Dynamic load balancing based on list scheduling ensures efficient execution. Online processing of the data produced by ensemble members enables avoiding the I/O bottleneck of file-based approaches. Our implementation embeds the PDAF parallel DA engine, enabling the use of various DA methods. Melissa-DA can support extra ensemble-based DA methods by implementing the transformation of member background states into analysis states. Experiments confirm the excellent scalability of Melissa-DA, running on up to 16,240 cores, to propagate 16,384 members for a regional hydrological critical zone assimilation relying on the ParFlow model on a domain with about 4 M grid cells.
[ { "created": "Sat, 21 Nov 2020 11:23:43 GMT", "version": "v1" }, { "created": "Wed, 25 Nov 2020 08:23:29 GMT", "version": "v2" } ]
2020-11-26
[ [ "Friedemann", "Sebastian", "", "DATAMOVE" ], [ "Raffin", "Bruno", "", "DATAMOVE" ] ]
Prediction of chaotic systems relies on a floating fusion of sensor data (observations) with a numerical model to decide on a good system trajectory and to compensate nonlinear feedback effects. Ensemble-based data assimilation (DA) is a major method for this concern depending on propagating an ensemble of perturbed model realizations. In this paper we develop an elastic, online, fault-tolerant and modular framework called Melissa-DA for large-scale ensemble-based DA. Melissa-DA allows elastic addition or removal of compute resources for state propagation at runtime. Dynamic load balancing based on list scheduling ensures efficient execution. Online processing of the data produced by ensemble members enables avoiding the I/O bottleneck of file-based approaches. Our implementation embeds the PDAF parallel DA engine, enabling the use of various DA methods. Melissa-DA can support extra ensemble-based DA methods by implementing the transformation of member background states into analysis states. Experiments confirm the excellent scalability of Melissa-DA, running on up to 16,240 cores, to propagate 16,384 members for a regional hydrological critical zone assimilation relying on the ParFlow model on a domain with about 4 M grid cells.
1905.08204
Duncan Brown
Karan Vahi, Mats Rynge, George Papadimitriou, Duncan A. Brown, Rajiv Mayani, Rafael Ferreira da Silva, Ewa Deelman, Anirban Mandal, Eric Lyons, Michael Zink
Custom Execution Environments with Containers in Pegasus-enabled Scientific Workflows
10 pages, 7 figures, submitted to eScience 2019
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Science reproducibility is a cornerstone feature in scientific workflows. In most cases, this has been implemented as a way to exactly reproduce the computational steps taken to reach the final results. While these steps are often completely described, including the input parameters, datasets, and codes, the environment in which these steps are executed is only described at a higher level with endpoints and operating system name and versions. Though this may be sufficient for reproducibility in the short term, systems evolve and are replaced over time, breaking the underlying workflow reproducibility. A natural solution to this problem is containers, as they are well defined, have a lifetime independent of the underlying system, and can be user-controlled so that they can provide custom environments if needed. This paper highlights some unique challenges that may arise when using containers in distributed scientific workflows. Further, this paper explores how the Pegasus Workflow Management System implements container support to address such challenges.
[ { "created": "Mon, 20 May 2019 16:41:20 GMT", "version": "v1" } ]
2019-05-21
[ [ "Vahi", "Karan", "" ], [ "Rynge", "Mats", "" ], [ "Papadimitriou", "George", "" ], [ "Brown", "Duncan A.", "" ], [ "Mayani", "Rajiv", "" ], [ "da Silva", "Rafael Ferreira", "" ], [ "Deelman", "Ewa", "" ], [ "Mandal", "Anirban", "" ], [ "Lyons", "Eric", "" ], [ "Zink", "Michael", "" ] ]
Science reproducibility is a cornerstone feature in scientific workflows. In most cases, this has been implemented as a way to exactly reproduce the computational steps taken to reach the final results. While these steps are often completely described, including the input parameters, datasets, and codes, the environment in which these steps are executed is only described at a higher level with endpoints and operating system name and versions. Though this may be sufficient for reproducibility in the short term, systems evolve and are replaced over time, breaking the underlying workflow reproducibility. A natural solution to this problem is containers, as they are well defined, have a lifetime independent of the underlying system, and can be user-controlled so that they can provide custom environments if needed. This paper highlights some unique challenges that may arise when using containers in distributed scientific workflows. Further, this paper explores how the Pegasus Workflow Management System implements container support to address such challenges.
2404.10289
Max Kreminski
Max Kreminski
The Dearth of the Author in AI-Supported Writing
Published as a workshop paper at the In2Writing workshop at CHI 2024
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
We diagnose and briefly discuss the dearth of the author: a condition that arises when AI-based creativity support tools for writing allow users to produce large amounts of text without making a commensurate number of creative decisions, resulting in output that is sparse in expressive intent. We argue that the dearth of the author helps to explain a number of recurring difficulties and anxieties around AI-based writing support tools, but that it also suggests an ambitious new goal for AI-based CSTs.
[ { "created": "Tue, 16 Apr 2024 05:23:03 GMT", "version": "v1" } ]
2024-04-17
[ [ "Kreminski", "Max", "" ] ]
We diagnose and briefly discuss the dearth of the author: a condition that arises when AI-based creativity support tools for writing allow users to produce large amounts of text without making a commensurate number of creative decisions, resulting in output that is sparse in expressive intent. We argue that the dearth of the author helps to explain a number of recurring difficulties and anxieties around AI-based writing support tools, but that it also suggests an ambitious new goal for AI-based CSTs.
1706.06714
Van-Khanh Tran
Van-Khanh Tran and Le-Minh Nguyen
Neural-based Natural Language Generation in Dialogue using RNN Encoder-Decoder with Semantic Aggregation
To be appear at SIGDIAL 2017. arXiv admin note: text overlap with arXiv:1706.00134, arXiv:1706.00139
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of a Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained on both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.
[ { "created": "Wed, 21 Jun 2017 01:07:02 GMT", "version": "v1" }, { "created": "Sun, 25 Jun 2017 09:31:34 GMT", "version": "v2" }, { "created": "Tue, 11 Jul 2017 14:47:13 GMT", "version": "v3" } ]
2017-07-12
[ [ "Tran", "Van-Khanh", "" ], [ "Nguyen", "Le-Minh", "" ] ]
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of a Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained on both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.
1009.2118
Sahand Negahban
Sahand Negahban and Martin J. Wainwright
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
null
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an $M$-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying with $\ell_q$-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
[ { "created": "Fri, 10 Sep 2010 23:08:58 GMT", "version": "v1" }, { "created": "Sun, 15 May 2011 17:30:12 GMT", "version": "v2" } ]
2011-05-17
[ [ "Negahban", "Sahand", "" ], [ "Wainwright", "Martin J.", "" ] ]
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an $M$-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying with $\ell_q$-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
2109.11891
Aishwarya Venkataramanan
Aishwarya Venkataramanan, Martin Laviale, C\'ecile Figus, Philippe Usseglio-Polatera, C\'edric Pradalier
Tackling Inter-Class Similarity and Intra-Class Variance for Microscopic Image-based Classification
13th International Conference on Computer Vision Systems (2021)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic classification of aquatic microorganisms is based on the morphological features extracted from individual images. The current works on their classification do not consider the inter-class similarity and intra-class variance that cause misclassification. We are particularly interested in the case where variance within a class occurs due to discrete visual changes in microscopic images. In this paper, we propose to account for it by partitioning the classes with high variance based on the visual features. Our algorithm automatically decides the optimal number of sub-classes to be created and considers each of them as a separate class for training. This way, the network learns finer-grained visual features. Our experiments on two databases of freshwater benthic diatoms and marine plankton show that our method can outperform the state-of-the-art approaches for classification of these aquatic microorganisms.
[ { "created": "Fri, 24 Sep 2021 11:17:02 GMT", "version": "v1" } ]
2021-09-27
[ [ "Venkataramanan", "Aishwarya", "" ], [ "Laviale", "Martin", "" ], [ "Figus", "Cécile", "" ], [ "Usseglio-Polatera", "Philippe", "" ], [ "Pradalier", "Cédric", "" ] ]
Automatic classification of aquatic microorganisms is based on the morphological features extracted from individual images. Current work on their classification does not consider the inter-class similarity and intra-class variance that cause misclassification. We are particularly interested in the case where variance within a class occurs due to discrete visual changes in microscopic images. In this paper, we propose to account for it by partitioning the classes with high variance based on their visual features. Our algorithm automatically decides the optimal number of sub-classes to be created and considers each of them as a separate class for training. This way, the network learns finer-grained visual features. Our experiments on two databases of freshwater benthic diatoms and marine plankton show that our method can outperform state-of-the-art approaches for the classification of these aquatic microorganisms.
2406.18094
Hiroaki Yamagiwa
Yunzhen He, Hiroaki Yamagiwa, Hidetoshi Shimodaira
Shimo Lab at "Discharge Me!": Discharge Summarization by Prompt-Driven Concatenation of Electronic Health Record Sections
BioNLP @ ACL2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present our approach to the shared task "Discharge Me!" at the BioNLP Workshop 2024. The primary goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the electronic health record (EHR). Participants develop a pipeline to generate the "Brief Hospital Course" and "Discharge Instructions" sections from the EHR. Our approach involves a first step of extracting the relevant sections from the EHR. We then add explanatory prompts to these sections and concatenate them with separate tokens to create the input text. To train a text generation model, we perform LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 score of $0.394$, which is comparable to the top solutions.
[ { "created": "Wed, 26 Jun 2024 06:10:20 GMT", "version": "v1" } ]
2024-06-27
[ [ "He", "Yunzhen", "" ], [ "Yamagiwa", "Hiroaki", "" ], [ "Shimodaira", "Hidetoshi", "" ] ]
In this paper, we present our approach to the shared task "Discharge Me!" at the BioNLP Workshop 2024. The primary goal of this task is to reduce the time and effort clinicians spend on writing detailed notes in the electronic health record (EHR). Participants develop a pipeline to generate the "Brief Hospital Course" and "Discharge Instructions" sections from the EHR. Our approach involves a first step of extracting the relevant sections from the EHR. We then add explanatory prompts to these sections and concatenate them with separate tokens to create the input text. To train a text generation model, we perform LoRA fine-tuning on the ClinicalT5-large model. On the final test data, our approach achieved a ROUGE-1 score of $0.394$, which is comparable to the top solutions.
2303.07814
Adam Goldbraikh
Adam Goldbraikh, Omer Shubi, Or Rubin, Carla M Pugh, Shlomi Laufer
MS-TCRNet: Multi-Stage Temporal Convolutional Recurrent Networks for Action Segmentation Using Sensor-Augmented Kinematics
41 pages, 7 figures. Submitted to Pattern Recognition
null
null
null
cs.CV cs.LG cs.RO eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Action segmentation is a challenging task in high-level process analysis, typically performed on video or kinematic data obtained from various sensors. This work presents two contributions related to action segmentation on kinematic data. Firstly, we introduce two versions of Multi-Stage Temporal Convolutional Recurrent Networks (MS-TCRNet), specifically designed for kinematic data. The architectures consist of a prediction generator with intra-stage regularization and Bidirectional LSTM or GRU-based refinement stages. Secondly, we propose two new data augmentation techniques, World Frame Rotation and Hand Inversion, which utilize the strong geometric structure of kinematic data to improve algorithm performance and robustness. We evaluate our models on three datasets of surgical suturing tasks: the Variable Tissue Simulation (VTS) Dataset and the newly introduced Bowel Repair Simulation (BRS) Dataset, both of which are open surgery simulation datasets collected by us, as well as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a well-known benchmark in robotic surgery. Our methods achieved state-of-the-art performance.
[ { "created": "Tue, 14 Mar 2023 11:44:58 GMT", "version": "v1" }, { "created": "Fri, 12 Jul 2024 15:48:09 GMT", "version": "v2" } ]
2024-07-15
[ [ "Goldbraikh", "Adam", "" ], [ "Shubi", "Omer", "" ], [ "Rubin", "Or", "" ], [ "Pugh", "Carla M", "" ], [ "Laufer", "Shlomi", "" ] ]
Action segmentation is a challenging task in high-level process analysis, typically performed on video or kinematic data obtained from various sensors. This work presents two contributions related to action segmentation on kinematic data. Firstly, we introduce two versions of Multi-Stage Temporal Convolutional Recurrent Networks (MS-TCRNet), specifically designed for kinematic data. The architectures consist of a prediction generator with intra-stage regularization and Bidirectional LSTM or GRU-based refinement stages. Secondly, we propose two new data augmentation techniques, World Frame Rotation and Hand Inversion, which utilize the strong geometric structure of kinematic data to improve algorithm performance and robustness. We evaluate our models on three datasets of surgical suturing tasks: the Variable Tissue Simulation (VTS) Dataset and the newly introduced Bowel Repair Simulation (BRS) Dataset, both of which are open surgery simulation datasets collected by us, as well as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a well-known benchmark in robotic surgery. Our methods achieved state-of-the-art performance.
1903.04463
Farzin Salek Shishavan
Farzin Salek, Min-Hsiu Hsieh, Javier R. Fonollosa
Publicness, Privacy and Confidentiality in the Single-Serving Quantum Broadcast Channel
23 pages, 1 figure, journal
null
null
null
cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 2-receiver broadcast channel is studied: a network with three parties where the transmitter and one of the receivers are the primarily involved parties and the other receiver considered as third party. The messages that are determined to be communicated are classified into public, private and confidential based on the information they convey. The public message contains information intended for both parties and is required to be decoded correctly by both of them, the private message is intended for the primary party only, however, there is no secrecy requirement imposed upon it meaning that it can possibly be exposed to the third party and finally the confidential message containing information intended exclusively for the primary party such that this information must be kept completely secret from the other receiver. A trade-off arises between the rates of the three messages, when one of the rates is high, the other rates may need to be reduced to guarantee the reliable transmission of all three messages. The encoder performs the necessary equivocation by virtue of dummy random numbers whose rate is assumed to be limited and should be considered in the trade-off as well. We study this trade-off in the one-shot regime of a quantum broadcast channel by providing achievability and (weak) converse regions. In the achievability, we prove and use a conditional version of the convex-split lemma as well as position-based decoding. By studying the asymptotic behaviour of our bounds, we will recover several well-known asymptotic results in the literature.
[ { "created": "Mon, 11 Mar 2019 17:38:03 GMT", "version": "v1" } ]
2019-03-12
[ [ "Salek", "Farzin", "" ], [ "Hsieh", "Min-Hsiu", "" ], [ "Fonollosa", "Javier R.", "" ] ]
The 2-receiver broadcast channel is studied: a network with three parties in which the transmitter and one of the receivers are the primarily involved parties, while the other receiver is considered a third party. The messages to be communicated are classified as public, private, or confidential based on the information they convey. The public message contains information intended for both parties and must be decoded correctly by both of them. The private message is intended for the primary party only; however, no secrecy requirement is imposed upon it, meaning that it may be exposed to the third party. Finally, the confidential message contains information intended exclusively for the primary party, and this information must be kept completely secret from the other receiver. A trade-off arises between the rates of the three messages: when one of the rates is high, the other rates may need to be reduced to guarantee the reliable transmission of all three messages. The encoder performs the necessary equivocation by means of dummy random numbers whose rate is assumed to be limited and should be considered in the trade-off as well. We study this trade-off in the one-shot regime of a quantum broadcast channel by providing achievability and (weak) converse regions. In the achievability, we prove and use a conditional version of the convex-split lemma as well as position-based decoding. By studying the asymptotic behaviour of our bounds, we recover several well-known asymptotic results in the literature.
2206.11541
Mohammed Salah
Mohammed Salah, Mohammed Chehadah, Muhammed Humais, Mohammed Wahbah, Abdulla Ayyad, Rana Azzam, Lakmal Seneviratne, and Yahya Zweiri
A Neuromorphic Vision-Based Measurement for Robust Relative Localization in Future Space Exploration Missions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Space exploration has witnessed revolutionary changes upon landing of the Perseverance Rover on the Martian surface and demonstrating the first flight beyond Earth by the Mars helicopter, Ingenuity. During their mission on Mars, Perseverance Rover and Ingenuity collaboratively explore the Martian surface, where Ingenuity scouts terrain information for rover's safe traversability. Hence, determining the relative poses between both the platforms is of paramount importance for the success of this mission. Driven by this necessity, this work proposes a robust relative localization system based on a fusion of neuromorphic vision-based measurements (NVBMs) and inertial measurements. The emergence of neuromorphic vision triggered a paradigm shift in the computer vision community, due to its unique working principle delineated with asynchronous events triggered by variations of light intensities occurring in the scene. This implies that observations cannot be acquired in static scenes due to illumination invariance. To circumvent this limitation, high frequency active landmarks are inserted in the scene to guarantee consistent event firing. These landmarks are adopted as salient features to facilitate relative localization. A novel event-based landmark identification algorithm using Gaussian Mixture Models (GMM) is developed for matching the landmarks correspondences formulating our NVBMs. The NVBMs are fused with inertial measurements in proposed state estimators, landmark tracking Kalman filter (LTKF) and translation decoupled Kalman filter (TDKF) for landmark tracking and relative localization, respectively. The proposed system was tested in a variety of experiments and has outperformed state-of-the-art approaches in accuracy and range.
[ { "created": "Thu, 23 Jun 2022 08:39:05 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2022 08:25:59 GMT", "version": "v2" } ]
2022-10-13
[ [ "Salah", "Mohammed", "" ], [ "Chehadah", "Mohammed", "" ], [ "Humais", "Muhammed", "" ], [ "Wahbah", "Mohammed", "" ], [ "Ayyad", "Abdulla", "" ], [ "Azzam", "Rana", "" ], [ "Seneviratne", "Lakmal", "" ], [ "Zweiri", "Yahya", "" ] ]
Space exploration has witnessed revolutionary changes with the landing of the Perseverance Rover on the Martian surface and the first flight beyond Earth by the Mars helicopter, Ingenuity. During their mission on Mars, the Perseverance Rover and Ingenuity collaboratively explore the Martian surface, with Ingenuity scouting terrain information for the rover's safe traversability. Hence, determining the relative poses between the two platforms is of paramount importance for the success of this mission. Driven by this necessity, this work proposes a robust relative localization system based on a fusion of neuromorphic vision-based measurements (NVBMs) and inertial measurements. The emergence of neuromorphic vision has triggered a paradigm shift in the computer vision community, due to its unique working principle, in which asynchronous events are triggered by variations of light intensity in the scene. This implies that observations cannot be acquired in static scenes, owing to illumination invariance. To circumvent this limitation, high-frequency active landmarks are inserted in the scene to guarantee consistent event firing. These landmarks are adopted as salient features to facilitate relative localization. A novel event-based landmark identification algorithm using Gaussian Mixture Models (GMM) is developed to match landmark correspondences, forming our NVBMs. The NVBMs are fused with inertial measurements in the proposed state estimators, a landmark tracking Kalman filter (LTKF) and a translation decoupled Kalman filter (TDKF), for landmark tracking and relative localization, respectively. The proposed system was tested in a variety of experiments and outperformed state-of-the-art approaches in accuracy and range.
2105.09540
Xiaolin Chen
Xiaolin Chen, Shuai Zhou, Bei guan, Kai Yang, Hao Fan, Hu Wang, Yongji Wang
Fed-EINI: An Efficient and Interpretable Inference Framework for Decision Tree Ensembles in Federated Learning
10 pages, 8 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.LG cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing concerns about data privacy and security drive an emerging field of studying privacy-preserving machine learning from isolated data sources, i.e., federated learning. A class of federated learning, vertical federated learning, where different parties hold different features for common users, has a great potential of driving a great variety of business cooperation among enterprises in many fields. In machine learning, decision tree ensembles such as gradient boosting decision trees (GBDT) and random forest are widely applied powerful models with high interpretability and modeling efficiency. However, stateof-art vertical federated learning frameworks adapt anonymous features to avoid possible data breaches, makes the interpretability of the model compromised. To address this issue in the inference process, in this paper, we firstly make a problem analysis about the necessity of disclosure meanings of feature to Guest Party in vertical federated learning. Then we find the prediction result of a tree could be expressed as the intersection of results of sub-models of the tree held by all parties. With this key observation, we protect data privacy and allow the disclosure of feature meaning by concealing decision paths and adapt a communication-efficient secure computation method for inference outputs. The advantages of Fed-EINI will be demonstrated through both theoretical analysis and extensive numerical results. We improve the interpretability of the model by disclosing the meaning of features while ensuring efficiency and accuracy.
[ { "created": "Thu, 20 May 2021 06:40:05 GMT", "version": "v1" }, { "created": "Thu, 2 Dec 2021 03:34:46 GMT", "version": "v10" }, { "created": "Wed, 8 Dec 2021 02:06:36 GMT", "version": "v11" }, { "created": "Mon, 12 Jul 2021 08:09:39 GMT", "version": "v2" }, { "created": "Fri, 16 Jul 2021 08:07:13 GMT", "version": "v3" }, { "created": "Mon, 19 Jul 2021 13:17:56 GMT", "version": "v4" }, { "created": "Tue, 20 Jul 2021 14:25:09 GMT", "version": "v5" }, { "created": "Mon, 26 Jul 2021 13:10:19 GMT", "version": "v6" }, { "created": "Mon, 22 Nov 2021 09:02:40 GMT", "version": "v7" }, { "created": "Wed, 24 Nov 2021 02:22:48 GMT", "version": "v8" }, { "created": "Wed, 1 Dec 2021 02:26:54 GMT", "version": "v9" } ]
2021-12-10
[ [ "Chen", "Xiaolin", "" ], [ "Zhou", "Shuai", "" ], [ "guan", "Bei", "" ], [ "Yang", "Kai", "" ], [ "Fan", "Hao", "" ], [ "Wang", "Hu", "" ], [ "Wang", "Yongji", "" ] ]
Increasing concerns about data privacy and security have driven the emerging field of privacy-preserving machine learning from isolated data sources, i.e., federated learning. One class of federated learning, vertical federated learning, in which different parties hold different features for common users, has great potential for enabling a wide variety of business cooperation among enterprises in many fields. In machine learning, decision tree ensembles such as gradient boosting decision trees (GBDT) and random forests are widely applied, powerful models with high interpretability and modeling efficiency. However, state-of-the-art vertical federated learning frameworks adopt anonymous features to avoid possible data breaches, which compromises the interpretability of the model. To address this issue in the inference process, in this paper, we first analyze the necessity of disclosing feature meanings to the Guest Party in vertical federated learning. We then find that the prediction result of a tree can be expressed as the intersection of the results of the sub-models of the tree held by all parties. With this key observation, we protect data privacy while allowing the disclosure of feature meanings by concealing decision paths, and we adopt a communication-efficient secure computation method for inference outputs. The advantages of Fed-EINI are demonstrated through both theoretical analysis and extensive numerical results. We improve the interpretability of the model by disclosing the meanings of features while ensuring efficiency and accuracy.
2303.17661
Muntabir Hasan Choudhury
Muntabir Hasan Choudhury, Lamia Salsabil, Himarsha R. Jayanetti, Jian Wu, William A. Ingram, Edward A. Fox
MetaEnhance: Metadata Quality Improvement for Electronic Theses and Dissertations of University Libraries
7 pages, 3 tables, and 1 figure. Accepted by 2023 ACM/IEEE Joint Conference on Digital Libraries (JCDL '23) as a short paper
null
null
null
cs.DL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Metadata quality is crucial for digital objects to be discovered through digital library interfaces. However, due to various reasons, the metadata of digital objects often exhibits incomplete, inconsistent, and incorrect values. We investigate methods to automatically detect, correct, and canonicalize scholarly metadata, using seven key fields of electronic theses and dissertations (ETDs) as a case study. We propose MetaEnhance, a framework that utilizes state-of-the-art artificial intelligence methods to improve the quality of these fields. To evaluate MetaEnhance, we compiled a metadata quality evaluation benchmark containing 500 ETDs, by combining subsets sampled using multiple criteria. We tested MetaEnhance on this benchmark and found that the proposed methods achieved nearly perfect F1-scores in detecting errors and F1-scores in correcting errors ranging from 0.85 to 1.00 for five of seven fields.
[ { "created": "Thu, 30 Mar 2023 18:56:42 GMT", "version": "v1" } ]
2023-04-03
[ [ "Choudhury", "Muntabir Hasan", "" ], [ "Salsabil", "Lamia", "" ], [ "Jayanetti", "Himarsha R.", "" ], [ "Wu", "Jian", "" ], [ "Ingram", "William A.", "" ], [ "Fox", "Edward A.", "" ] ]
Metadata quality is crucial for digital objects to be discovered through digital library interfaces. However, due to various reasons, the metadata of digital objects often exhibits incomplete, inconsistent, and incorrect values. We investigate methods to automatically detect, correct, and canonicalize scholarly metadata, using seven key fields of electronic theses and dissertations (ETDs) as a case study. We propose MetaEnhance, a framework that utilizes state-of-the-art artificial intelligence methods to improve the quality of these fields. To evaluate MetaEnhance, we compiled a metadata quality evaluation benchmark containing 500 ETDs, by combining subsets sampled using multiple criteria. We tested MetaEnhance on this benchmark and found that the proposed methods achieved nearly perfect F1-scores in detecting errors and F1-scores in correcting errors ranging from 0.85 to 1.00 for five of seven fields.
2405.09114
Qihe Pan
Yiming Wu, Qihe Pan, Zhen Zhao, Zicheng Wang, Sifan Long, Ronghua Liang
SOEDiff: Efficient Distillation for Small Object Editing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we delve into a new task known as small object editing (SOE), which focuses on text-based image inpainting within a constrained, small-sized area. Despite the remarkable success have been achieved by current image inpainting approaches, their application to the SOE task generally results in failure cases such as Object Missing, Text-Image Mismatch, and Distortion. These failures stem from the limited use of small-sized objects in training datasets and the downsampling operations employed by U-Net models, which hinders accurate generation. To overcome these challenges, we introduce a novel training-based approach, SOEDiff, aimed at enhancing the capability of baseline models like StableDiffusion in editing small-sized objects while minimizing training costs. Specifically, our method involves two key components: SO-LoRA, which efficiently fine-tunes low-rank matrices, and Cross-Scale Score Distillation loss, which leverages high-resolution predictions from the pre-trained teacher diffusion model. Our method presents significant improvements on the test dataset collected from MSCOCO and OpenImage, validating the effectiveness of our proposed method in small object editing. In particular, when comparing SOEDiff with SD-I model on the OpenImage-f dataset, we observe a 0.99 improvement in CLIP-Score and a reduction of 2.87 in FID.
[ { "created": "Wed, 15 May 2024 06:14:31 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2024 21:30:41 GMT", "version": "v2" } ]
2024-07-29
[ [ "Wu", "Yiming", "" ], [ "Pan", "Qihe", "" ], [ "Zhao", "Zhen", "" ], [ "Wang", "Zicheng", "" ], [ "Long", "Sifan", "" ], [ "Liang", "Ronghua", "" ] ]
In this paper, we delve into a new task known as small object editing (SOE), which focuses on text-based image inpainting within a constrained, small-sized area. Despite the remarkable success achieved by current image inpainting approaches, their application to the SOE task generally results in failure cases such as Object Missing, Text-Image Mismatch, and Distortion. These failures stem from the limited use of small-sized objects in training datasets and the downsampling operations employed by U-Net models, which hinder accurate generation. To overcome these challenges, we introduce a novel training-based approach, SOEDiff, aimed at enhancing the capability of baseline models like StableDiffusion to edit small-sized objects while minimizing training costs. Specifically, our method involves two key components: SO-LoRA, which efficiently fine-tunes low-rank matrices, and a Cross-Scale Score Distillation loss, which leverages high-resolution predictions from the pre-trained teacher diffusion model. Our method achieves significant improvements on test datasets collected from MSCOCO and OpenImage, validating the effectiveness of the proposed method for small object editing. In particular, when comparing SOEDiff with the SD-I model on the OpenImage-f dataset, we observe a 0.99 improvement in CLIP-Score and a reduction of 2.87 in FID.
2309.06844
Dimitar Dimitrov
Georgi Pachov, Dimitar Dimitrov, Ivan Koychev, Preslav Nakov
Gpachov at CheckThat! 2023: A Diverse Multi-Approach Ensemble for Subjectivity Detection in News Articles
null
null
null
null
cs.CL cs.AI cs.MM
http://creativecommons.org/licenses/by-sa/4.0/
The wide-spread use of social networks has given rise to subjective, misleading, and even false information on the Internet. Thus, subjectivity detection can play an important role in ensuring the objectiveness and the quality of a piece of information. This paper presents the solution built by the Gpachov team for the CLEF-2023 CheckThat! lab Task~2 on subjectivity detection. Three different research directions are explored. The first one is based on fine-tuning a sentence embeddings encoder model and dimensionality reduction. The second one explores a sample-efficient few-shot learning model. The third one evaluates fine-tuning a multilingual transformer on an altered dataset, using data from multiple languages. Finally, the three approaches are combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on the test set and achieving 2nd place on the English subtask.
[ { "created": "Wed, 13 Sep 2023 09:49:20 GMT", "version": "v1" } ]
2023-09-14
[ [ "Pachov", "Georgi", "" ], [ "Dimitrov", "Dimitar", "" ], [ "Koychev", "Ivan", "" ], [ "Nakov", "Preslav", "" ] ]
The widespread use of social networks has given rise to subjective, misleading, and even false information on the Internet. Thus, subjectivity detection can play an important role in ensuring the objectiveness and quality of a piece of information. This paper presents the solution built by the Gpachov team for the CLEF-2023 CheckThat! lab Task~2 on subjectivity detection. Three different research directions are explored. The first is based on fine-tuning a sentence-embedding encoder model and dimensionality reduction. The second explores a sample-efficient few-shot learning model. The third evaluates fine-tuning a multilingual transformer on an altered dataset, using data from multiple languages. Finally, the three approaches are combined in a simple majority-voting ensemble, resulting in 0.77 macro F1 on the test set and achieving 2nd place on the English subtask.
2306.02287
Mamtaj Akter
Mamtaj Akter, Leena Alghamdi, Jess Kropczynski, Heather Lipford, Pamela Wisniewski
It Takes a Village: A Case for Including Extended Family Members in the Joint Oversight of Family-based Privacy and Security for Mobile Smartphones
null
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
10.1145/3544549.3585904
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
We conducted a user study with 19 parent-teen dyads to understand the perceived benefits and drawbacks of using a mobile app that allows them to co-manage mobile privacy, safety, and security within their families. While the primary goal of the study was to understand the use case as it pertained to parents and teens, an emerging finding from our study was that participants found value in extending app use to other family members (siblings, cousins, and grandparents). Participants felt that it would help bring the necessary expertise into their immediate family network and help protect the older adults and children of the family from privacy and security risks. However, participants expressed that co-monitoring by extended family members might cause tensions in their families, creating interpersonal conflicts. To alleviate these concerns, participants suggested more control over the privacy features to facilitate sharing their installed apps with only trusted family members.
[ { "created": "Sun, 4 Jun 2023 07:33:37 GMT", "version": "v1" }, { "created": "Tue, 16 Apr 2024 03:31:03 GMT", "version": "v2" } ]
2024-04-17
[ [ "Akter", "Mamtaj", "" ], [ "Alghamdi", "Leena", "" ], [ "Kropczynski", "Jess", "" ], [ "Lipford", "Heather", "" ], [ "Wisniewski", "Pamela", "" ] ]
We conducted a user study with 19 parent-teen dyads to understand the perceived benefits and drawbacks of using a mobile app that allows them to co-manage mobile privacy, safety, and security within their families. While the primary goal of the study was to understand the use case as it pertained to parents and teens, an emerging finding from our study was that participants found value in extending app use to other family members (siblings, cousins, and grandparents). Participants felt that it would help bring the necessary expertise into their immediate family network and help protect the older adults and children of the family from privacy and security risks. However, participants expressed that co-monitoring by extended family members might cause tensions in their families, creating interpersonal conflicts. To alleviate these concerns, participants suggested more control over the privacy features to facilitate sharing their installed apps with only trusted family members.
1909.10686
Weiwei Wan
Daniel Sanchez, Weiwei Wan, and Kensuke Harada
Tethered Tool Manipulation Planning with Cable Maneuvering
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a planner for manipulating tethered tools using dual-armed robots. The planner generates robot motion sequences to maneuver a tool and its cable while avoiding robot-cable entanglements. Firstly, the planner generates an Object Manipulation Motion Sequence (OMMS) to handle the tool and place it in desired poses. Secondly, the planner examines the tool movement associated with the OMMS and computes candidate positions for a cable slider, to maneuver the tool cable and avoid collisions. Finally, the planner determines the optimal slider positions to avoid entanglements and generates a Cable Manipulation Motion Sequence (CMMS) to place the slider in these positions. The robot executes both the OMMS and CMMS to handle the tool and its cable to avoid entanglements and excess cable bending. Simulations and real-world experiments help validate the proposed method.
[ { "created": "Tue, 24 Sep 2019 02:55:43 GMT", "version": "v1" } ]
2019-09-25
[ [ "Sanchez", "Daniel", "" ], [ "Wan", "Weiwei", "" ], [ "Harada", "Kensuke", "" ] ]
In this paper, we present a planner for manipulating tethered tools using dual-armed robots. The planner generates robot motion sequences to maneuver a tool and its cable while avoiding robot-cable entanglements. Firstly, the planner generates an Object Manipulation Motion Sequence (OMMS) to handle the tool and place it in desired poses. Secondly, the planner examines the tool movement associated with the OMMS and computes candidate positions for a cable slider, to maneuver the tool cable and avoid collisions. Finally, the planner determines the optimal slider positions to avoid entanglements and generates a Cable Manipulation Motion Sequence (CMMS) to place the slider in these positions. The robot executes both the OMMS and CMMS to handle the tool and its cable to avoid entanglements and excess cable bending. Simulations and real-world experiments help validate the proposed method.
2210.05419
Yante Li
Yante Li, Yang Liu, Kh\'Anh Nguyen, Henglin Shi, Eija Vuorenmaa, Sanna Jarvela, and Guoying Zhao
Exploring Interactions and Regulations in Collaborative Learning: An Interdisciplinary Multimodal Dataset
17 pages, 9 figures
null
null
null
cs.CV cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Collaborative learning is an educational approach that enhances learning through shared goals and working together. Interaction and regulation are two essential factors related to the success of collaborative learning. Since the information from various modalities can reflect the quality of collaboration, a new multimodal dataset with cognitive and emotional triggers is introduced in this paper to explore how regulations affect interactions during the collaborative process. Specifically, a learning task with intentional interventions is designed and assigned to high school students aged 15 years old (N=81) in average. Multimodal signals, including video, Kinect, audio, and physiological data, are collected and exploited to study regulations in collaborative learning in terms of individual-participant-single-modality, individual-participant-multiple-modality, and multiple-participant-multiple-modality. Analysis of annotated emotions, body gestures, and their interactions indicates that our multimodal dataset with designed treatments could effectively examine moments of regulation in collaborative learning. In addition, preliminary experiments based on baseline models suggest that the dataset provides a challenging in-the-wild scenario, which could further contribute to the fields of education and affective computing.
[ { "created": "Tue, 11 Oct 2022 12:56:36 GMT", "version": "v1" } ]
2022-10-12
[ [ "Li", "Yante", "" ], [ "Liu", "Yang", "" ], [ "Nguyen", "Khánh", "" ], [ "Shi", "Henglin", "" ], [ "Vuorenmaa", "Eija", "" ], [ "Jarvela", "Sanna", "" ], [ "Zhao", "Guoying", "" ] ]
Collaborative learning is an educational approach that enhances learning through shared goals and working together. Interaction and regulation are two essential factors related to the success of collaborative learning. Since the information from various modalities can reflect the quality of collaboration, a new multimodal dataset with cognitive and emotional triggers is introduced in this paper to explore how regulations affect interactions during the collaborative process. Specifically, a learning task with intentional interventions is designed and assigned to high school students (N=81) with an average age of 15 years. Multimodal signals, including video, Kinect, audio, and physiological data, are collected and exploited to study regulations in collaborative learning in terms of individual-participant-single-modality, individual-participant-multiple-modality, and multiple-participant-multiple-modality. Analysis of annotated emotions, body gestures, and their interactions indicates that our multimodal dataset with designed treatments could effectively examine moments of regulation in collaborative learning. In addition, preliminary experiments based on baseline models suggest that the dataset provides a challenging in-the-wild scenario, which could further contribute to the fields of education and affective computing.
1901.02802
Alireza Shamsoshoara
Alireza Shamsoshoara
Overview of Blakley's Secret Sharing Scheme
8 pages, 4 Figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this report, I explain the problem of secret sharing. Based on the definition of the problem, two classic methods are introduced: Blakley's Secret Sharing Scheme and Shamir's Secret Sharing. We explain the details of the first one, since it is the topic of this work. Blakley's method can be applied to distribute a key among different parties and to reconstruct the key from the individual shares. However, this method is not efficient enough because of its overly large state space. We also simulated a scenario of distributing a key among several users and reconstructing the original key, using a Matlab graphical user interface.
[ { "created": "Wed, 9 Jan 2019 16:08:30 GMT", "version": "v1" } ]
2019-01-10
[ [ "Shamsoshoara", "Alireza", "" ] ]
In this report, I explain the problem of secret sharing. Based on the definition of the problem, two classic methods are introduced: Blakley's Secret Sharing Scheme and Shamir's Secret Sharing. We explain the details of the first one, since it is the topic of this work. Blakley's method can be applied to distribute a key among different parties and to reconstruct the key from the individual shares. However, this method is not efficient enough because of its overly large state space. We also simulated a scenario of distributing a key among several users and reconstructing the original key, using a Matlab graphical user interface.
1912.00497
Himangi Mittal
Himangi Mittal, Brian Okorn, David Held
Just Go with the Flow: Self-Supervised Scene Flow Estimation
Accepted at CVPR 2020 (Oral)
null
null
null
cs.CV cs.LG cs.RO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When interacting with highly dynamic environments, scene flow allows autonomous systems to reason about the non-rigid motion of multiple independent objects. This is of particular interest in the field of autonomous driving, in which many cars, people, bicycles, and other objects need to be accurately tracked. Current state-of-the-art methods require annotated scene flow data from autonomous driving scenes to train scene flow networks with supervised learning. As an alternative, we present a method of training scene flow that uses two self-supervised losses, based on nearest neighbors and cycle consistency. These self-supervised losses allow us to train our method on large unlabeled autonomous driving datasets; the resulting method matches current state-of-the-art supervised performance using no real world annotations and exceeds state-of-the-art performance when combining our self-supervised approach with supervised learning on a smaller labeled dataset.
[ { "created": "Sun, 1 Dec 2019 20:32:54 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2020 19:10:57 GMT", "version": "v2" } ]
2020-04-15
[ [ "Mittal", "Himangi", "" ], [ "Okorn", "Brian", "" ], [ "Held", "David", "" ] ]
When interacting with highly dynamic environments, scene flow allows autonomous systems to reason about the non-rigid motion of multiple independent objects. This is of particular interest in the field of autonomous driving, in which many cars, people, bicycles, and other objects need to be accurately tracked. Current state-of-the-art methods require annotated scene flow data from autonomous driving scenes to train scene flow networks with supervised learning. As an alternative, we present a method of training scene flow that uses two self-supervised losses, based on nearest neighbors and cycle consistency. These self-supervised losses allow us to train our method on large unlabeled autonomous driving datasets; the resulting method matches current state-of-the-art supervised performance using no real world annotations and exceeds state-of-the-art performance when combining our self-supervised approach with supervised learning on a smaller labeled dataset.
2006.14964
Anna Melnichenko
Hagen Echzell, Tobias Friedrich, Pascal Lenzner, Anna Melnichenko
Flow-Based Network Creation Games
To appear at the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020)
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network Creation Games (NCGs) model the creation of decentralized communication networks like the Internet. In such games, strategic agents corresponding to network nodes selfishly decide with whom to connect to optimize some objective function. Past research intensively analyzed models where the agents strive for a central position in the network. This models agents optimizing the network for low-latency applications like VoIP. However, with today's abundance of streaming services it is important to ensure that the created network can satisfy the increased bandwidth demand. To the best of our knowledge, this natural problem of the decentralized strategic creation of networks with sufficient bandwidth has not yet been studied. We introduce Flow-Based NCGs where the selfish agents focus on bandwidth instead of latency. In essence, budget-constrained agents create network links to maximize their minimum or average network flow value to all other network nodes. Equivalently, this can also be understood as agents who create links to increase their connectivity and thus also the robustness of the network. For this novel type of NCG we prove that pure Nash equilibria exist, we give a simple algorithm for computing optimal networks, we show that the Price of Stability is 1 and we prove an (almost) tight bound of 2 on the Price of Anarchy. Last but not least, we show that our models do not admit a potential function.
[ { "created": "Fri, 26 Jun 2020 12:59:24 GMT", "version": "v1" } ]
2020-06-29
[ [ "Echzell", "Hagen", "" ], [ "Friedrich", "Tobias", "" ], [ "Lenzner", "Pascal", "" ], [ "Melnichenko", "Anna", "" ] ]
Network Creation Games (NCGs) model the creation of decentralized communication networks like the Internet. In such games, strategic agents corresponding to network nodes selfishly decide with whom to connect to optimize some objective function. Past research intensively analyzed models where the agents strive for a central position in the network. This models agents optimizing the network for low-latency applications like VoIP. However, with today's abundance of streaming services it is important to ensure that the created network can satisfy the increased bandwidth demand. To the best of our knowledge, this natural problem of the decentralized strategic creation of networks with sufficient bandwidth has not yet been studied. We introduce Flow-Based NCGs where the selfish agents focus on bandwidth instead of latency. In essence, budget-constrained agents create network links to maximize their minimum or average network flow value to all other network nodes. Equivalently, this can also be understood as agents who create links to increase their connectivity and thus also the robustness of the network. For this novel type of NCG we prove that pure Nash equilibria exist, we give a simple algorithm for computing optimal networks, we show that the Price of Stability is 1 and we prove an (almost) tight bound of 2 on the Price of Anarchy. Last but not least, we show that our models do not admit a potential function.
2307.09777
Shuo Huang
Shuo Huang, Chengpeng Hu, Julian Togelius, Jialin Liu
Generating Redstone Style Cities in Minecraft
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Procedurally generating cities in Minecraft provides players more diverse scenarios and could help understand and improve the design of cities in other digital worlds and the real world. This paper presents a city generator that was submitted as an entry to the 2023 edition of the Minecraft Settlement Generation Competition. The generation procedure is composed of six main steps, namely vegetation clearing, terrain reshaping, building layout generation, route planning, streetlight placement, and wall construction. Three algorithms, including a heuristic-based algorithm, an evolving layout algorithm, and a random one, are applied to generate the building layout, thus determining where to place different redstone-style buildings, and are tested by generating cities on random maps in limited time. Experimental results show that the heuristic-based algorithm is capable of finding an acceptable building layout faster for flat maps, while the evolving layout algorithm performs better on rugged maps. A user study is conducted to compare our generator with outstanding entries of the competition's 2022 edition using the competition's evaluation criteria and shows that our generator performs well in the adaptation and functionality criteria.
[ { "created": "Wed, 19 Jul 2023 06:36:01 GMT", "version": "v1" } ]
2023-07-20
[ [ "Huang", "Shuo", "" ], [ "Hu", "Chengpeng", "" ], [ "Togelius", "Julian", "" ], [ "Liu", "Jialin", "" ] ]
Procedurally generating cities in Minecraft provides players more diverse scenarios and could help understand and improve the design of cities in other digital worlds and the real world. This paper presents a city generator that was submitted as an entry to the 2023 edition of the Minecraft Settlement Generation Competition. The generation procedure is composed of six main steps, namely vegetation clearing, terrain reshaping, building layout generation, route planning, streetlight placement, and wall construction. Three algorithms, including a heuristic-based algorithm, an evolving layout algorithm, and a random one, are applied to generate the building layout, thus determining where to place different redstone-style buildings, and are tested by generating cities on random maps in limited time. Experimental results show that the heuristic-based algorithm is capable of finding an acceptable building layout faster for flat maps, while the evolving layout algorithm performs better on rugged maps. A user study is conducted to compare our generator with outstanding entries of the competition's 2022 edition using the competition's evaluation criteria and shows that our generator performs well in the adaptation and functionality criteria.
1609.03500
Sheng Zou
Sheng Zou and Alina Zare
Hyperspectral Unmixing with Endmember Variability using Partial Membership Latent Dirichlet Allocation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application of Partial Membership Latent Dirichlet Allocation (PM-LDA) for hyperspectral endmember estimation and spectral unmixing is presented. PM-LDA provides a model for hyperspectral image analysis that accounts for spectral variability and incorporates spatial information through the use of superpixel-based 'documents.' In our application of PM-LDA, we employ the Normal Compositional Model in which endmembers are represented as Normal distributions to account for spectral variability and proportion vectors are modeled as random variables governed by a Dirichlet distribution. The use of the Dirichlet distribution enforces positivity and sum-to-one constraints on the proportion values. Algorithm results on real hyperspectral data indicate that PM-LDA produces endmember distributions that represent the ground truth classes and their associated variability.
[ { "created": "Mon, 12 Sep 2016 17:32:41 GMT", "version": "v1" } ]
2016-09-13
[ [ "Zou", "Sheng", "" ], [ "Zare", "Alina", "" ] ]
The application of Partial Membership Latent Dirichlet Allocation (PM-LDA) for hyperspectral endmember estimation and spectral unmixing is presented. PM-LDA provides a model for hyperspectral image analysis that accounts for spectral variability and incorporates spatial information through the use of superpixel-based 'documents.' In our application of PM-LDA, we employ the Normal Compositional Model in which endmembers are represented as Normal distributions to account for spectral variability and proportion vectors are modeled as random variables governed by a Dirichlet distribution. The use of the Dirichlet distribution enforces positivity and sum-to-one constraints on the proportion values. Algorithm results on real hyperspectral data indicate that PM-LDA produces endmember distributions that represent the ground truth classes and their associated variability.
2206.15007
Zhiying Zhu
Zhiying Zhu, Weixin Liang, James Zou
GSCLIP: A Framework for Explaining Distribution Shifts in Natural Language
Accepted by ICML 2022 DataPerf
null
null
null
cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation. Given two image data sets, dataset explanation aims to automatically point out their dataset-level distribution shifts with natural language. Current techniques for monitoring distribution shifts provide inadequate information to understand datasets with the goal of improving data quality. Therefore, we introduce GSCLIP, a training-free framework to solve the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method to identify explanations that are proper to summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language model generation. Systematic evaluation on natural data shift verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation at scale.
[ { "created": "Thu, 30 Jun 2022 04:06:26 GMT", "version": "v1" } ]
2022-07-01
[ [ "Zhu", "Zhiying", "" ], [ "Liang", "Weixin", "" ], [ "Zou", "James", "" ] ]
Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation. Given two image data sets, dataset explanation aims to automatically point out their dataset-level distribution shifts with natural language. Current techniques for monitoring distribution shifts provide inadequate information to understand datasets with the goal of improving data quality. Therefore, we introduce GSCLIP, a training-free framework to solve the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method to identify explanations that are proper to summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language model generation. Systematic evaluation on natural data shift verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation at scale.
1211.2073
Yang Lu
Yang Lu and Mengying Wang and Kenny Q. Zhu and Bo Yuan
LAGE: A Java Framework to reconstruct Gene Regulatory Networks from Large-Scale Continues Expression Data
2 pages
null
null
null
cs.LG cs.CE q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LAGE is a systematic framework developed in Java. The motivation of LAGE is to provide a scalable and parallel solution to reconstruct Gene Regulatory Networks (GRNs) from continuous gene expression data for a very large number of genes. The basic idea of our framework is motivated by the philosophy of divide-and-conquer. Specifically, LAGE recursively partitions genes into multiple overlapping communities of much smaller sizes, learns intra-community GRNs separately, and then merges them altogether. Besides, the complete information of the overlapping communities is obtained as a byproduct, which could be used to mine meaningful functional modules in biological networks.
[ { "created": "Fri, 9 Nov 2012 08:34:25 GMT", "version": "v1" } ]
2012-11-12
[ [ "Lu", "Yang", "" ], [ "Wang", "Mengying", "" ], [ "Zhu", "Kenny Q.", "" ], [ "Yuan", "Bo", "" ] ]
LAGE is a systematic framework developed in Java. The motivation of LAGE is to provide a scalable and parallel solution to reconstruct Gene Regulatory Networks (GRNs) from continuous gene expression data for a very large number of genes. The basic idea of our framework is motivated by the philosophy of divide-and-conquer. Specifically, LAGE recursively partitions genes into multiple overlapping communities of much smaller sizes, learns intra-community GRNs separately, and then merges them altogether. Besides, the complete information of the overlapping communities is obtained as a byproduct, which could be used to mine meaningful functional modules in biological networks.
2305.04719
Zhiling Yan
Shaozu Yuan, Aijun Dai, Zhiling Yan, Ruixue Liu, Meng Chen, Baoyang Chen, Zhijie Qiu, Xiaodong He
Learning to Generate Poetic Chinese Landscape Painting with Calligraphy
Accepted by IJCAI 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel system (denoted as Polaca) to generate poetic Chinese landscape painting with calligraphy. Unlike previous single image-to-image painting generation, Polaca takes the classic poetry as input and outputs the artistic landscape painting image with the corresponding calligraphy. It is equipped with three different modules to complete the whole piece of landscape painting artwork: the first one is a text-to-image module to generate landscape painting image, the second one is an image-to-image module to generate stylistic calligraphy image, and the third one is an image fusion module to fuse the two images into a whole piece of aesthetic artwork.
[ { "created": "Mon, 8 May 2023 14:10:10 GMT", "version": "v1" } ]
2023-05-09
[ [ "Yuan", "Shaozu", "" ], [ "Dai", "Aijun", "" ], [ "Yan", "Zhiling", "" ], [ "Liu", "Ruixue", "" ], [ "Chen", "Meng", "" ], [ "Chen", "Baoyang", "" ], [ "Qiu", "Zhijie", "" ], [ "He", "Xiaodong", "" ] ]
In this paper, we present a novel system (denoted as Polaca) to generate poetic Chinese landscape painting with calligraphy. Unlike previous single image-to-image painting generation, Polaca takes the classic poetry as input and outputs the artistic landscape painting image with the corresponding calligraphy. It is equipped with three different modules to complete the whole piece of landscape painting artwork: the first one is a text-to-image module to generate landscape painting image, the second one is an image-to-image module to generate stylistic calligraphy image, and the third one is an image fusion module to fuse the two images into a whole piece of aesthetic artwork.
2102.04875
Parwat Singh Anjana
Parwat Singh Anjana, Sweta Kumari, Sathya Peri, Sachin Rathor, Archit Somani
OptSmart: A Space Efficient Optimistic Concurrent Execution of Smart Contracts
43 pages, 13 figures, 1 table
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Popular blockchains such as Ethereum and several others execute complex transactions in blocks through user-defined scripts known as smart contracts. Serial execution of smart contract transactions/atomic-units (AUs) fails to harness the multiprocessing power offered by the prevalence of multi-core processors. By adding concurrency to the execution of AUs, we can achieve better efficiency and higher throughput. In this paper, we develop a concurrent miner that proposes a block by executing the AUs concurrently using optimistic Software Transactional Memory systems (STMs). It captures the independent AUs in a concurrent bin and dependent AUs in the block graph (BG) efficiently. Later, we propose a concurrent validator that re-executes the same AUs concurrently and deterministically using a concurrent bin followed by a BG given by the miner to verify the proposed block. We rigorously prove the correctness of concurrent execution of AUs and achieve significant performance gain over the state-of-the-art.
[ { "created": "Tue, 9 Feb 2021 15:18:42 GMT", "version": "v1" }, { "created": "Wed, 17 Feb 2021 06:20:02 GMT", "version": "v2" } ]
2021-02-18
[ [ "Anjana", "Parwat Singh", "" ], [ "Kumari", "Sweta", "" ], [ "Peri", "Sathya", "" ], [ "Rathor", "Sachin", "" ], [ "Somani", "Archit", "" ] ]
Popular blockchains such as Ethereum and several others execute complex transactions in blocks through user-defined scripts known as smart contracts. Serial execution of smart contract transactions/atomic-units (AUs) fails to harness the multiprocessing power offered by the prevalence of multi-core processors. By adding concurrency to the execution of AUs, we can achieve better efficiency and higher throughput. In this paper, we develop a concurrent miner that proposes a block by executing the AUs concurrently using optimistic Software Transactional Memory systems (STMs). It captures the independent AUs in a concurrent bin and dependent AUs in the block graph (BG) efficiently. Later, we propose a concurrent validator that re-executes the same AUs concurrently and deterministically using a concurrent bin followed by a BG given by the miner to verify the proposed block. We rigorously prove the correctness of concurrent execution of AUs and achieve significant performance gain over the state-of-the-art.
1301.0569
Phan H. Giang
Phan H. Giang, Prakash P. Shenoy
Statistical Decisions Using Likelihood Information Without Prior Probabilities
Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002)
null
null
UAI-P-2002-PG-170-178
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a decision-theoretic approach to statistical inference that satisfies the likelihood principle (LP) without using prior information. Unlike the Bayesian approach, which also satisfies LP, we do not assume knowledge of the prior distribution of the unknown parameter. With respect to information that can be obtained from an experiment, our solution is more efficient than Wald's minimax solution. However, with respect to information assumed to be known before the experiment, our solution demands less input than the Bayesian solution.
[ { "created": "Wed, 12 Dec 2012 15:56:18 GMT", "version": "v1" } ]
2013-01-07
[ [ "Giang", "Phan H.", "" ], [ "Shenoy", "Prakash P.", "" ] ]
This paper presents a decision-theoretic approach to statistical inference that satisfies the likelihood principle (LP) without using prior information. Unlike the Bayesian approach, which also satisfies LP, we do not assume knowledge of the prior distribution of the unknown parameter. With respect to information that can be obtained from an experiment, our solution is more efficient than Wald's minimax solution. However, with respect to information assumed to be known before the experiment, our solution demands less input than the Bayesian solution.
2406.17223
Qi Cao
Qi Cao, Qi Chen, Baoming Bai
On Zero-Error Capacity of Graphs with One Edge
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the zero-error capacity of channels with memory, which are represented by graphs. We provide a method to construct a code for any graph with one edge, thereby determining a lower bound on its zero-error capacity. Moreover, this code can achieve the zero-error capacity when the symbols in a vertex with degree one are the same. We further apply our method to the one-edge graphs representing the binary channels with two memories. There are 28 possible graphs, which can be organized into 11 categories based on their symmetries. The code constructed by our method is proved to achieve the zero-error capacity for all these graphs except for the two graphs in Case 11.
[ { "created": "Tue, 25 Jun 2024 02:17:34 GMT", "version": "v1" } ]
2024-06-26
[ [ "Cao", "Qi", "" ], [ "Chen", "Qi", "" ], [ "Bai", "Baoming", "" ] ]
In this paper, we study the zero-error capacity of channels with memory, which are represented by graphs. We provide a method to construct a code for any graph with one edge, thereby determining a lower bound on its zero-error capacity. Moreover, this code can achieve the zero-error capacity when the symbols in a vertex with degree one are the same. We further apply our method to the one-edge graphs representing the binary channels with two memories. There are 28 possible graphs, which can be organized into 11 categories based on their symmetries. The code constructed by our method is proved to achieve the zero-error capacity for all these graphs except for the two graphs in Case 11.
2408.02231
Agneet Chatterjee
Agneet Chatterjee, Yiran Luo, Tejas Gokhale, Yezhou Yang, Chitta Baral
REVISION: Rendering Tools Enable Spatial Fidelity in Vision-Language Models
Accepted to ECCV 2024. Project Page : https://agneetchatterjee.com/revision/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Text-to-Image (T2I) and multimodal large language models (MLLMs) have been adopted in solutions for several computer vision and multimodal learning tasks. However, it has been found that such vision-language models lack the ability to correctly reason over spatial relationships. To tackle this shortcoming, we develop the REVISION framework which improves spatial fidelity in vision-language models. REVISION is a 3D rendering based pipeline that generates spatially accurate synthetic images, given a textual prompt. REVISION is an extendable framework, which currently supports 100+ 3D assets, 11 spatial relationships, all with diverse camera perspectives and backgrounds. Leveraging images from REVISION as additional guidance in a training-free manner consistently improves the spatial consistency of T2I models across all spatial relationships, achieving competitive performance on the VISOR and T2I-CompBench benchmarks. We also design RevQA, a question-answering benchmark to evaluate the spatial reasoning abilities of MLLMs, and find that state-of-the-art models are not robust to complex spatial reasoning under adversarial settings. Our results and findings indicate that utilizing rendering-based frameworks is an effective approach for developing spatially-aware generative models.
[ { "created": "Mon, 5 Aug 2024 04:51:46 GMT", "version": "v1" } ]
2024-08-06
[ [ "Chatterjee", "Agneet", "" ], [ "Luo", "Yiran", "" ], [ "Gokhale", "Tejas", "" ], [ "Yang", "Yezhou", "" ], [ "Baral", "Chitta", "" ] ]
Text-to-Image (T2I) and multimodal large language models (MLLMs) have been adopted in solutions for several computer vision and multimodal learning tasks. However, it has been found that such vision-language models lack the ability to correctly reason over spatial relationships. To tackle this shortcoming, we develop the REVISION framework which improves spatial fidelity in vision-language models. REVISION is a 3D rendering based pipeline that generates spatially accurate synthetic images, given a textual prompt. REVISION is an extendable framework, which currently supports 100+ 3D assets, 11 spatial relationships, all with diverse camera perspectives and backgrounds. Leveraging images from REVISION as additional guidance in a training-free manner consistently improves the spatial consistency of T2I models across all spatial relationships, achieving competitive performance on the VISOR and T2I-CompBench benchmarks. We also design RevQA, a question-answering benchmark to evaluate the spatial reasoning abilities of MLLMs, and find that state-of-the-art models are not robust to complex spatial reasoning under adversarial settings. Our results and findings indicate that utilizing rendering-based frameworks is an effective approach for developing spatially-aware generative models.
2003.04992
Hui Wan
Hui Wan
Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-choice Machine Reading Comprehension (MRC) is an important and challenging Natural Language Understanding (NLU) task, in which a machine must choose the answer to a question from a set of choices, with the question placed in the context of text passages or dialog. In the last couple of years the NLU field has been revolutionized with the advent of models based on the Transformer architecture, which are pretrained on massive amounts of unsupervised data and then fine-tuned for various supervised learning NLU tasks. Transformer models have come to dominate a wide variety of leaderboards in the NLU field; in the area of MRC, the current state-of-the-art model on the DREAM dataset (see [Sun et al., 2019]) fine-tunes Albert, a large pretrained Transformer-based model, and additionally combines it with an extra layer of multi-head attention between context and question-answer [Zhu et al., 2020]. The purpose of this note is to document a new state-of-the-art result on the DREAM task, which is accomplished by, additionally, performing multi-task learning on two multiple-choice reading comprehension tasks (RACE and DREAM).
[ { "created": "Wed, 26 Feb 2020 16:32:25 GMT", "version": "v1" } ]
2020-03-12
[ [ "Wan", "Hui", "" ] ]
Multiple-choice Machine Reading Comprehension (MRC) is an important and challenging Natural Language Understanding (NLU) task, in which a machine must choose the answer to a question from a set of choices, with the question placed in the context of text passages or dialog. In the last couple of years the NLU field has been revolutionized with the advent of models based on the Transformer architecture, which are pretrained on massive amounts of unsupervised data and then fine-tuned for various supervised learning NLU tasks. Transformer models have come to dominate a wide variety of leaderboards in the NLU field; in the area of MRC, the current state-of-the-art model on the DREAM dataset (see [Sun et al., 2019]) fine-tunes Albert, a large pretrained Transformer-based model, and additionally combines it with an extra layer of multi-head attention between context and question-answer [Zhu et al., 2020]. The purpose of this note is to document a new state-of-the-art result on the DREAM task, which is accomplished by, additionally, performing multi-task learning on two multiple-choice reading comprehension tasks (RACE and DREAM).
1605.03821
Jian Wang
Liqing Gao, Yanzhang Wang, Xin Ye and Jian Wang
Crowd Counting Considering Network Flow Constraints in Videos
20 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of the number of people in the monitoring scene may increase the probability of security threat, which makes crowd counting more and more important. Most of the existing approaches estimate the number of pedestrians within one frame, which results in inconsistent predictions in terms of time. This paper, for the first time, introduces a quadratic programming model with the network flow constraints to improve the accuracy of crowd counting. Firstly, the foreground of each frame is segmented into groups, each of which contains several pedestrians. Then, a regression-based map is developed in accordance with the relationship between low-level features of each group and the number of people in it. Secondly, a directed graph is constructed to simulate constraints on people's flow, whose vertices represent groups of each frame and arcs represent people moving from one group to another. Then, the people flow can be viewed as an integer flow in the constructed digraph. Finally, by solving a quadratic programming problem with network flow constraints in the directed graph, we obtain consistency in people counting. The experimental results show that the proposed method can reduce the crowd counting errors and improve the accuracy. Moreover, this method can also be applied to any modern group-based regression counting approach to obtain improvements.
[ { "created": "Thu, 12 May 2016 14:12:21 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2017 14:22:55 GMT", "version": "v2" } ]
2017-12-18
[ [ "Gao", "Liqing", "" ], [ "Wang", "Yanzhang", "" ], [ "Ye", "Xin", "" ], [ "Wang", "Jian", "" ] ]
The growth of the number of people in the monitoring scene may increase the probability of security threat, which makes crowd counting more and more important. Most of the existing approaches estimate the number of pedestrians within one frame, which results in inconsistent predictions in terms of time. This paper, for the first time, introduces a quadratic programming model with the network flow constraints to improve the accuracy of crowd counting. Firstly, the foreground of each frame is segmented into groups, each of which contains several pedestrians. Then, a regression-based map is developed in accordance with the relationship between low-level features of each group and the number of people in it. Secondly, a directed graph is constructed to simulate constraints on people's flow, whose vertices represent groups of each frame and arcs represent people moving from one group to another. Then, the people flow can be viewed as an integer flow in the constructed digraph. Finally, by solving a quadratic programming problem with network flow constraints in the directed graph, we obtain consistency in people counting. The experimental results show that the proposed method can reduce the crowd counting errors and improve the accuracy. Moreover, this method can also be applied to any modern group-based regression counting approach to obtain improvements.
1810.09798
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Josef Bigun, Cristofer Englund
Expression Recognition Using the Periocular Region: A Feasibility Study
Accepted for publication at Intl Conf on Signal Image Technology & Internet Based Systems, SITIS 2018
Proc. Intl Conf on Signal Image Technology & Internet Based Systems, SITIS, Gran Canaria, Spain, 26-29 Nov 2018
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy to mention that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data comprising spatial-temporal information that is often not available.
[ { "created": "Tue, 23 Oct 2018 11:56:20 GMT", "version": "v1" } ]
2020-10-19
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Bigun", "Josef", "" ], [ "Englund", "Cristofer", "" ] ]
This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy to mention that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data comprising spatial-temporal information that is often not available.
2003.13883
Wennie Tabib
Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum and Nathan Michael (Carnegie Mellon University)
Autonomous Cave Surveying with an Aerial Robot
17 pages, 14 figures; accepted for publication in IEEE Transactions on Robotics (TRO 2021) and adds additional experimental results
null
10.1109/TRO.2021.3104459
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a method for cave surveying in total darkness using an autonomous aerial vehicle equipped with a depth camera for mapping, downward-facing camera for state estimation, and forward and downward lights. Traditional methods of cave surveying are labor-intensive and dangerous due to the risk of hypothermia when collecting data over extended periods of time in cold and damp environments, the risk of injury when operating in darkness in rocky or muddy environments, and the potential structural instability of the subterranean environment. Although these dangers can be mitigated by deploying robots to map dangerous passages and voids, real-time feedback is often needed to operate robots safely and efficiently. Few state-of-the-art, high-resolution perceptual modeling techniques attempt to reduce their high bandwidth requirements to work well with low bandwidth communication channels. To bridge this gap in the state of the art, this work compactly represents sensor observations as Gaussian mixture models and maintains a local occupancy grid map for a motion planner that greedily maximizes an information-theoretic objective function. The approach accommodates both limited field of view depth cameras and larger field of view LiDAR sensors and is extensively evaluated in long duration simulations on an embedded PC. An aerial system is leveraged to demonstrate the repeatability of the approach in a flight arena as well as the effects of communication dropouts. Finally, the system is deployed in Laurel Caverns, a commercially owned and operated cave in southwestern Pennsylvania, USA, and a wild cave in West Virginia, USA.
[ { "created": "Tue, 31 Mar 2020 00:22:04 GMT", "version": "v1" }, { "created": "Sat, 16 Oct 2021 03:32:51 GMT", "version": "v2" } ]
2021-10-19
[ [ "Tabib", "Wennie", "", "Carnegie Mellon University" ], [ "Goel", "Kshitij", "", "Carnegie Mellon University" ], [ "Yao", "John", "", "Carnegie Mellon University" ], [ "Boirum", "Curtis", "", "Carnegie Mellon University" ], [ "Michael", "Nathan", "", "Carnegie Mellon University" ] ]
This paper presents a method for cave surveying in total darkness using an autonomous aerial vehicle equipped with a depth camera for mapping, downward-facing camera for state estimation, and forward and downward lights. Traditional methods of cave surveying are labor-intensive and dangerous due to the risk of hypothermia when collecting data over extended periods of time in cold and damp environments, the risk of injury when operating in darkness in rocky or muddy environments, and the potential structural instability of the subterranean environment. Although these dangers can be mitigated by deploying robots to map dangerous passages and voids, real-time feedback is often needed to operate robots safely and efficiently. Few state-of-the-art, high-resolution perceptual modeling techniques attempt to reduce their high bandwidth requirements to work well with low bandwidth communication channels. To bridge this gap in the state of the art, this work compactly represents sensor observations as Gaussian mixture models and maintains a local occupancy grid map for a motion planner that greedily maximizes an information-theoretic objective function. The approach accommodates both limited field of view depth cameras and larger field of view LiDAR sensors and is extensively evaluated in long duration simulations on an embedded PC. An aerial system is leveraged to demonstrate the repeatability of the approach in a flight arena as well as the effects of communication dropouts. Finally, the system is deployed in Laurel Caverns, a commercially owned and operated cave in southwestern Pennsylvania, USA, and a wild cave in West Virginia, USA.
2008.06101
Xiangyu Guo
Xiangyu Guo, Janardhan Kulkarni, Shi Li, Jiayi Xian
Consistent $k$-Median: Simpler, Better and Robust
null
null
null
null
cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce and study the online consistent $k$-clustering with outliers problem, generalizing the non-outlier version of the problem studied in [Lattanzi-Vassilvitskii, ICML17]. We show that a simple local-search based online algorithm can give a bicriteria constant approximation for the problem with $O(k^2 \log^2 (nD))$ swaps of medians (recourse) in total, where $D$ is the diameter of the metric. When restricted to the problem without outliers, our algorithm is simpler, deterministic and gives better approximation ratio and recourse, compared to that of [Lattanzi-Vassilvitskii, ICML17].
[ { "created": "Thu, 13 Aug 2020 20:24:28 GMT", "version": "v1" } ]
2020-08-17
[ [ "Guo", "Xiangyu", "" ], [ "Kulkarni", "Janardhan", "" ], [ "Li", "Shi", "" ], [ "Xian", "Jiayi", "" ] ]
In this paper we introduce and study the online consistent $k$-clustering with outliers problem, generalizing the non-outlier version of the problem studied in [Lattanzi-Vassilvitskii, ICML17]. We show that a simple local-search based online algorithm can give a bicriteria constant approximation for the problem with $O(k^2 \log^2 (nD))$ swaps of medians (recourse) in total, where $D$ is the diameter of the metric. When restricted to the problem without outliers, our algorithm is simpler, deterministic and gives better approximation ratio and recourse, compared to that of [Lattanzi-Vassilvitskii, ICML17].
2110.07244
Quan Wang
Quan Wang and Songtai Dai and Benfeng Xu and Yajuan Lyu and Yong Zhu and Hua Wu and Haifeng Wang
Building Chinese Biomedical Language Models via Multi-Level Text Discrimination
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Pre-trained language models (PLMs), such as BERT and GPT, have revolutionized the field of NLP, not only in the general domain but also in the biomedical domain. Most prior efforts in building biomedical PLMs have resorted simply to domain adaptation and focused mainly on English. In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework. This new framework pre-trains eHealth as a discriminator through both token- and sequence-level discrimination. The former is to detect input tokens corrupted by a generator and recover their original identities from plausible candidates, while the latter is to further distinguish corruptions of a same original sequence from those of others. As such, eHealth can learn language semantics at both token and sequence levels. Extensive experiments on 11 Chinese biomedical language understanding tasks of various forms verify the effectiveness and superiority of our approach. We release the pre-trained model at \url{https://github.com/PaddlePaddle/Research/tree/master/KG/eHealth} and will also release the code later.
[ { "created": "Thu, 14 Oct 2021 10:43:28 GMT", "version": "v1" }, { "created": "Wed, 2 Mar 2022 10:04:24 GMT", "version": "v2" } ]
2022-03-03
[ [ "Wang", "Quan", "" ], [ "Dai", "Songtai", "" ], [ "Xu", "Benfeng", "" ], [ "Lyu", "Yajuan", "" ], [ "Zhu", "Yong", "" ], [ "Wu", "Hua", "" ], [ "Wang", "Haifeng", "" ] ]
Pre-trained language models (PLMs), such as BERT and GPT, have revolutionized the field of NLP, not only in the general domain but also in the biomedical domain. Most prior efforts in building biomedical PLMs have resorted simply to domain adaptation and focused mainly on English. In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework. This new framework pre-trains eHealth as a discriminator through both token- and sequence-level discrimination. The former is to detect input tokens corrupted by a generator and recover their original identities from plausible candidates, while the latter is to further distinguish corruptions of a same original sequence from those of others. As such, eHealth can learn language semantics at both token and sequence levels. Extensive experiments on 11 Chinese biomedical language understanding tasks of various forms verify the effectiveness and superiority of our approach. We release the pre-trained model at \url{https://github.com/PaddlePaddle/Research/tree/master/KG/eHealth} and will also release the code later.
1907.05391
Slobodan Mitrovi\'c
Jakub {\L}\k{a}cki, Slobodan Mitrovi\'c, Krzysztof Onak, Piotr Sankowski
Walking Randomly, Massively, and Efficiently
null
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a set of techniques that allow for efficiently generating many independent random walks in the Massive Parallel Computation (MPC) model with space per machine strongly sublinear in the number of vertices. In this space-per-machine regime, many natural approaches to graph problems struggle to overcome the $\Theta(\log n)$ MPC round complexity barrier. Our techniques enable breaking this barrier for PageRank---one of the most important applications of random walks---even in more challenging directed graphs, and for approximate bipartiteness and expansion testing. In the undirected case, we start our random walks from the stationary distribution, which implies that we approximately know the empirical distribution of their next steps. This allows for preparing continuations of random walks in advance and applying a doubling approach. As a result we can generate multiple random walks of length $l$ in $\Theta(\log l)$ rounds on MPC. Moreover, we show that under the popular 1-vs.-2-Cycles conjecture, this round complexity is asymptotically tight. For directed graphs, our approach stems from our treatment of the PageRank Markov chain. We first compute the PageRank for the undirected version of the input graph and then slowly transition towards the directed case, considering convex combinations of the transition matrices in the process. For PageRank, we achieve the following round complexities for damping factor equal to $1 - \epsilon$: * in $O(\log \log n + \log 1 / \epsilon)$ rounds for undirected graphs (with $\tilde O(m / \epsilon^2)$ total space), * in $\tilde O(\log^2 \log n + \log^2 1/\epsilon)$ rounds for directed graphs (with $\tilde O((m+n^{1+o(1)}) / poly\, \epsilon)$ total space).
[ { "created": "Thu, 11 Jul 2019 17:13:26 GMT", "version": "v1" }, { "created": "Sun, 21 Jul 2019 09:30:04 GMT", "version": "v2" }, { "created": "Mon, 28 Oct 2019 09:50:10 GMT", "version": "v3" }, { "created": "Wed, 6 Nov 2019 02:27:31 GMT", "version": "v4" } ]
2019-11-07
[ [ "Łącki", "Jakub", "" ], [ "Mitrović", "Slobodan", "" ], [ "Onak", "Krzysztof", "" ], [ "Sankowski", "Piotr", "" ] ]
We introduce a set of techniques that allow for efficiently generating many independent random walks in the Massive Parallel Computation (MPC) model with space per machine strongly sublinear in the number of vertices. In this space-per-machine regime, many natural approaches to graph problems struggle to overcome the $\Theta(\log n)$ MPC round complexity barrier. Our techniques enable breaking this barrier for PageRank---one of the most important applications of random walks---even in more challenging directed graphs, and for approximate bipartiteness and expansion testing. In the undirected case, we start our random walks from the stationary distribution, which implies that we approximately know the empirical distribution of their next steps. This allows for preparing continuations of random walks in advance and applying a doubling approach. As a result we can generate multiple random walks of length $l$ in $\Theta(\log l)$ rounds on MPC. Moreover, we show that under the popular 1-vs.-2-Cycles conjecture, this round complexity is asymptotically tight. For directed graphs, our approach stems from our treatment of the PageRank Markov chain. We first compute the PageRank for the undirected version of the input graph and then slowly transition towards the directed case, considering convex combinations of the transition matrices in the process. For PageRank, we achieve the following round complexities for damping factor equal to $1 - \epsilon$: * in $O(\log \log n + \log 1 / \epsilon)$ rounds for undirected graphs (with $\tilde O(m / \epsilon^2)$ total space), * in $\tilde O(\log^2 \log n + \log^2 1/\epsilon)$ rounds for directed graphs (with $\tilde O((m+n^{1+o(1)}) / poly\, \epsilon)$ total space).
1812.09280
Hichem Sahbi
Hichem Sahbi
Canonical Correlation Analysis for Misaligned Satellite Image Change Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Canonical correlation analysis (CCA) is a statistical learning method that seeks to build view-independent latent representations from multi-view data. This method has been successfully applied to several pattern analysis tasks such as image-to-text mapping and view-invariant object/action recognition. However, this success is highly dependent on the quality of data pairing (i.e., alignments) and mispairing adversely affects the generalization ability of the learned CCA representations. In this paper, we address the issue of alignment errors using a new variant of canonical correlation analysis referred to as alignment-agnostic (AA) CCA. Starting from erroneously paired data taken from different views, this CCA finds transformation matrices by optimizing a constrained maximization problem that mixes a data correlation term with context regularization; the particular design of these two terms mitigates the effect of alignment errors when learning the CCA transformations. Experiments conducted on multi-view tasks, including multi-temporal satellite image change detection, show that our AA CCA method is highly effective and resilient to mispairing errors.
[ { "created": "Fri, 21 Dec 2018 17:43:16 GMT", "version": "v1" } ]
2018-12-24
[ [ "Sahbi", "Hichem", "" ] ]
Canonical correlation analysis (CCA) is a statistical learning method that seeks to build view-independent latent representations from multi-view data. This method has been successfully applied to several pattern analysis tasks such as image-to-text mapping and view-invariant object/action recognition. However, this success is highly dependent on the quality of data pairing (i.e., alignments) and mispairing adversely affects the generalization ability of the learned CCA representations. In this paper, we address the issue of alignment errors using a new variant of canonical correlation analysis referred to as alignment-agnostic (AA) CCA. Starting from erroneously paired data taken from different views, this CCA finds transformation matrices by optimizing a constrained maximization problem that mixes a data correlation term with context regularization; the particular design of these two terms mitigates the effect of alignment errors when learning the CCA transformations. Experiments conducted on multi-view tasks, including multi-temporal satellite image change detection, show that our AA CCA method is highly effective and resilient to mispairing errors.
2402.04794
Chakib Fettal
Chakib Fettal, Lazhar Labiod, Mohamed Nadif
Scalable Multi-view Clustering via Explicit Kernel Features Maps
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A growing awareness of multi-view learning as an important component in data science and machine learning is a consequence of the increasing prevalence of multiple views in real-world applications, especially in the context of networks. In this paper we introduce a new scalability framework for multi-view subspace clustering. An efficient optimization strategy is proposed, leveraging kernel feature maps to reduce the computational burden while maintaining good clustering performance. The scalability of the algorithm means that it can be applied to large-scale datasets, including those with millions of data points, using a standard machine, in a few minutes. We conduct extensive experiments on real-world benchmark networks of various sizes in order to evaluate the performance of our algorithm against state-of-the-art multi-view subspace clustering methods and attributed-network multi-view approaches.
[ { "created": "Wed, 7 Feb 2024 12:35:31 GMT", "version": "v1" } ]
2024-02-08
[ [ "Fettal", "Chakib", "" ], [ "Labiod", "Lazhar", "" ], [ "Nadif", "Mohamed", "" ] ]
A growing awareness of multi-view learning as an important component in data science and machine learning is a consequence of the increasing prevalence of multiple views in real-world applications, especially in the context of networks. In this paper we introduce a new scalability framework for multi-view subspace clustering. An efficient optimization strategy is proposed, leveraging kernel feature maps to reduce the computational burden while maintaining good clustering performance. The scalability of the algorithm means that it can be applied to large-scale datasets, including those with millions of data points, using a standard machine, in a few minutes. We conduct extensive experiments on real-world benchmark networks of various sizes in order to evaluate the performance of our algorithm against state-of-the-art multi-view subspace clustering methods and attributed-network multi-view approaches.
2310.05804
Haoyu Zhang
Haoyu Zhang, Yu Wang, Guanghao Yin, Kejun Liu, Yuanyuan Liu, Tianshu Yu
Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis
Published in EMNLP 2023
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
10.18653/v1/2023.emnlp-main.49
null
cs.AI cs.CL cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (e.g., language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present Adaptive Language-guided Multimodal Transformer (ALMT), which incorporates an Adaptive Hyper-modality Learning (AHL) module to learn an irrelevance/conflict-suppressing representation from visual and audio features under the guidance of language features at different scales. With the obtained hyper-modality representation, the model can obtain a complementary and joint representation through multimodal fusion for effective MSA. In practice, ALMT achieves state-of-the-art performance on several popular datasets (e.g., MOSI, MOSEI and CH-SIMS) and an abundance of ablation demonstrates the validity and necessity of our irrelevance/conflict suppression mechanism.
[ { "created": "Mon, 9 Oct 2023 15:43:07 GMT", "version": "v1" }, { "created": "Thu, 14 Dec 2023 13:07:45 GMT", "version": "v2" } ]
2023-12-15
[ [ "Zhang", "Haoyu", "" ], [ "Wang", "Yu", "" ], [ "Yin", "Guanghao", "" ], [ "Liu", "Kejun", "" ], [ "Liu", "Yuanyuan", "" ], [ "Yu", "Tianshu", "" ] ]
Though Multimodal Sentiment Analysis (MSA) proves effective by utilizing rich information from multiple sources (e.g., language, video, and audio), the potential sentiment-irrelevant and conflicting information across modalities may hinder the performance from being further improved. To alleviate this, we present Adaptive Language-guided Multimodal Transformer (ALMT), which incorporates an Adaptive Hyper-modality Learning (AHL) module to learn an irrelevance/conflict-suppressing representation from visual and audio features under the guidance of language features at different scales. With the obtained hyper-modality representation, the model can obtain a complementary and joint representation through multimodal fusion for effective MSA. In practice, ALMT achieves state-of-the-art performance on several popular datasets (e.g., MOSI, MOSEI and CH-SIMS) and an abundance of ablation demonstrates the validity and necessity of our irrelevance/conflict suppression mechanism.
1903.09766
Md Jahidul Islam
Md Jahidul Islam, Youya Xia and Junaed Sattar
Fast Underwater Image Enhancement for Improved Visual Perception
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. To supervise the adversarial training, we formulate an objective function that evaluates the perceptual image quality based on its global content, color, local texture, and style information. We also present EUVP, a large-scale dataset of a paired and unpaired collection of underwater images (of `poor' and `good' quality) that are captured using seven different cameras over various visibility conditions during oceanic explorations and human-robot collaborative experiments. In addition, we perform several qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance underwater image quality from both paired and unpaired training. More importantly, the enhanced images provide improved performances of standard models for underwater object detection, human pose estimation, and saliency prediction. These results validate that it is suitable for real-time preprocessing in the autonomy pipeline by visually-guided underwater robots. The model and associated training pipelines are available at https://github.com/xahidbuffon/funie-gan.
[ { "created": "Sat, 23 Mar 2019 05:21:05 GMT", "version": "v1" }, { "created": "Wed, 18 Dec 2019 23:40:48 GMT", "version": "v2" }, { "created": "Sun, 9 Feb 2020 02:06:40 GMT", "version": "v3" } ]
2020-02-11
[ [ "Islam", "Md Jahidul", "" ], [ "Xia", "Youya", "" ], [ "Sattar", "Junaed", "" ] ]
In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. To supervise the adversarial training, we formulate an objective function that evaluates the perceptual image quality based on its global content, color, local texture, and style information. We also present EUVP, a large-scale dataset of a paired and unpaired collection of underwater images (of `poor' and `good' quality) that are captured using seven different cameras over various visibility conditions during oceanic explorations and human-robot collaborative experiments. In addition, we perform several qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance underwater image quality from both paired and unpaired training. More importantly, the enhanced images provide improved performances of standard models for underwater object detection, human pose estimation, and saliency prediction. These results validate that it is suitable for real-time preprocessing in the autonomy pipeline by visually-guided underwater robots. The model and associated training pipelines are available at https://github.com/xahidbuffon/funie-gan.
2308.08741
Jiazhao Zhang
Yijie Tang, Jiazhao Zhang, Zhinan Yu, He Wang, Kai Xu
MIPS-Fusion: Multi-Implicit-Submaps for Scalable and Robust Online Neural RGB-D Reconstruction
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
We introduce MIPS-Fusion, a robust and scalable online RGB-D reconstruction method based on a novel neural implicit representation -- multi-implicit-submap. Different from existing neural RGB-D reconstruction methods lacking either flexibility with a single neural map or scalability due to extra storage of feature grids, we propose a pure neural representation tackling both difficulties with a divide-and-conquer design. In our method, neural submaps are incrementally allocated alongside the scanning trajectory and efficiently learned with local neural bundle adjustments. The submaps can be refined individually in a back-end optimization and optimized jointly to realize submap-level loop closure. Meanwhile, we propose a hybrid tracking approach combining randomized and gradient-based pose optimizations. For the first time, randomized optimization is made possible in neural tracking with several key designs to the learning process, enabling efficient and robust tracking even under fast camera motions. The extensive evaluation demonstrates that our method attains higher reconstruction quality than the state of the arts for large-scale scenes and under fast camera motions.
[ { "created": "Thu, 17 Aug 2023 02:33:16 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 15:43:17 GMT", "version": "v2" } ]
2023-08-25
[ [ "Tang", "Yijie", "" ], [ "Zhang", "Jiazhao", "" ], [ "Yu", "Zhinan", "" ], [ "Wang", "He", "" ], [ "Xu", "Kai", "" ] ]
We introduce MIPS-Fusion, a robust and scalable online RGB-D reconstruction method based on a novel neural implicit representation -- multi-implicit-submap. Different from existing neural RGB-D reconstruction methods lacking either flexibility with a single neural map or scalability due to extra storage of feature grids, we propose a pure neural representation tackling both difficulties with a divide-and-conquer design. In our method, neural submaps are incrementally allocated alongside the scanning trajectory and efficiently learned with local neural bundle adjustments. The submaps can be refined individually in a back-end optimization and optimized jointly to realize submap-level loop closure. Meanwhile, we propose a hybrid tracking approach combining randomized and gradient-based pose optimizations. For the first time, randomized optimization is made possible in neural tracking with several key designs to the learning process, enabling efficient and robust tracking even under fast camera motions. The extensive evaluation demonstrates that our method attains higher reconstruction quality than the state of the arts for large-scale scenes and under fast camera motions.
2402.07645
Isabelle Lorge PhD
Isabelle Lorge, Dan W. Joyce, Niall Taylor, Alejo Nevado-Holgado, Andrea Cipriani, Andrey Kormilitzin
Detecting the Clinical Features of Difficult-to-Treat Depression using Synthetic Data from Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Difficult-to-treat depression (DTD) has been proposed as a broader and more clinically comprehensive perspective on a person's depressive disorder where despite treatment, they continue to experience significant burden. We sought to develop a Large Language Model (LLM)-based tool capable of interrogating routinely-collected, narrative (free-text) electronic health record (EHR) data to locate published prognostic factors that capture the clinical syndrome of DTD. In this work, we use LLM-generated synthetic data (GPT3.5) and a Non-Maximum Suppression (NMS) algorithm to train a BERT-based span extraction model. The resulting model is then able to extract and label spans related to a variety of relevant positive and negative factors in real clinical data (i.e. spans of text that increase or decrease the likelihood of a patient matching the DTD syndrome). We show it is possible to obtain good overall performance (0.70 F1 across polarity) on real clinical data on a set of as many as 20 different factors, and high performance (0.85 F1 with 0.95 precision) on a subset of important DTD factors such as history of abuse, family history of affective disorder, illness severity and suicidality by training the model exclusively on synthetic data. Our results show promise for future healthcare applications especially in applications where traditionally, highly confidential medical data and human-expert annotation would normally be required.
[ { "created": "Mon, 12 Feb 2024 13:34:33 GMT", "version": "v1" } ]
2024-02-13
[ [ "Lorge", "Isabelle", "" ], [ "Joyce", "Dan W.", "" ], [ "Taylor", "Niall", "" ], [ "Nevado-Holgado", "Alejo", "" ], [ "Cipriani", "Andrea", "" ], [ "Kormilitzin", "Andrey", "" ] ]
Difficult-to-treat depression (DTD) has been proposed as a broader and more clinically comprehensive perspective on a person's depressive disorder where despite treatment, they continue to experience significant burden. We sought to develop a Large Language Model (LLM)-based tool capable of interrogating routinely-collected, narrative (free-text) electronic health record (EHR) data to locate published prognostic factors that capture the clinical syndrome of DTD. In this work, we use LLM-generated synthetic data (GPT3.5) and a Non-Maximum Suppression (NMS) algorithm to train a BERT-based span extraction model. The resulting model is then able to extract and label spans related to a variety of relevant positive and negative factors in real clinical data (i.e. spans of text that increase or decrease the likelihood of a patient matching the DTD syndrome). We show it is possible to obtain good overall performance (0.70 F1 across polarity) on real clinical data on a set of as many as 20 different factors, and high performance (0.85 F1 with 0.95 precision) on a subset of important DTD factors such as history of abuse, family history of affective disorder, illness severity and suicidality by training the model exclusively on synthetic data. Our results show promise for future healthcare applications, especially in settings where highly confidential medical data and human-expert annotation would traditionally be required.
2408.06747
Jingyun Wang
Jingyun Wang and Guoliang Kang
ReCLIP++: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation
Extended version of our CVPR 24 paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works utilize CLIP to perform the challenging unsupervised semantic segmentation task where only images without annotations are available. However, we observe that when adopting CLIP to such a pixel-level understanding task, unexpected bias (including class-preference bias and space-preference bias) occurs. Previous works do not explicitly model the bias, which largely constrains the segmentation performance. In this paper, we propose to explicitly model and rectify the bias existing in CLIP to facilitate the unsupervised semantic segmentation task. Specifically, we design a learnable ''Reference'' prompt to encode class-preference bias and a projection of the positional embedding in vision transformer to encode space-preference bias respectively. To avoid interference, the two kinds of biases are first independently encoded into the Reference feature and the positional feature. Via a matrix multiplication between the two features, a bias logit map is generated to explicitly represent the two kinds of biases. Then we rectify the logits of CLIP via a simple element-wise subtraction. To make the rectified results smoother and more contextual, we design a mask decoder which takes the feature of CLIP and rectified logits as input and outputs a rectified segmentation mask with the help of Gumbel-Softmax operation. To make the bias modeling and rectification process meaningful and effective, a contrastive loss based on masked visual features and the text features of different classes is imposed. To further improve the segmentation, we distill the knowledge from the rectified CLIP to the advanced segmentation architecture via minimizing our designed mask-guided, feature-guided and text-guided loss terms. Extensive experiments on various benchmarks demonstrate that ReCLIP++ performs favorably against previous SOTAs. The implementation is available at: https://github.com/dogehhh/ReCLIP.
[ { "created": "Tue, 13 Aug 2024 09:10:48 GMT", "version": "v1" } ]
2024-08-14
[ [ "Wang", "Jingyun", "" ], [ "Kang", "Guoliang", "" ] ]
Recent works utilize CLIP to perform the challenging unsupervised semantic segmentation task where only images without annotations are available. However, we observe that when adopting CLIP to such a pixel-level understanding task, unexpected bias (including class-preference bias and space-preference bias) occurs. Previous works do not explicitly model the bias, which largely constrains the segmentation performance. In this paper, we propose to explicitly model and rectify the bias existing in CLIP to facilitate the unsupervised semantic segmentation task. Specifically, we design a learnable ''Reference'' prompt to encode class-preference bias and a projection of the positional embedding in vision transformer to encode space-preference bias respectively. To avoid interference, the two kinds of biases are first independently encoded into the Reference feature and the positional feature. Via a matrix multiplication between the two features, a bias logit map is generated to explicitly represent the two kinds of biases. Then we rectify the logits of CLIP via a simple element-wise subtraction. To make the rectified results smoother and more contextual, we design a mask decoder which takes the feature of CLIP and rectified logits as input and outputs a rectified segmentation mask with the help of Gumbel-Softmax operation. To make the bias modeling and rectification process meaningful and effective, a contrastive loss based on masked visual features and the text features of different classes is imposed. To further improve the segmentation, we distill the knowledge from the rectified CLIP to the advanced segmentation architecture via minimizing our designed mask-guided, feature-guided and text-guided loss terms. Extensive experiments on various benchmarks demonstrate that ReCLIP++ performs favorably against previous SOTAs. The implementation is available at: https://github.com/dogehhh/ReCLIP.
2106.15166
Taekho You
Taekho You, Jinseo Park, June Young Lee, Jinhyuk Yun, Woo-Sung Jung
Disturbance of questionable publishing to academia
16 pages of main text including 4 figures + 42 pages of supplementary information including 38 supplementary figures
Journal of Informetrics, 2022, 16(2), 101294
10.1016/j.joi.2022.101294
null
cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Questionable publications have been accused of "greedy" practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis with various participants from academia and compare the results with those of their unaccused counterparts using billions of citation records, including liaisons, i.e., journals and publishers, and prosumers, i.e., authors. Questionable publications attribute publisher-level self-citations to their journals while limiting journal-level self-citations; yet, conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting self-favouring citations among questionable journals (QJs) from publishers. Additionally, we demonstrate that the questionable publications were less disruptive and influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers. The findings provide a basis for actionable policy-making against questionable publications.
[ { "created": "Tue, 29 Jun 2021 08:26:39 GMT", "version": "v1" }, { "created": "Tue, 6 Jul 2021 07:42:56 GMT", "version": "v2" }, { "created": "Mon, 7 Mar 2022 02:41:34 GMT", "version": "v3" }, { "created": "Tue, 19 Apr 2022 13:18:20 GMT", "version": "v4" } ]
2022-05-10
[ [ "You", "Taekho", "" ], [ "Park", "Jinseo", "" ], [ "Lee", "June Young", "" ], [ "Yun", "Jinhyuk", "" ], [ "Jung", "Woo-Sung", "" ] ]
Questionable publications have been accused of "greedy" practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis with various participants from academia and compare the results with those of their unaccused counterparts using billions of citation records, including liaisons, i.e., journals and publishers, and prosumers, i.e., authors. Questionable publications attribute publisher-level self-citations to their journals while limiting journal-level self-citations; yet, conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting self-favouring citations among questionable journals (QJs) from publishers. Additionally, we demonstrate that the questionable publications were less disruptive and influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers. The findings provide a basis for actionable policy-making against questionable publications.
2403.05156
Minghui Xu
Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren and Xiuzhen Cheng
On Protecting the Data Privacy of Large Language Models (LLMs): A Survey
18 pages, 4 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) are complex artificial intelligence systems capable of understanding, generating and translating human language. They learn language patterns by analyzing large amounts of text data, allowing them to perform writing, conversation, summarizing and other language tasks. When LLMs process and generate large amounts of data, there is a risk of leaking sensitive information, which may threaten data privacy. This paper concentrates on elucidating the data privacy concerns associated with LLMs to foster a comprehensive understanding. Specifically, a thorough investigation is undertaken to delineate the spectrum of data privacy threats, encompassing both passive privacy leakage and active privacy attacks within LLMs. Subsequently, we conduct an assessment of the privacy protection mechanisms employed by LLMs at various stages, followed by a detailed examination of their efficacy and constraints. Finally, the discourse extends to delineate the challenges encountered and outline prospective directions for advancement in the realm of LLM privacy protection.
[ { "created": "Fri, 8 Mar 2024 08:47:48 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 14:17:57 GMT", "version": "v2" } ]
2024-03-15
[ [ "Yan", "Biwei", "" ], [ "Li", "Kun", "" ], [ "Xu", "Minghui", "" ], [ "Dong", "Yueyan", "" ], [ "Zhang", "Yue", "" ], [ "Ren", "Zhaochun", "" ], [ "Cheng", "Xiuzhen", "" ] ]
Large language models (LLMs) are complex artificial intelligence systems capable of understanding, generating and translating human language. They learn language patterns by analyzing large amounts of text data, allowing them to perform writing, conversation, summarizing and other language tasks. When LLMs process and generate large amounts of data, there is a risk of leaking sensitive information, which may threaten data privacy. This paper concentrates on elucidating the data privacy concerns associated with LLMs to foster a comprehensive understanding. Specifically, a thorough investigation is undertaken to delineate the spectrum of data privacy threats, encompassing both passive privacy leakage and active privacy attacks within LLMs. Subsequently, we conduct an assessment of the privacy protection mechanisms employed by LLMs at various stages, followed by a detailed examination of their efficacy and constraints. Finally, the discourse extends to delineate the challenges encountered and outline prospective directions for advancement in the realm of LLM privacy protection.
2403.03473
Xinwei Ou
Xinwei Ou, Ce Zhu, Xiaolin Huang, and Yipeng Liu
Inverse-Free Fast Natural Gradient Descent Method for Deep Learning
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Second-order optimization techniques have the potential to achieve faster convergence rates compared to first-order methods through the incorporation of second-order derivatives or statistics. However, their utilization in deep learning is limited due to their computational inefficiency. Various approaches have been proposed to address this issue, primarily centered on minimizing the size of the matrix to be inverted. Nevertheless, the necessity of performing the inverse operation iteratively persists. In this work, we present a fast natural gradient descent (FNGD) method that only requires inversion during the first epoch. Specifically, it is revealed that natural gradient descent (NGD) is essentially a weighted sum of per-sample gradients. Our novel approach further proposes to share these weighted coefficients across epochs without affecting empirical performance. Consequently, FNGD exhibits similarities to the average sum in first-order methods, leading to the computational complexity of FNGD being comparable to that of first-order methods. Extensive experiments on image classification and machine translation tasks demonstrate the efficiency of the proposed FNGD. For training ResNet-18 on CIFAR-100, FNGD can achieve a speedup of 2.07$\times$ compared with KFAC. For training Transformer on Multi30K, FNGD outperforms AdamW by 24 BLEU points while requiring almost the same training time.
[ { "created": "Wed, 6 Mar 2024 05:13:28 GMT", "version": "v1" }, { "created": "Sun, 28 Apr 2024 10:52:32 GMT", "version": "v2" } ]
2024-04-30
[ [ "Ou", "Xinwei", "" ], [ "Zhu", "Ce", "" ], [ "Huang", "Xiaolin", "" ], [ "Liu", "Yipeng", "" ] ]
Second-order optimization techniques have the potential to achieve faster convergence rates compared to first-order methods through the incorporation of second-order derivatives or statistics. However, their utilization in deep learning is limited due to their computational inefficiency. Various approaches have been proposed to address this issue, primarily centered on minimizing the size of the matrix to be inverted. Nevertheless, the necessity of performing the inverse operation iteratively persists. In this work, we present a fast natural gradient descent (FNGD) method that only requires inversion during the first epoch. Specifically, it is revealed that natural gradient descent (NGD) is essentially a weighted sum of per-sample gradients. Our novel approach further proposes to share these weighted coefficients across epochs without affecting empirical performance. Consequently, FNGD exhibits similarities to the average sum in first-order methods, leading to the computational complexity of FNGD being comparable to that of first-order methods. Extensive experiments on image classification and machine translation tasks demonstrate the efficiency of the proposed FNGD. For training ResNet-18 on CIFAR-100, FNGD can achieve a speedup of 2.07$\times$ compared with KFAC. For training Transformer on Multi30K, FNGD outperforms AdamW by 24 BLEU points while requiring almost the same training time.
2209.03447
Yulai Zhao
Yulai Zhao, Jianshu Chen, Simon S. Du
Blessing of Class Diversity in Pre-training
AISTATS 2023 (Oral)
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper presents a new statistical analysis aiming to explain the recent superior achievements of the pre-training techniques in natural language processing (NLP). We prove that when the classes of the pre-training task (e.g., different words in the masked language model task) are sufficiently diverse, in the sense that the least singular value of the last linear layer in pre-training (denoted as $\tilde{\nu}$) is large, then pre-training can significantly improve the sample efficiency of downstream tasks. Specifically, we show the transfer learning excess risk enjoys an $O\left(\frac{1}{\tilde{\nu} \sqrt{n}}\right)$ rate, in contrast to the $O\left(\frac{1}{\sqrt{m}}\right)$ rate in the standard supervised learning. Here, $n$ is the number of pre-training data and $m$ is the number of data in the downstream task, and typically $n \gg m$. Our proof relies on a vector-form Rademacher complexity chain rule for disassembling composite function classes and a modified self-concordance condition. These techniques can be of independent interest.
[ { "created": "Wed, 7 Sep 2022 20:10:12 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2022 15:44:41 GMT", "version": "v2" }, { "created": "Sun, 12 Feb 2023 17:45:39 GMT", "version": "v3" } ]
2023-02-14
[ [ "Zhao", "Yulai", "" ], [ "Chen", "Jianshu", "" ], [ "Du", "Simon S.", "" ] ]
This paper presents a new statistical analysis aiming to explain the recent superior achievements of the pre-training techniques in natural language processing (NLP). We prove that when the classes of the pre-training task (e.g., different words in the masked language model task) are sufficiently diverse, in the sense that the least singular value of the last linear layer in pre-training (denoted as $\tilde{\nu}$) is large, then pre-training can significantly improve the sample efficiency of downstream tasks. Specifically, we show the transfer learning excess risk enjoys an $O\left(\frac{1}{\tilde{\nu} \sqrt{n}}\right)$ rate, in contrast to the $O\left(\frac{1}{\sqrt{m}}\right)$ rate in the standard supervised learning. Here, $n$ is the number of pre-training data and $m$ is the number of data in the downstream task, and typically $n \gg m$. Our proof relies on a vector-form Rademacher complexity chain rule for disassembling composite function classes and a modified self-concordance condition. These techniques can be of independent interest.