Column schema (name: type, length range):

id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
1901.09515
Lin Chen
Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi
Black Box Submodular Maximization: Discrete and Continuous Settings
Accepted to AISTATS 2020. First two authors contributed equally to this work
null
null
null
cs.LG cs.DS math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided. For a monotone and continuous DR-submodular function, and subject to a bounded convex body constraint, we propose Black-box Continuous Greedy, a derivative-free algorithm that provably achieves the tight $[(1-1/e)OPT-\epsilon]$ approximation guarantee with $O(d/\epsilon^3)$ function evaluations. We then extend our result to the stochastic setting where function values are subject to stochastic zero-mean noise. It is through this stochastic generalization that we revisit the discrete submodular maximization problem and use the multi-linear extension as a bridge between discrete and continuous settings. Finally, we extensively evaluate the performance of our algorithm on continuous and discrete submodular objective functions using both synthetic and real data.
[ { "created": "Mon, 28 Jan 2019 04:53:53 GMT", "version": "v1" }, { "created": "Sun, 1 Mar 2020 20:57:49 GMT", "version": "v2" } ]
2020-03-03
[ [ "Chen", "Lin", "" ], [ "Zhang", "Mingrui", "" ], [ "Hassani", "Hamed", "" ], [ "Karbasi", "Amin", "" ] ]
In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided. For a monotone and continuous DR-submodular function, and subject to a bounded convex body constraint, we propose Black-box Continuous Greedy, a derivative-free algorithm that provably achieves the tight $[(1-1/e)OPT-\epsilon]$ approximation guarantee with $O(d/\epsilon^3)$ function evaluations. We then extend our result to the stochastic setting where function values are subject to stochastic zero-mean noise. It is through this stochastic generalization that we revisit the discrete submodular maximization problem and use the multi-linear extension as a bridge between discrete and continuous settings. Finally, we extensively evaluate the performance of our algorithm on continuous and discrete submodular objective functions using both synthetic and real data.
2001.05671
Yuto Nakashima
Kohei Yamada, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda
Faster STR-EC-LCS Computation
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The longest common subsequence (LCS) problem is a central problem in stringology that finds the longest common subsequence of two given strings $A$ and $B$. More recently, a set of four constrained LCS problems (called the generalized constrained LCS problem) was proposed by Chen and Chao [J. Comb. Optim., 2011]. In this paper, we consider the substring-excluding constrained LCS (STR-EC-LCS) problem. A string $Z$ is said to be an STR-EC-LCS of two given strings $A$ and $B$ excluding $P$ if $Z$ is one of the longest common subsequences of $A$ and $B$ that does not contain $P$ as a substring. Wang et al. proposed a dynamic programming solution which computes an STR-EC-LCS in $O(mnr)$ time and space, where $m = |A|, n = |B|, r = |P|$ [Inf. Process. Lett., 2013]. In this paper, we present a new solution for the STR-EC-LCS problem. Our algorithm computes an STR-EC-LCS in $O(n|\Sigma| + (L+1)(m-L+1)r)$ time, where $|\Sigma| \leq \min\{m, n\}$ denotes the number of distinct characters occurring in both $A$ and $B$, and $L$ is the length of the STR-EC-LCS. This algorithm is faster than the $O(mnr)$-time algorithm for short/long STR-EC-LCS (namely, $L \in O(1)$ or $m-L \in O(1)$), and is at least as efficient as the $O(mnr)$-time algorithm in all cases.
[ { "created": "Thu, 16 Jan 2020 06:30:29 GMT", "version": "v1" } ]
2020-01-17
[ [ "Yamada", "Kohei", "" ], [ "Nakashima", "Yuto", "" ], [ "Inenaga", "Shunsuke", "" ], [ "Bannai", "Hideo", "" ], [ "Takeda", "Masayuki", "" ] ]
The longest common subsequence (LCS) problem is a central problem in stringology that finds the longest common subsequence of two given strings $A$ and $B$. More recently, a set of four constrained LCS problems (called the generalized constrained LCS problem) was proposed by Chen and Chao [J. Comb. Optim., 2011]. In this paper, we consider the substring-excluding constrained LCS (STR-EC-LCS) problem. A string $Z$ is said to be an STR-EC-LCS of two given strings $A$ and $B$ excluding $P$ if $Z$ is one of the longest common subsequences of $A$ and $B$ that does not contain $P$ as a substring. Wang et al. proposed a dynamic programming solution which computes an STR-EC-LCS in $O(mnr)$ time and space, where $m = |A|, n = |B|, r = |P|$ [Inf. Process. Lett., 2013]. In this paper, we present a new solution for the STR-EC-LCS problem. Our algorithm computes an STR-EC-LCS in $O(n|\Sigma| + (L+1)(m-L+1)r)$ time, where $|\Sigma| \leq \min\{m, n\}$ denotes the number of distinct characters occurring in both $A$ and $B$, and $L$ is the length of the STR-EC-LCS. This algorithm is faster than the $O(mnr)$-time algorithm for short/long STR-EC-LCS (namely, $L \in O(1)$ or $m-L \in O(1)$), and is at least as efficient as the $O(mnr)$-time algorithm in all cases.
2406.01869
Christine Dewi
Christine Dewi, Dhananjay Thiruvady, and Nayyar Zaidi
Fruit Classification System with Deep Learning and Neural Architecture Search
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The fruit identification process involves analyzing and categorizing different types of fruits based on their visual characteristics. This can be achieved using a range of methods, from manual examination and conventional computer vision to more sophisticated approaches employing machine learning and deep learning. Our study identified a total of 15 distinct categories of fruit: Avocado, Banana, Cherry, Apple Braeburn, Apple golden 1, Apricot, Grape, Kiwi, Mango, Orange, Papaya, Peach, Pineapple, Pomegranate, and Strawberry. Neural Architecture Search (NAS) is a technique within deep learning and artificial intelligence that automates the design and refinement of neural network topologies. NAS aims to identify neural network structures that are highly suitable for tasks such as the detection of fruits. Our proposed model, with 99.98% mAP, improved on the detection performance of the preceding research study that used the same fruit datasets. In addition, after the completion of the study, a comparative analysis was carried out to assess our findings against those of related research. Compared to the findings of earlier studies, the proposed detector exhibited higher performance in terms of both accuracy and precision.
[ { "created": "Tue, 4 Jun 2024 00:41:47 GMT", "version": "v1" } ]
2024-06-05
[ [ "Dewi", "Christine", "" ], [ "Thiruvady", "Dhananjay", "" ], [ "Zaidi", "Nayyar", "" ] ]
The fruit identification process involves analyzing and categorizing different types of fruits based on their visual characteristics. This can be achieved using a range of methods, from manual examination and conventional computer vision to more sophisticated approaches employing machine learning and deep learning. Our study identified a total of 15 distinct categories of fruit: Avocado, Banana, Cherry, Apple Braeburn, Apple golden 1, Apricot, Grape, Kiwi, Mango, Orange, Papaya, Peach, Pineapple, Pomegranate, and Strawberry. Neural Architecture Search (NAS) is a technique within deep learning and artificial intelligence that automates the design and refinement of neural network topologies. NAS aims to identify neural network structures that are highly suitable for tasks such as the detection of fruits. Our proposed model, with 99.98% mAP, improved on the detection performance of the preceding research study that used the same fruit datasets. In addition, after the completion of the study, a comparative analysis was carried out to assess our findings against those of related research. Compared to the findings of earlier studies, the proposed detector exhibited higher performance in terms of both accuracy and precision.
2101.11935
Michal Kazmierski
Michal Kazmierski, Mattea Welch, Sejin Kim, Chris McIntosh, Princess Margaret Head and Neck Cancer Group, Katrina Rey-McIntyre, Shao Hui Huang, Tirth Patel, Tony Tadic, Michael Milosevic, Fei-Fei Liu, Andrew Hope, Scott Bratman and Benjamin Haibe-Kains
A Machine Learning Challenge for Prognostic Modelling in Head and Neck Cancer Using Multi-modal Data
27 pages, 7 figures, under review
null
null
null
cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate prognosis for an individual patient is a key component of precision oncology. Recent advances in machine learning have enabled the development of models using a wider range of data, including imaging. Radiomics aims to extract quantitative predictive and prognostic biomarkers from routine medical imaging, but evidence for computed tomography radiomics for prognosis remains inconclusive. We have conducted an institutional machine learning challenge to develop an accurate model for overall survival prediction in head and neck cancer using clinical data extracted from electronic medical records and pre-treatment radiological images, as well as to evaluate the true added benefit of radiomics for head and neck cancer prognosis. Using a large, retrospective dataset of 2,552 patients and a rigorous evaluation framework, we compared 12 different submissions using imaging and clinical data, separately or in combination. The winning approach used non-linear, multitask learning on clinical data and tumour volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction and outperforming models relying on clinical data only, engineered radiomics and deep learning. Combining all submissions in an ensemble model resulted in improved accuracy, with the highest gain from an image-based deep learning model. Our results show the potential of machine learning and simple, informative prognostic factors in combination with large datasets as a tool to guide personalized cancer care.
[ { "created": "Thu, 28 Jan 2021 11:20:34 GMT", "version": "v1" } ]
2021-01-29
[ [ "Kazmierski", "Michal", "" ], [ "Welch", "Mattea", "" ], [ "Kim", "Sejin", "" ], [ "McIntosh", "Chris", "" ], [ "Head", "Princess Margaret", "" ], [ "Group", "Neck Cancer", "" ], [ "Rey-McIntyre", "Katrina", "" ], [ "Huang", "Shao Hui", "" ], [ "Patel", "Tirth", "" ], [ "Tadic", "Tony", "" ], [ "Milosevic", "Michael", "" ], [ "Liu", "Fei-Fei", "" ], [ "Hope", "Andrew", "" ], [ "Bratman", "Scott", "" ], [ "Haibe-Kains", "Benjamin", "" ] ]
Accurate prognosis for an individual patient is a key component of precision oncology. Recent advances in machine learning have enabled the development of models using a wider range of data, including imaging. Radiomics aims to extract quantitative predictive and prognostic biomarkers from routine medical imaging, but evidence for computed tomography radiomics for prognosis remains inconclusive. We have conducted an institutional machine learning challenge to develop an accurate model for overall survival prediction in head and neck cancer using clinical data extracted from electronic medical records and pre-treatment radiological images, as well as to evaluate the true added benefit of radiomics for head and neck cancer prognosis. Using a large, retrospective dataset of 2,552 patients and a rigorous evaluation framework, we compared 12 different submissions using imaging and clinical data, separately or in combination. The winning approach used non-linear, multitask learning on clinical data and tumour volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction and outperforming models relying on clinical data only, engineered radiomics and deep learning. Combining all submissions in an ensemble model resulted in improved accuracy, with the highest gain from an image-based deep learning model. Our results show the potential of machine learning and simple, informative prognostic factors in combination with large datasets as a tool to guide personalized cancer care.
2406.09185
Niharika Malvia
Syed Abdul Mateen, Niharika Malvia, Syed Abdul Khader, Danny Wang, Deepti Srinivasan, Chi-Fu Jeffrey Yang, Lana Schumacher, Sandeep Manjanna
Thoracic Surgery Video Analysis for Surgical Phase Recognition
2 pages, 2 figures
ICRA-RAMI Workshop, May 2024, Japan
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper presents an approach for surgical phase recognition using video data, aiming to provide a comprehensive understanding of surgical procedures for automated workflow analysis. The advent of robotic surgery, digitized operating rooms, and the generation of vast amounts of data have opened doors for the application of machine learning and computer vision in the analysis of surgical videos. Among these advancements, Surgical Phase Recognition (SPR) stands out as an emerging technology that has the potential to recognize and assess the ongoing surgical scenario, summarize the surgery, evaluate surgical skills, offer surgical decision support, and facilitate medical training. In this paper, we analyse and evaluate both frame-based and video clipping-based phase recognition on a thoracic surgery dataset consisting of 11 classes of phases. Specifically, we utilize ImageNet ViT for image-based classification and VideoMAE as the baseline model for video-based classification. We show that Masked Video Distillation (MVD) exhibits superior performance, achieving a top-1 accuracy of 72.9%, compared to 52.31% achieved by ImageNet ViT. These findings underscore the efficacy of video-based classifiers over their image-based counterparts in surgical phase recognition tasks.
[ { "created": "Thu, 13 Jun 2024 14:47:57 GMT", "version": "v1" } ]
2024-06-14
[ [ "Mateen", "Syed Abdul", "" ], [ "Malvia", "Niharika", "" ], [ "Khader", "Syed Abdul", "" ], [ "Wang", "Danny", "" ], [ "Srinivasan", "Deepti", "" ], [ "Yang", "Chi-Fu Jeffrey", "" ], [ "Schumacher", "Lana", "" ], [ "Manjanna", "Sandeep", "" ] ]
This paper presents an approach for surgical phase recognition using video data, aiming to provide a comprehensive understanding of surgical procedures for automated workflow analysis. The advent of robotic surgery, digitized operating rooms, and the generation of vast amounts of data have opened doors for the application of machine learning and computer vision in the analysis of surgical videos. Among these advancements, Surgical Phase Recognition (SPR) stands out as an emerging technology that has the potential to recognize and assess the ongoing surgical scenario, summarize the surgery, evaluate surgical skills, offer surgical decision support, and facilitate medical training. In this paper, we analyse and evaluate both frame-based and video clipping-based phase recognition on a thoracic surgery dataset consisting of 11 classes of phases. Specifically, we utilize ImageNet ViT for image-based classification and VideoMAE as the baseline model for video-based classification. We show that Masked Video Distillation (MVD) exhibits superior performance, achieving a top-1 accuracy of 72.9%, compared to 52.31% achieved by ImageNet ViT. These findings underscore the efficacy of video-based classifiers over their image-based counterparts in surgical phase recognition tasks.
1807.09040
Anshoo Tandon
Anshoo Tandon, Mehul Motani, Lav R. Varshney
Are RLL Codes Suitable for Simultaneous Energy and Information Transfer?
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Run-length limited (RLL) codes are a well-studied class of constrained codes having application in diverse areas such as optical and magnetic data recording systems, DNA-based storage, and visible light communication. RLL codes have also been proposed for the emerging area of simultaneous energy and information transfer, where the receiver uses the received signal for decoding information as well as for harvesting energy to run its circuitry. In this paper, we show that RLL codes are not the best codes for simultaneous energy and information transfer, in terms of the maximum number of codewords which avoid energy outage, i.e., outage-constrained capacity. Specifically, we show that sliding window constrained (SWC) codes and subblock energy constrained (SEC) codes have significantly higher outage-constrained capacities than RLL codes.
[ { "created": "Tue, 24 Jul 2018 11:26:06 GMT", "version": "v1" } ]
2018-07-25
[ [ "Tandon", "Anshoo", "" ], [ "Motani", "Mehul", "" ], [ "Varshney", "Lav R.", "" ] ]
Run-length limited (RLL) codes are a well-studied class of constrained codes having application in diverse areas such as optical and magnetic data recording systems, DNA-based storage, and visible light communication. RLL codes have also been proposed for the emerging area of simultaneous energy and information transfer, where the receiver uses the received signal for decoding information as well as for harvesting energy to run its circuitry. In this paper, we show that RLL codes are not the best codes for simultaneous energy and information transfer, in terms of the maximum number of codewords which avoid energy outage, i.e., outage-constrained capacity. Specifically, we show that sliding window constrained (SWC) codes and subblock energy constrained (SEC) codes have significantly higher outage-constrained capacities than RLL codes.
2305.03881
Swagatika Dash
Swagatika Dash
Fairness in Image Search: A Study of Occupational Stereotyping in Image Retrieval and its Debiasing
20 Pages, Work uses Proprietary Search Systems from the year 2021
null
null
null
cs.IR cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-modal search engines have experienced significant growth and widespread use in recent years, making them the second most common internet use. While search engine systems offer a range of services, the image search field has recently become a focal point in the information retrieval community, as the adage goes, "a picture is worth a thousand words". Although popular search engines like Google excel at image search accuracy and agility, there is an ongoing debate over whether their search results can be biased in terms of gender, language, demographics, socio-cultural aspects, and stereotypes. This potential for bias can have a significant impact on individuals' perceptions and influence their perspectives. In this paper, we present our study on bias and fairness in web search, with a focus on keyword-based image search. We first discuss several kinds of biases that exist in search systems and why it is important to mitigate them. We narrow down our study to assessing and mitigating occupational stereotypes in image search, which is a prevalent fairness issue in image retrieval. For the assessment of stereotypes, we take gender as an indicator. We explore various open-source and proprietary APIs for gender identification from images. With these, we examine the extent of gender bias in top-ranked image search results obtained for several occupational keywords. To mitigate the bias, we then propose a fairness-aware re-ranking algorithm that optimizes (a) relevance of the search result with the keyword and (b) fairness w.r.t. the genders identified. We experiment on 100 top-ranked images obtained for 10 occupational keywords and consider random re-ranking and re-ranking based on relevance as baselines. Our experimental results show that the fairness-aware re-ranking algorithm produces rankings with better fairness scores and competitive relevance scores than the baselines.
[ { "created": "Sat, 6 May 2023 00:24:44 GMT", "version": "v1" }, { "created": "Tue, 22 Aug 2023 16:09:59 GMT", "version": "v2" } ]
2023-08-23
[ [ "Dash", "Swagatika", "" ] ]
Multi-modal search engines have experienced significant growth and widespread use in recent years, making them the second most common internet use. While search engine systems offer a range of services, the image search field has recently become a focal point in the information retrieval community, as the adage goes, "a picture is worth a thousand words". Although popular search engines like Google excel at image search accuracy and agility, there is an ongoing debate over whether their search results can be biased in terms of gender, language, demographics, socio-cultural aspects, and stereotypes. This potential for bias can have a significant impact on individuals' perceptions and influence their perspectives. In this paper, we present our study on bias and fairness in web search, with a focus on keyword-based image search. We first discuss several kinds of biases that exist in search systems and why it is important to mitigate them. We narrow down our study to assessing and mitigating occupational stereotypes in image search, which is a prevalent fairness issue in image retrieval. For the assessment of stereotypes, we take gender as an indicator. We explore various open-source and proprietary APIs for gender identification from images. With these, we examine the extent of gender bias in top-ranked image search results obtained for several occupational keywords. To mitigate the bias, we then propose a fairness-aware re-ranking algorithm that optimizes (a) relevance of the search result with the keyword and (b) fairness w.r.t. the genders identified. We experiment on 100 top-ranked images obtained for 10 occupational keywords and consider random re-ranking and re-ranking based on relevance as baselines. Our experimental results show that the fairness-aware re-ranking algorithm produces rankings with better fairness scores and competitive relevance scores than the baselines.
2311.08223
Weidong Chen
Ting Wang, Weidong Chen, Yuanhe Tian, Yan Song, Zhendong Mao
Improving Image Captioning via Predicting Structured Concepts
Accepted by EMNLP 2023 (Main Conference, Oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the difficulty of bridging the semantic gap between images and texts in the image captioning task, conventional studies in this area have treated semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept prediction were obtained, the aforementioned studies normally ignore the relationships among concepts, which rely not only on objects in the image but also on word dependencies in the text, and thus offer considerable potential for improving the process of generating good descriptions. In this paper, we propose a structured concept predictor (SCP) to predict concepts and their structures, then integrate them into captioning, so as to enhance the contribution of visual signals in this task via concepts and further use their relations to distinguish cross-modal semantics for better description generation. In particular, we design weighted graph convolutional networks (W-GCN) to depict concept relations driven by word dependencies, and then learn differentiated contributions from these concepts for the following decoding process. Therefore, our approach captures potential relations among concepts and discriminatively learns different concepts, thereby effectively facilitating image captioning with inherited information across modalities. Extensive experiments and their results demonstrate the effectiveness of our approach as well as each proposed module in this work.
[ { "created": "Tue, 14 Nov 2023 15:01:58 GMT", "version": "v1" }, { "created": "Tue, 28 Nov 2023 04:05:03 GMT", "version": "v2" } ]
2023-11-29
[ [ "Wang", "Ting", "" ], [ "Chen", "Weidong", "" ], [ "Tian", "Yuanhe", "" ], [ "Song", "Yan", "" ], [ "Mao", "Zhendong", "" ] ]
Given the difficulty of bridging the semantic gap between images and texts in the image captioning task, conventional studies in this area have treated semantic concepts as a bridge between the two modalities and improved captioning performance accordingly. Although promising results on concept prediction were obtained, the aforementioned studies normally ignore the relationships among concepts, which rely not only on objects in the image but also on word dependencies in the text, and thus offer considerable potential for improving the process of generating good descriptions. In this paper, we propose a structured concept predictor (SCP) to predict concepts and their structures, then integrate them into captioning, so as to enhance the contribution of visual signals in this task via concepts and further use their relations to distinguish cross-modal semantics for better description generation. In particular, we design weighted graph convolutional networks (W-GCN) to depict concept relations driven by word dependencies, and then learn differentiated contributions from these concepts for the following decoding process. Therefore, our approach captures potential relations among concepts and discriminatively learns different concepts, thereby effectively facilitating image captioning with inherited information across modalities. Extensive experiments and their results demonstrate the effectiveness of our approach as well as each proposed module in this work.
2103.14712
Arijit Ray
Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models
Applied AI Letters, Wiley, 25 October 2021
null
10.22541/au.162464902.28050142/v1
null
cs.CV cs.AI cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30\% and that our proxy helpfulness metrics correlate strongly ($\rho>0.97$) with how well users can predict model correctness.
[ { "created": "Fri, 26 Mar 2021 19:52:32 GMT", "version": "v1" }, { "created": "Wed, 31 Mar 2021 21:15:40 GMT", "version": "v2" }, { "created": "Mon, 25 Oct 2021 18:58:42 GMT", "version": "v3" } ]
2021-10-27
[ [ "Ray", "Arijit", "" ], [ "Cogswell", "Michael", "" ], [ "Lin", "Xiao", "" ], [ "Alipour", "Kamran", "" ], [ "Divakaran", "Ajay", "" ], [ "Yao", "Yi", "" ], [ "Burachas", "Giedrius", "" ] ]
Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30\% and that our proxy helpfulness metrics correlate strongly ($\rho>0.97$) with how well users can predict model correctness.
2205.04935
Sara Saeidian
Sara Saeidian (1), Giulia Cervia (2), Tobias J. Oechtering (1), Mikael Skoglund (1) ((1) KTH Royal Institute of Technology, (2) IMT Nord Europe)
Pointwise Maximal Leakage
Results unchanged. New examples added. This version has been accepted for publication in IEEE Transactions on Information Theory
null
10.1109/TIT.2023.3304378
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a privacy measure called pointwise maximal leakage, generalizing the pre-existing notion of maximal leakage, which quantifies the amount of information leaking about a secret $X$ by disclosing a single outcome of a (randomized) function calculated on $X$. Pointwise maximal leakage is a robust and operationally meaningful privacy measure that captures the largest amount of information leaking about $X$ to adversaries seeking to guess arbitrary (possibly randomized) functions of $X$, or equivalently, aiming to maximize arbitrary gain functions. We study several properties of pointwise maximal leakage, e.g., how it composes over multiple outcomes, how it is affected by pre- and post-processing, etc. Furthermore, we propose to view information leakage as a random variable which, in turn, allows us to regard privacy guarantees as requirements imposed on different statistical properties of the information leakage random variable. We define several privacy guarantees and study how they behave under pre-processing, post-processing and composition. Finally, we examine the relationship between pointwise maximal leakage and other privacy notions such as local differential privacy, local information privacy, $f$-information, and so on. Overall, our paper constructs a robust and flexible framework for privacy risk assessment whose central notion has a strong operational meaning which can be adapted to a variety of applications and practical scenarios.
[ { "created": "Tue, 10 May 2022 14:46:41 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 14:03:40 GMT", "version": "v2" } ]
2023-08-16
[ [ "Saeidian", "Sara", "", "KTH Royal Institute of Technology" ], [ "Cervia", "Giulia", "", "IMT Nord Europe" ], [ "Oechtering", "Tobias J.", "", "KTH Royal Institute of Technology" ], [ "Skoglund", "Mikael", "", "KTH Royal Institute of Technology" ] ]
We introduce a privacy measure called pointwise maximal leakage, generalizing the pre-existing notion of maximal leakage, which quantifies the amount of information leaking about a secret $X$ by disclosing a single outcome of a (randomized) function calculated on $X$. Pointwise maximal leakage is a robust and operationally meaningful privacy measure that captures the largest amount of information leaking about $X$ to adversaries seeking to guess arbitrary (possibly randomized) functions of $X$, or equivalently, aiming to maximize arbitrary gain functions. We study several properties of pointwise maximal leakage, e.g., how it composes over multiple outcomes, how it is affected by pre- and post-processing, etc. Furthermore, we propose to view information leakage as a random variable which, in turn, allows us to regard privacy guarantees as requirements imposed on different statistical properties of the information leakage random variable. We define several privacy guarantees and study how they behave under pre-processing, post-processing and composition. Finally, we examine the relationship between pointwise maximal leakage and other privacy notions such as local differential privacy, local information privacy, $f$-information, and so on. Overall, our paper constructs a robust and flexible framework for privacy risk assessment whose central notion has a strong operational meaning which can be adapted to a variety of applications and practical scenarios.
2306.04628
Sangjun Han
Sangjun Han, Hyeongrae Ihm, Woohyung Lim
Systematic Analysis of Music Representations from BERT
null
null
null
null
cs.SD cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
There have been numerous attempts to represent raw data as numerical vectors that effectively capture semantic and contextual information. However, in the field of symbolic music, previous works have attempted to validate their music embeddings by observing the performance improvement of various fine-tuning tasks. In this work, we directly analyze embeddings from BERT and BERT with contrastive learning trained on bar-level MIDI, inspecting the musical information that can be obtained from MIDI events. We observe that the embeddings exhibit distinct characteristics of information depending on the contrastive objectives and the choice of layers. Our code is available at https://github.com/sjhan91/MusicBERT.
[ { "created": "Tue, 6 Jun 2023 13:26:55 GMT", "version": "v1" } ]
2023-06-08
[ [ "Han", "Sangjun", "" ], [ "Ihm", "Hyeongrae", "" ], [ "Lim", "Woohyung", "" ] ]
There have been numerous attempts to represent raw data as numerical vectors that effectively capture semantic and contextual information. However, in the field of symbolic music, previous works have attempted to validate their music embeddings by observing the performance improvement of various fine-tuning tasks. In this work, we directly analyze embeddings from BERT and BERT with contrastive learning trained on bar-level MIDI, inspecting the musical information that can be obtained from MIDI events. We observe that the embeddings exhibit distinct characteristics of information depending on the contrastive objectives and the choice of layers. Our code is available at https://github.com/sjhan91/MusicBERT.
cs/0701095
Pedro Cabalar
Pedro Cabalar and Paolo Ferraris
Propositional theories are strongly equivalent to logic programs
15 pages
null
null
null
cs.AI cs.LO
null
This paper presents a property of propositional theories under the answer set semantics (called Equilibrium Logic for this general syntax): any theory can always be reexpressed as a strongly equivalent disjunctive logic program, possibly with negation in the head. We provide two different proofs for this result: one involving a syntactic transformation, and one that constructs a program starting from the countermodels of the theory in the intermediate logic of here-and-there.
[ { "created": "Tue, 16 Jan 2007 12:29:55 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cabalar", "Pedro", "" ], [ "Ferraris", "Paolo", "" ] ]
This paper presents a property of propositional theories under the answer set semantics (called Equilibrium Logic for this general syntax): any theory can always be reexpressed as a strongly equivalent disjunctive logic program, possibly with negation in the head. We provide two different proofs for this result: one involving a syntactic transformation, and one that constructs a program starting from the countermodels of the theory in the intermediate logic of here-and-there.
2302.10289
Shantanu Ghosh
Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich
Tackling Shortcut Learning in Deep Neural Networks: An Iterative Approach with Interpretable Models
2nd Workshop on Spurious Correlations, Invariance, and Stability, ICML 2023
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
We use concept-based interpretable models to mitigate shortcut learning. Existing methods lack interpretability. Beginning with a Blackbox, we iteratively carve out a mixture of interpretable experts (MoIE) and a residual network. Each expert explains a subset of data using First Order Logic (FOL). While explaining a sample, the FOL from biased BB-derived MoIE detects the shortcut effectively. Finetuning the BB with Metadata Normalization (MDN) eliminates the shortcut. The FOLs from the finetuned-BB-derived MoIE verify the elimination of the shortcut. Our experiments show that MoIE does not hurt the accuracy of the original BB and eliminates shortcuts effectively.
[ { "created": "Mon, 20 Feb 2023 20:25:41 GMT", "version": "v1" }, { "created": "Thu, 27 Apr 2023 18:40:36 GMT", "version": "v2" }, { "created": "Wed, 3 May 2023 03:37:49 GMT", "version": "v3" }, { "created": "Tue, 9 May 2023 18:49:36 GMT", "version": "v4" }, { "created": "Sat, 20 May 2023 00:36:18 GMT", "version": "v5" }, { "created": "Wed, 14 Jun 2023 16:47:36 GMT", "version": "v6" }, { "created": "Tue, 20 Jun 2023 15:56:23 GMT", "version": "v7" }, { "created": "Sun, 2 Jul 2023 03:44:09 GMT", "version": "v8" }, { "created": "Fri, 7 Jul 2023 05:50:00 GMT", "version": "v9" } ]
2023-07-10
[ [ "Ghosh", "Shantanu", "" ], [ "Yu", "Ke", "" ], [ "Arabshahi", "Forough", "" ], [ "Batmanghelich", "Kayhan", "" ] ]
We use concept-based interpretable models to mitigate shortcut learning. Existing methods lack interpretability. Beginning with a Blackbox, we iteratively carve out a mixture of interpretable experts (MoIE) and a residual network. Each expert explains a subset of data using First Order Logic (FOL). While explaining a sample, the FOL from biased BB-derived MoIE detects the shortcut effectively. Finetuning the BB with Metadata Normalization (MDN) eliminates the shortcut. The FOLs from the finetuned-BB-derived MoIE verify the elimination of the shortcut. Our experiments show that MoIE does not hurt the accuracy of the original BB and eliminates shortcuts effectively.
2007.15362
Simon D. Fink
Thomas Bl\"asius, Simon D. Fink, Ignaz Rutter
Synchronized Planarity with Applications to Constrained Planarity Problems
to appear in Proceedings of ESA 2021
null
null
null
cs.DS cs.DM
http://creativecommons.org/licenses/by/4.0/
We introduce the problem Synchronized Planarity. Roughly speaking, its input is a loop-free multi-graph together with synchronization constraints that, e.g., match pairs of vertices of equal degree by providing a bijection between their edges. Synchronized Planarity then asks whether the graph admits a crossing-free embedding into the plane such that the orders of edges around synchronized vertices are consistent. We show, on the one hand, that Synchronized Planarity can be solved in quadratic time, and, on the other hand, that it serves as a powerful modeling language that lets us easily formulate several constrained planarity problems as instances of Synchronized Planarity. In particular, this lets us solve Clustered Planarity in quadratic time, where the most efficient previously known algorithm has an upper bound of $O(n^{8})$.
[ { "created": "Thu, 30 Jul 2020 10:26:55 GMT", "version": "v1" }, { "created": "Thu, 22 Jul 2021 07:49:08 GMT", "version": "v2" } ]
2021-07-23
[ [ "Bläsius", "Thomas", "" ], [ "Fink", "Simon D.", "" ], [ "Rutter", "Ignaz", "" ] ]
We introduce the problem Synchronized Planarity. Roughly speaking, its input is a loop-free multi-graph together with synchronization constraints that, e.g., match pairs of vertices of equal degree by providing a bijection between their edges. Synchronized Planarity then asks whether the graph admits a crossing-free embedding into the plane such that the orders of edges around synchronized vertices are consistent. We show, on the one hand, that Synchronized Planarity can be solved in quadratic time, and, on the other hand, that it serves as a powerful modeling language that lets us easily formulate several constrained planarity problems as instances of Synchronized Planarity. In particular, this lets us solve Clustered Planarity in quadratic time, where the most efficient previously known algorithm has an upper bound of $O(n^{8})$.
2405.02066
Youngdong Jang
Youngdong Jang, Dong In Lee, MinHyuk Jang, Jong Wook Kim, Feng Yang, Sangpil Kim
WateRF: Robust Watermarks in Radiance Fields for Protection of Copyrights
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advances in the Neural Radiance Fields (NeRF) research offer extensive applications in diverse domains, but protecting their copyrights has not yet been researched in depth. Recently, NeRF watermarking has been considered one of the pivotal solutions for safely deploying NeRF-based 3D representations. However, existing methods are designed to apply only to implicit or explicit NeRF representations. In this work, we introduce an innovative watermarking method that can be employed in both representations of NeRF. This is achieved by fine-tuning NeRF to embed binary messages in the rendering process. In detail, we propose utilizing the discrete wavelet transform in the NeRF space for watermarking. Furthermore, we adopt a deferred back-propagation technique and introduce a combination with the patch-wise loss to improve rendering quality and bit accuracy with minimum trade-offs. We evaluate our method in three different aspects: capacity, invisibility, and robustness of the embedded watermarks in the 2D-rendered images. Our method achieves state-of-the-art performance with faster training speed over the compared state-of-the-art methods.
[ { "created": "Fri, 3 May 2024 12:56:34 GMT", "version": "v1" }, { "created": "Thu, 9 May 2024 19:23:33 GMT", "version": "v2" }, { "created": "Mon, 27 May 2024 11:48:37 GMT", "version": "v3" }, { "created": "Fri, 12 Jul 2024 03:35:19 GMT", "version": "v4" } ]
2024-07-15
[ [ "Jang", "Youngdong", "" ], [ "Lee", "Dong In", "" ], [ "Jang", "MinHyuk", "" ], [ "Kim", "Jong Wook", "" ], [ "Yang", "Feng", "" ], [ "Kim", "Sangpil", "" ] ]
The advances in the Neural Radiance Fields (NeRF) research offer extensive applications in diverse domains, but protecting their copyrights has not yet been researched in depth. Recently, NeRF watermarking has been considered one of the pivotal solutions for safely deploying NeRF-based 3D representations. However, existing methods are designed to apply only to implicit or explicit NeRF representations. In this work, we introduce an innovative watermarking method that can be employed in both representations of NeRF. This is achieved by fine-tuning NeRF to embed binary messages in the rendering process. In detail, we propose utilizing the discrete wavelet transform in the NeRF space for watermarking. Furthermore, we adopt a deferred back-propagation technique and introduce a combination with the patch-wise loss to improve rendering quality and bit accuracy with minimum trade-offs. We evaluate our method in three different aspects: capacity, invisibility, and robustness of the embedded watermarks in the 2D-rendered images. Our method achieves state-of-the-art performance with faster training speed over the compared state-of-the-art methods.
1802.08765
Yejia Liu
Oliver Schulte and Yejia Liu and Chao Li
Model Trees for Identifying Exceptional Players in the NHL Draft
14 pages
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drafting strong players is crucial for team success. We describe a new data-driven interpretable approach for assessing draft prospects in the National Hockey League. Successful previous approaches have built a predictive model based on player features, or derived performance predictions from the observed performance of comparable players in a cohort. This paper develops model tree learning, which incorporates strengths of both model-based and cohort-based approaches. A model tree partitions the feature space according to the values of discrete features, or learned thresholds for continuous features. Each leaf node in the tree defines a group of players, easily described to hockey experts, with its own group regression model. Compared to a single model, the model tree forms an ensemble that increases predictive power. Compared to cohort-based approaches, the groups of comparables are discovered from the data, without requiring a similarity metric. The performance predictions of the model tree are competitive with the state-of-the-art methods, which validates our model empirically. We show in case studies that the model tree player ranking can be used to highlight strong and weak points of players.
[ { "created": "Fri, 23 Feb 2018 23:39:41 GMT", "version": "v1" } ]
2018-02-27
[ [ "Schulte", "Oliver", "" ], [ "Liu", "Yejia", "" ], [ "Li", "Chao", "" ] ]
Drafting strong players is crucial for team success. We describe a new data-driven interpretable approach for assessing draft prospects in the National Hockey League. Successful previous approaches have built a predictive model based on player features, or derived performance predictions from the observed performance of comparable players in a cohort. This paper develops model tree learning, which incorporates strengths of both model-based and cohort-based approaches. A model tree partitions the feature space according to the values of discrete features, or learned thresholds for continuous features. Each leaf node in the tree defines a group of players, easily described to hockey experts, with its own group regression model. Compared to a single model, the model tree forms an ensemble that increases predictive power. Compared to cohort-based approaches, the groups of comparables are discovered from the data, without requiring a similarity metric. The performance predictions of the model tree are competitive with the state-of-the-art methods, which validates our model empirically. We show in case studies that the model tree player ranking can be used to highlight strong and weak points of players.
2306.13414
Onur Dizdar
Onur Dizdar, Ata Sattarzadeh, Yi Xien Yap, and Stephen Wang
A Low-Complexity Design for Rate-Splitting Multiple Access in Overloaded MIMO Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rate-Splitting Multiple Access (RSMA) is a robust multiple access scheme for multi-antenna wireless networks. In this work, we study the performance of RSMA in downlink overloaded networks, where the number of transmit antennas is smaller than the number of users. We provide analysis and closed-form solutions for optimal power and rate allocations that maximize max-min fairness when low-complexity precoding schemes are employed. The derived closed-form solutions are used to propose a low-complexity RSMA system design for precoder selection and resource allocation for an arbitrary number of users and antennas under perfect Channel State Information at the Transmitter (CSIT). We compare the performance of the proposed design with benchmark designs based on Space Division Multiple Access (SDMA) to show that the proposed low-complexity RSMA design achieves a significantly higher performance gain in overloaded networks.
[ { "created": "Fri, 23 Jun 2023 10:00:14 GMT", "version": "v1" } ]
2023-06-26
[ [ "Dizdar", "Onur", "" ], [ "Sattarzadeh", "Ata", "" ], [ "Yap", "Yi Xien", "" ], [ "Wang", "Stephen", "" ] ]
Rate-Splitting Multiple Access (RSMA) is a robust multiple access scheme for multi-antenna wireless networks. In this work, we study the performance of RSMA in downlink overloaded networks, where the number of transmit antennas is smaller than the number of users. We provide analysis and closed-form solutions for optimal power and rate allocations that maximize max-min fairness when low-complexity precoding schemes are employed. The derived closed-form solutions are used to propose a low-complexity RSMA system design for precoder selection and resource allocation for an arbitrary number of users and antennas under perfect Channel State Information at the Transmitter (CSIT). We compare the performance of the proposed design with benchmark designs based on Space Division Multiple Access (SDMA) to show that the proposed low-complexity RSMA design achieves a significantly higher performance gain in overloaded networks.
1805.06992
Jun Wang
Jun Wang, Sujoy Sikdar, Tyler Shepherd, Zhibing Zhao, Chunheng Jiang, Lirong Xia
Practical Algorithms for STV and Ranked Pairs with Parallel Universes Tiebreaking
15 pages, 12 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
STV and ranked pairs (RP) are two well-studied voting rules for group decision-making. They proceed in multiple rounds, and are affected by how ties are broken in each round. However, the literature is surprisingly vague about how ties should be broken. We propose the first algorithms for computing the set of alternatives that are winners under some tiebreaking mechanism under STV and RP, which is also known as parallel-universes tiebreaking (PUT). Unfortunately, PUT-winners are NP-complete to compute under STV and RP, and standard search algorithms from AI do not apply. We propose multiple DFS-based algorithms along with pruning strategies and heuristics to prioritize search direction to significantly improve the performance using machine learning. We also propose novel ILP formulations for PUT-winners under STV and RP, respectively. Experiments on synthetic and real-world data show that our algorithms are overall significantly faster than ILP, while there are a few cases where ILP is significantly faster for RP.
[ { "created": "Thu, 17 May 2018 23:20:57 GMT", "version": "v1" } ]
2018-05-21
[ [ "Wang", "Jun", "" ], [ "Sikdar", "Sujoy", "" ], [ "Shepherd", "Tyler", "" ], [ "Zhao", "Zhibing", "" ], [ "Jiang", "Chunheng", "" ], [ "Xia", "Lirong", "" ] ]
STV and ranked pairs (RP) are two well-studied voting rules for group decision-making. They proceed in multiple rounds, and are affected by how ties are broken in each round. However, the literature is surprisingly vague about how ties should be broken. We propose the first algorithms for computing the set of alternatives that are winners under some tiebreaking mechanism under STV and RP, which is also known as parallel-universes tiebreaking (PUT). Unfortunately, PUT-winners are NP-complete to compute under STV and RP, and standard search algorithms from AI do not apply. We propose multiple DFS-based algorithms along with pruning strategies and heuristics to prioritize search direction to significantly improve the performance using machine learning. We also propose novel ILP formulations for PUT-winners under STV and RP, respectively. Experiments on synthetic and real-world data show that our algorithms are overall significantly faster than ILP, while there are a few cases where ILP is significantly faster for RP.
2112.03354
Tica Lin
Tica Lin, Yalong Yang, Johanna Beyer, Hanspeter Pfister
Labeling Out-of-View Objects in Immersive Analytics to Support Situated Visual Searching
To be published in IEEE Transactions on Visualization and Computer Graphics
null
10.1109/TVCG.2021.3133511
null
cs.HC cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Augmented Reality (AR) embeds digital information into objects of the physical world. Data can be shown in-situ, thereby enabling real-time visual comparisons and object search in real-life user tasks, such as comparing products and looking up scores in a sports game. While there have been studies on designing AR interfaces for situated information retrieval, there has only been limited research on AR object labeling for visual search tasks in the spatial environment. In this paper, we identify and categorize different design aspects in AR label design and report on a formal user study on labels for out-of-view objects to support visual search tasks in AR. We design three visualization techniques for out-of-view object labeling in AR, which respectively encode the relative physical position (height-encoded), the rotational direction (angle-encoded), and the label values (value-encoded) of the objects. We further implement two traditional in-view object labeling techniques, where labels are placed either next to the respective objects (situated) or at the edge of the AR FoV (boundary). We evaluate these five different label conditions in three visual search tasks for static objects. Our study shows that out-of-view object labels are beneficial when searching for objects outside the FoV, for spatial orientation, and when comparing multiple spatially sparse objects. Angle-encoded labels with directional cues of the surrounding objects have the overall best performance with the highest user satisfaction. We discuss the implications of our findings for future immersive AR interface design.
[ { "created": "Mon, 6 Dec 2021 20:58:04 GMT", "version": "v1" }, { "created": "Sat, 25 Dec 2021 03:11:52 GMT", "version": "v2" } ]
2021-12-28
[ [ "Lin", "Tica", "" ], [ "Yang", "Yalong", "" ], [ "Beyer", "Johanna", "" ], [ "Pfister", "Hanspeter", "" ] ]
Augmented Reality (AR) embeds digital information into objects of the physical world. Data can be shown in-situ, thereby enabling real-time visual comparisons and object search in real-life user tasks, such as comparing products and looking up scores in a sports game. While there have been studies on designing AR interfaces for situated information retrieval, there has only been limited research on AR object labeling for visual search tasks in the spatial environment. In this paper, we identify and categorize different design aspects in AR label design and report on a formal user study on labels for out-of-view objects to support visual search tasks in AR. We design three visualization techniques for out-of-view object labeling in AR, which respectively encode the relative physical position (height-encoded), the rotational direction (angle-encoded), and the label values (value-encoded) of the objects. We further implement two traditional in-view object labeling techniques, where labels are placed either next to the respective objects (situated) or at the edge of the AR FoV (boundary). We evaluate these five different label conditions in three visual search tasks for static objects. Our study shows that out-of-view object labels are beneficial when searching for objects outside the FoV, for spatial orientation, and when comparing multiple spatially sparse objects. Angle-encoded labels with directional cues of the surrounding objects have the overall best performance with the highest user satisfaction. We discuss the implications of our findings for future immersive AR interface design.
2403.15285
Xiaofeng Luo
Jiawen Kang, Xiaofeng Luo, Jiangtian Nie, Tianhao Wu, Haibo Zhou, Yonghua Wang, Dusit Niyato, Shiwen Mao, Shengli Xie
Blockchain-based Pseudonym Management for Vehicle Twin Migrations in Vehicular Edge Metaverse
14 pages, 9 figures
null
null
null
cs.NI cs.CR cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
Driven by the great advances in metaverse and edge computing technologies, vehicular edge metaverses are expected to disrupt the current paradigm of intelligent transportation systems. As highly computerized avatars of Vehicular Metaverse Users (VMUs), the Vehicle Twins (VTs) deployed in edge servers can provide valuable metaverse services to improve driving safety and on-board satisfaction for their VMUs throughout journeys. To maintain uninterrupted metaverse experiences, VTs must be migrated among edge servers following the movements of vehicles. This can raise concerns about privacy breaches during the dynamic communications among vehicular edge metaverses. To address these concerns and safeguard location privacy, pseudonyms as temporary identifiers can be leveraged by both VMUs and VTs to realize anonymous communications in the physical space and virtual spaces. However, existing pseudonym management methods fall short in meeting the extensive pseudonym demands in vehicular edge metaverses, thus dramatically diminishing the performance of privacy preservation. To this end, we present a cross-metaverse empowered dual pseudonym management framework. We utilize cross-chain technology to enhance management efficiency and data security for pseudonyms. Furthermore, we propose a metric to assess the privacy level and employ a Multi-Agent Deep Reinforcement Learning (MADRL) approach to obtain an optimal pseudonym generating strategy. Numerical results demonstrate that our proposed schemes are highly efficient and cost-effective, showcasing their promising applications in vehicular edge metaverses.
[ { "created": "Fri, 22 Mar 2024 15:31:37 GMT", "version": "v1" } ]
2024-03-25
[ [ "Kang", "Jiawen", "" ], [ "Luo", "Xiaofeng", "" ], [ "Nie", "Jiangtian", "" ], [ "Wu", "Tianhao", "" ], [ "Zhou", "Haibo", "" ], [ "Wang", "Yonghua", "" ], [ "Niyato", "Dusit", "" ], [ "Mao", "Shiwen", "" ], [ "Xie", "Shengli", "" ] ]
Driven by the great advances in metaverse and edge computing technologies, vehicular edge metaverses are expected to disrupt the current paradigm of intelligent transportation systems. As highly computerized avatars of Vehicular Metaverse Users (VMUs), the Vehicle Twins (VTs) deployed in edge servers can provide valuable metaverse services to improve driving safety and on-board satisfaction for their VMUs throughout journeys. To maintain uninterrupted metaverse experiences, VTs must be migrated among edge servers following the movements of vehicles. This can raise concerns about privacy breaches during the dynamic communications among vehicular edge metaverses. To address these concerns and safeguard location privacy, pseudonyms as temporary identifiers can be leveraged by both VMUs and VTs to realize anonymous communications in the physical space and virtual spaces. However, existing pseudonym management methods fall short in meeting the extensive pseudonym demands in vehicular edge metaverses, thus dramatically diminishing the performance of privacy preservation. To this end, we present a cross-metaverse empowered dual pseudonym management framework. We utilize cross-chain technology to enhance management efficiency and data security for pseudonyms. Furthermore, we propose a metric to assess the privacy level and employ a Multi-Agent Deep Reinforcement Learning (MADRL) approach to obtain an optimal pseudonym generating strategy. Numerical results demonstrate that our proposed schemes are highly efficient and cost-effective, showcasing their promising applications in vehicular edge metaverses.
1710.07915
Pan Cunhua
Cunhua Pan, Hani Mehrpouyan, Yuanwei Liu, Maged Elkashlan, and Arumugam Nallanathan
Joint Pilot Allocation and Robust Transmission Design for Ultra-dense User-centric TDD C-RAN with Imperfect CSI
Under revision in IEEE TWC
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper considers the unavailability of complete channel state information (CSI) in ultra-dense cloud radio access networks (C-RANs). The user-centric cluster is adopted to reduce the computational complexity, while the incomplete CSI is considered to reduce the heavy channel training overhead, where only large-scale inter-cluster CSI is available. Channel estimation for intra-cluster CSI is also considered, where we formulate a joint pilot allocation and user equipment (UE) selection problem to maximize the number of admitted UEs with a fixed number of pilots. A novel pilot allocation algorithm is proposed by considering the multi-UE pilot interference. Then, we consider the robust beam-vector optimization problem subject to UEs' data rate requirements and fronthaul capacity constraints, where the channel estimation error and incomplete inter-cluster CSI are considered. The exact data rate is difficult to obtain in closed form, and instead we conservatively replace it with its lower bound. The resulting problem is non-convex, combinatorial, and even infeasible. A practical algorithm, based on UE selection, successive convex approximation (SCA) and a semidefinite relaxation approach, is proposed to solve this problem with guaranteed convergence. We strictly prove that the semidefinite relaxation is tight with probability 1. Finally, extensive simulation results are presented to show the fast convergence of our proposed algorithm and demonstrate its superiority over the existing algorithms.
[ { "created": "Sun, 22 Oct 2017 09:38:58 GMT", "version": "v1" } ]
2017-10-24
[ [ "Pan", "Cunhua", "" ], [ "Mehrpouyan", "Hani", "" ], [ "Liu", "Yuanwei", "" ], [ "Elkashlan", "Maged", "" ], [ "Nallanathan", "Arumugam", "" ] ]
This paper considers the unavailability of complete channel state information (CSI) in ultra-dense cloud radio access networks (C-RANs). The user-centric cluster is adopted to reduce the computational complexity, while the incomplete CSI is considered to reduce the heavy channel training overhead, where only large-scale inter-cluster CSI is available. Channel estimation for intra-cluster CSI is also considered, where we formulate a joint pilot allocation and user equipment (UE) selection problem to maximize the number of admitted UEs with a fixed number of pilots. A novel pilot allocation algorithm is proposed by considering the multi-UE pilot interference. Then, we consider the robust beam-vector optimization problem subject to UEs' data rate requirements and fronthaul capacity constraints, where the channel estimation error and incomplete inter-cluster CSI are considered. The exact data rate is difficult to obtain in closed form, and instead we conservatively replace it with its lower bound. The resulting problem is non-convex, combinatorial, and even infeasible. A practical algorithm, based on UE selection, successive convex approximation (SCA) and a semidefinite relaxation approach, is proposed to solve this problem with guaranteed convergence. We strictly prove that the semidefinite relaxation is tight with probability 1. Finally, extensive simulation results are presented to show the fast convergence of our proposed algorithm and demonstrate its superiority over the existing algorithms.
2102.12421
V Lalitha
Shreya Gupta and V. Lalitha
Rack-Aware Cooperative Regenerating Codes
5 pages, 1 figure, accepted for publication in ISITA 2020
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In distributed storage systems, cooperative regenerating codes trade off storage for repair bandwidth in the case of multiple node failures. In rack-aware distributed storage systems, there is no cost associated with transferring symbols within a rack. Hence, the repair bandwidth will only take into account cross-rack transfer. Rack-aware regenerating codes for the case of single node failures have been studied and their repair bandwidth tradeoff characterized. In this paper, we consider the framework of rack-aware cooperative regenerating codes for the case of multiple node failures where the node failures are uniformly distributed among a certain number of racks. We characterize the storage repair-bandwidth tradeoff as well as derive the minimum storage and minimum repair bandwidth points of the tradeoff. We also provide constructions of minimum bandwidth rack-aware cooperative regenerating codes for all parameters.
[ { "created": "Wed, 24 Feb 2021 17:31:33 GMT", "version": "v1" } ]
2021-02-25
[ [ "Gupta", "Shreya", "" ], [ "Lalitha", "V.", "" ] ]
In distributed storage systems, cooperative regenerating codes trade off storage for repair bandwidth in the case of multiple node failures. In rack-aware distributed storage systems, there is no cost associated with transferring symbols within a rack. Hence, the repair bandwidth will only take into account cross-rack transfer. Rack-aware regenerating codes for the case of single node failures have been studied and their repair bandwidth tradeoff characterized. In this paper, we consider the framework of rack-aware cooperative regenerating codes for the case of multiple node failures where the node failures are uniformly distributed among a certain number of racks. We characterize the storage repair-bandwidth tradeoff as well as derive the minimum storage and minimum repair bandwidth points of the tradeoff. We also provide constructions of minimum bandwidth rack-aware cooperative regenerating codes for all parameters.
2009.02353
Gianni Antichi
Giuseppe Siracusano, Salvator Galea, Davide Sanvito, Mohammad Malekzadeh, Hamed Haddadi, Gianni Antichi, Roberto Bifulco
Running Neural Networks on the NIC
null
null
null
null
cs.DC cs.AI cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we show that the data plane of commodity programmable Network Interface Cards (NICs) can run neural network inference tasks required by packet monitoring applications, with low overhead. This is particularly important as the data transfer costs to the host system and dedicated machine learning accelerators, e.g., GPUs, can be more expensive than the processing task itself. We design and implement our system -- N3IC -- on two different NICs and we show that it can greatly benefit three different network monitoring use cases that require machine learning inference as a first-class primitive. N3IC can perform inference for millions of network flows per second, while forwarding traffic at 40Gb/s. Compared to an equivalent solution implemented on a general purpose CPU, N3IC can provide 100x lower processing latency, with 1.5x increase in throughput.
[ { "created": "Fri, 4 Sep 2020 18:35:58 GMT", "version": "v1" } ]
2020-09-08
[ [ "Siracusano", "Giuseppe", "" ], [ "Galea", "Salvator", "" ], [ "Sanvito", "Davide", "" ], [ "Malekzadeh", "Mohammad", "" ], [ "Haddadi", "Hamed", "" ], [ "Antichi", "Gianni", "" ], [ "Bifulco", "Roberto", "" ] ]
In this paper we show that the data plane of commodity programmable Network Interface Cards (NICs) can run neural network inference tasks required by packet monitoring applications, with low overhead. This is particularly important as the data transfer costs to the host system and dedicated machine learning accelerators, e.g., GPUs, can be more expensive than the processing task itself. We design and implement our system -- N3IC -- on two different NICs and we show that it can greatly benefit three different network monitoring use cases that require machine learning inference as a first-class primitive. N3IC can perform inference for millions of network flows per second, while forwarding traffic at 40Gb/s. Compared to an equivalent solution implemented on a general purpose CPU, N3IC can provide 100x lower processing latency, with 1.5x increase in throughput.
2402.04616
Yijun Tian
Yijun Tian, Yikun Han, Xiusi Chen, Wei Wang, Nitesh V. Chawla
TinyLLM: Learning a Small Student from Multiple Large Language Models
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transferring the reasoning capability from stronger large language models (LLMs) to smaller ones has been quite appealing, as smaller LLMs are more flexible to deploy with less expense. Among the existing solutions, knowledge distillation stands out due to its outstanding efficiency and generalization. However, existing methods suffer from several drawbacks, including limited knowledge diversity and the lack of rich contextual information. To solve the problems and facilitate the learning of compact language models, we propose TinyLLM, a new knowledge distillation paradigm to learn a small student LLM from multiple large teacher LLMs. In particular, we encourage the student LLM to not only generate the correct answers but also understand the rationales behind these answers. Given that different LLMs possess diverse reasoning skills, we guide the student model to assimilate knowledge from various teacher LLMs. We further introduce an in-context example generator and a teacher-forcing Chain-of-Thought strategy to ensure that the rationales are accurate and grounded in contextually appropriate scenarios. Extensive experiments on six datasets across two reasoning tasks demonstrate the superiority of our method. Results show that TinyLLM can outperform large teacher LLMs significantly, despite a considerably smaller model size.
[ { "created": "Wed, 7 Feb 2024 06:48:24 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 01:28:48 GMT", "version": "v2" } ]
2024-04-02
[ [ "Tian", "Yijun", "" ], [ "Han", "Yikun", "" ], [ "Chen", "Xiusi", "" ], [ "Wang", "Wei", "" ], [ "Chawla", "Nitesh V.", "" ] ]
Transferring the reasoning capability from stronger large language models (LLMs) to smaller ones has been quite appealing, as smaller LLMs are more flexible to deploy with less expense. Among the existing solutions, knowledge distillation stands out due to its outstanding efficiency and generalization. However, existing methods suffer from several drawbacks, including limited knowledge diversity and the lack of rich contextual information. To solve the problems and facilitate the learning of compact language models, we propose TinyLLM, a new knowledge distillation paradigm to learn a small student LLM from multiple large teacher LLMs. In particular, we encourage the student LLM to not only generate the correct answers but also understand the rationales behind these answers. Given that different LLMs possess diverse reasoning skills, we guide the student model to assimilate knowledge from various teacher LLMs. We further introduce an in-context example generator and a teacher-forcing Chain-of-Thought strategy to ensure that the rationales are accurate and grounded in contextually appropriate scenarios. Extensive experiments on six datasets across two reasoning tasks demonstrate the superiority of our method. Results show that TinyLLM can outperform large teacher LLMs significantly, despite a considerably smaller model size.
2103.08759
Gene Louis Kim
Gene Louis Kim, Viet Duong, Xin Lu, Lenhart Schubert
A Transition-based Parser for Unscoped Episodic Logical Forms
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
"Episodic Logic: Unscoped Logical Form" (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ulf-transition-parser. We also present the first official annotated ULF dataset at https://www.cs.rochester.edu/u/gkim21/ulf/resources/.
[ { "created": "Mon, 15 Mar 2021 23:09:32 GMT", "version": "v1" } ]
2021-03-17
[ [ "Kim", "Gene Louis", "" ], [ "Duong", "Viet", "" ], [ "Lu", "Xin", "" ], [ "Schubert", "Lenhart", "" ] ]
"Episodic Logic: Unscoped Logical Form" (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ulf-transition-parser. We also present the first official annotated ULF dataset at https://www.cs.rochester.edu/u/gkim21/ulf/resources/.
1911.07132
Quanming Yao
Yongqi Zhang, Quanming Yao, Lei Chen
Interstellar: Searching Recurrent Architecture for Knowledge Graph Embedding
Accepted to NeurIPS 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge graph (KG) embedding is well-known in learning representations of KGs. Many models have been proposed to learn the interactions between entities and relations of the triplets. However, long-term information among multiple triplets is also important to KG. In this work, based on the relational paths, which are composed of a sequence of triplets, we define the Interstellar as a recurrent neural architecture search problem for the short-term and long-term information along the paths. First, we analyze the difficulty of using a unified model to work as the Interstellar. Then, we propose to search for recurrent architecture as the Interstellar for different KG tasks. A case study on synthetic data illustrates the importance of the defined search problem. Experiments on real datasets demonstrate the effectiveness of the searched models and the efficiency of the proposed hybrid-search algorithm.
[ { "created": "Sun, 17 Nov 2019 02:16:24 GMT", "version": "v1" }, { "created": "Sat, 24 Oct 2020 14:24:10 GMT", "version": "v2" }, { "created": "Wed, 28 Apr 2021 07:16:19 GMT", "version": "v3" } ]
2021-04-29
[ [ "Zhang", "Yongqi", "" ], [ "Yao", "Quanming", "" ], [ "Chen", "Lei", "" ] ]
Knowledge graph (KG) embedding is well-known in learning representations of KGs. Many models have been proposed to learn the interactions between entities and relations of the triplets. However, long-term information among multiple triplets is also important to KG. In this work, based on the relational paths, which are composed of a sequence of triplets, we define the Interstellar as a recurrent neural architecture search problem for the short-term and long-term information along the paths. First, we analyze the difficulty of using a unified model to work as the Interstellar. Then, we propose to search for recurrent architecture as the Interstellar for different KG tasks. A case study on synthetic data illustrates the importance of the defined search problem. Experiments on real datasets demonstrate the effectiveness of the searched models and the efficiency of the proposed hybrid-search algorithm.
1806.10051
Sepehr Assadi
Sepehr Assadi, Krzysztof Onak, Baruch Schieber, Shay Solomon
Fully Dynamic Maximal Independent Set with Sublinear in n Update Time
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first fully dynamic algorithm for maintaining a maximal independent set (MIS) with update time that is sublinear in the number of edges was presented recently by the authors of this paper [Assadi et al., STOC'18]. The algorithm is deterministic and its update time is $O(m^{3/4})$, where $m$ is the (dynamically changing) number of edges. Subsequently, Gupta and Khan and independently Du and Zhang [arXiv, April 2018] presented deterministic algorithms for dynamic MIS with update times of $O(m^{2/3})$ and $O(m^{2/3} \sqrt{\log m})$, respectively. Du and Zhang also gave a randomized algorithm with update time $\widetilde{O}(\sqrt{m})$. Moreover, they provided some partial (conditional) hardness results hinting that update time of $m^{1/2-\epsilon}$, and in particular $n^{1-\epsilon}$ for $n$-vertex dense graphs, is a natural barrier for this problem for any constant $\epsilon >0$, for both deterministic and randomized algorithms that satisfy a certain natural property. In this paper, we break this natural barrier and present the first fully dynamic (randomized) algorithm for maintaining an MIS with update time that is always sublinear in the number of vertices, namely, an $\widetilde{O}(\sqrt{n})$ expected amortized update time algorithm. We also show that a simpler variant of our algorithm can already achieve an $\widetilde{O}(m^{1/3})$ expected amortized update time, which results in an improved performance over our $\widetilde{O}(\sqrt{n})$ update time algorithm for sufficiently sparse graphs, and breaks the $m^{1/2}$ barrier of Du and Zhang for all values of $m$.
[ { "created": "Tue, 26 Jun 2018 15:07:47 GMT", "version": "v1" } ]
2018-06-27
[ [ "Assadi", "Sepehr", "" ], [ "Onak", "Krzysztof", "" ], [ "Schieber", "Baruch", "" ], [ "Solomon", "Shay", "" ] ]
The first fully dynamic algorithm for maintaining a maximal independent set (MIS) with update time that is sublinear in the number of edges was presented recently by the authors of this paper [Assadi et al., STOC'18]. The algorithm is deterministic and its update time is $O(m^{3/4})$, where $m$ is the (dynamically changing) number of edges. Subsequently, Gupta and Khan and independently Du and Zhang [arXiv, April 2018] presented deterministic algorithms for dynamic MIS with update times of $O(m^{2/3})$ and $O(m^{2/3} \sqrt{\log m})$, respectively. Du and Zhang also gave a randomized algorithm with update time $\widetilde{O}(\sqrt{m})$. Moreover, they provided some partial (conditional) hardness results hinting that update time of $m^{1/2-\epsilon}$, and in particular $n^{1-\epsilon}$ for $n$-vertex dense graphs, is a natural barrier for this problem for any constant $\epsilon >0$, for both deterministic and randomized algorithms that satisfy a certain natural property. In this paper, we break this natural barrier and present the first fully dynamic (randomized) algorithm for maintaining an MIS with update time that is always sublinear in the number of vertices, namely, an $\widetilde{O}(\sqrt{n})$ expected amortized update time algorithm. We also show that a simpler variant of our algorithm can already achieve an $\widetilde{O}(m^{1/3})$ expected amortized update time, which results in an improved performance over our $\widetilde{O}(\sqrt{n})$ update time algorithm for sufficiently sparse graphs, and breaks the $m^{1/2}$ barrier of Du and Zhang for all values of $m$.
1904.08754
Fabio Giachelle
Fabio Giachelle, Gianmaria Silvello
A Progressive Visual Analytics Tool for Incremental Experimental Evaluation
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
This paper presents a visual tool, AVIATOR, that integrates the progressive visual analytics paradigm in the IR evaluation process. This tool serves to speed up and facilitate the performance assessment of retrieval models, enabling a result analysis through visual facilities. AVIATOR goes one step beyond the common "compute-wait-visualize" analytics paradigm, introducing a continuous evaluation mechanism that minimizes human and computational resource consumption.
[ { "created": "Thu, 18 Apr 2019 13:16:53 GMT", "version": "v1" } ]
2019-04-19
[ [ "Giachelle", "Fabio", "" ], [ "Silvello", "Gianmaria", "" ] ]
This paper presents a visual tool, AVIATOR, that integrates the progressive visual analytics paradigm in the IR evaluation process. This tool serves to speed up and facilitate the performance assessment of retrieval models, enabling a result analysis through visual facilities. AVIATOR goes one step beyond the common "compute-wait-visualize" analytics paradigm, introducing a continuous evaluation mechanism that minimizes human and computational resource consumption.
2006.04672
Weichao Mao
Weichao Mao, Kaiqing Zhang, Qiaomin Xie, Tamer Ba\c{s}ar
POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis
NeurIPS 2020
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monte-Carlo planning, as exemplified by Monte-Carlo Tree Search (MCTS), has demonstrated remarkable performance in applications with finite spaces. In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics. We introduce POLY-HOOT, an algorithm that augments MCTS with a continuous armed bandit strategy named Hierarchical Optimistic Optimization (HOO) (Bubeck et al., 2011). Specifically, we enhance HOO by using an appropriate polynomial, rather than logarithmic, bonus term in the upper confidence bounds. Such a polynomial bonus is motivated by its empirical successes in AlphaGo Zero (Silver et al., 2017b), as well as its significant role in achieving theoretical guarantees of finite space MCTS (Shah et al., 2019). We investigate, for the first time, the regret of the enhanced HOO algorithm in non-stationary bandit problems. Using this result as a building block, we establish non-asymptotic convergence guarantees for POLY-HOOT: the value estimate converges to an arbitrarily small neighborhood of the optimal value function at a polynomial rate. We further provide experimental results that corroborate our theoretical findings.
[ { "created": "Mon, 8 Jun 2020 15:23:19 GMT", "version": "v1" }, { "created": "Wed, 30 Dec 2020 05:21:05 GMT", "version": "v2" } ]
2021-01-01
[ [ "Mao", "Weichao", "" ], [ "Zhang", "Kaiqing", "" ], [ "Xie", "Qiaomin", "" ], [ "Başar", "Tamer", "" ] ]
Monte-Carlo planning, as exemplified by Monte-Carlo Tree Search (MCTS), has demonstrated remarkable performance in applications with finite spaces. In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics. We introduce POLY-HOOT, an algorithm that augments MCTS with a continuous armed bandit strategy named Hierarchical Optimistic Optimization (HOO) (Bubeck et al., 2011). Specifically, we enhance HOO by using an appropriate polynomial, rather than logarithmic, bonus term in the upper confidence bounds. Such a polynomial bonus is motivated by its empirical successes in AlphaGo Zero (Silver et al., 2017b), as well as its significant role in achieving theoretical guarantees of finite space MCTS (Shah et al., 2019). We investigate, for the first time, the regret of the enhanced HOO algorithm in non-stationary bandit problems. Using this result as a building block, we establish non-asymptotic convergence guarantees for POLY-HOOT: the value estimate converges to an arbitrarily small neighborhood of the optimal value function at a polynomial rate. We further provide experimental results that corroborate our theoretical findings.
2401.00162
Guojian Wang
Guojian Wang, Faguo Wu, Xiao Zhang, Tianyuan Chen
Policy Optimization with Smooth Guidance Learned from State-Only Demonstrations
31 pages, 23 figures; This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sparsity of reward feedback remains a challenging problem in online deep reinforcement learning (DRL). Previous approaches have utilized offline demonstrations to achieve impressive results in multiple hard tasks. However, these approaches place high demands on demonstration quality, and obtaining expert-like actions is often costly and unrealistic. To tackle these problems, we propose a simple and efficient algorithm called Policy Optimization with Smooth Guidance (POSG), which leverages a small set of state-only demonstrations (where expert action information is not included in demonstrations) to indirectly make approximate and feasible long-term credit assignments and facilitate exploration. Specifically, we first design a trajectory-importance evaluation mechanism to determine the quality of the current trajectory against demonstrations. Then, we introduce a guidance reward computation technology based on trajectory importance to measure the impact of each state-action pair, fusing the demonstrator's state distribution with reward information into the guidance reward. We theoretically analyze the performance improvement caused by smooth guidance rewards and derive a new worst-case lower bound on the performance improvement. Extensive results demonstrate POSG's significant advantages in control performance and convergence speed in four sparse-reward environments, including the grid-world maze, Hopper-v4, HalfCheetah-v4, and Ant maze. Notably, the specific metrics and quantifiable results are investigated to demonstrate the superiority of POSG.
[ { "created": "Sat, 30 Dec 2023 07:41:45 GMT", "version": "v1" }, { "created": "Wed, 10 Apr 2024 13:32:06 GMT", "version": "v2" }, { "created": "Sat, 3 Aug 2024 01:14:11 GMT", "version": "v3" } ]
2024-08-06
[ [ "Wang", "Guojian", "" ], [ "Wu", "Faguo", "" ], [ "Zhang", "Xiao", "" ], [ "Chen", "Tianyuan", "" ] ]
The sparsity of reward feedback remains a challenging problem in online deep reinforcement learning (DRL). Previous approaches have utilized offline demonstrations to achieve impressive results in multiple hard tasks. However, these approaches place high demands on demonstration quality, and obtaining expert-like actions is often costly and unrealistic. To tackle these problems, we propose a simple and efficient algorithm called Policy Optimization with Smooth Guidance (POSG), which leverages a small set of state-only demonstrations (where expert action information is not included in demonstrations) to indirectly make approximate and feasible long-term credit assignments and facilitate exploration. Specifically, we first design a trajectory-importance evaluation mechanism to determine the quality of the current trajectory against demonstrations. Then, we introduce a guidance reward computation technology based on trajectory importance to measure the impact of each state-action pair, fusing the demonstrator's state distribution with reward information into the guidance reward. We theoretically analyze the performance improvement caused by smooth guidance rewards and derive a new worst-case lower bound on the performance improvement. Extensive results demonstrate POSG's significant advantages in control performance and convergence speed in four sparse-reward environments, including the grid-world maze, Hopper-v4, HalfCheetah-v4, and Ant maze. Notably, the specific metrics and quantifiable results are investigated to demonstrate the superiority of POSG.
2009.09144
Bojian Wu
Jiahui Lyu, Bojian Wu, Dani Lischinski, Daniel Cohen-Or, Hui Huang
Differentiable Refraction-Tracing for Mesh Reconstruction of Transparent Objects
13 pages, 21 figures
ACM Trans. on Graphics (Proc. of SIGGRAPH Asia 2020)
10.1145/3414685.3417815
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Capturing the 3D geometry of transparent objects is a challenging task, ill-suited for general-purpose scanning and reconstruction techniques, since these cannot handle specular light transport phenomena. Existing state-of-the-art methods, designed specifically for this task, either involve a complex setup to reconstruct complete refractive ray paths, or leverage a data-driven approach based on synthetic training data. In either case, the reconstructed 3D models suffer from over-smoothing and loss of fine detail. This paper introduces a novel, high precision, 3D acquisition and reconstruction method for solid transparent objects. Using a static background with a coded pattern, we establish a mapping between the camera view rays and locations on the background. Differentiable tracing of refractive ray paths is then used to directly optimize a 3D mesh approximation of the object, while simultaneously ensuring silhouette consistency and smoothness. Extensive experiments and comparisons demonstrate the superior accuracy of our method.
[ { "created": "Sat, 19 Sep 2020 02:29:00 GMT", "version": "v1" } ]
2020-09-22
[ [ "Lyu", "Jiahui", "" ], [ "Wu", "Bojian", "" ], [ "Lischinski", "Dani", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Huang", "Hui", "" ] ]
Capturing the 3D geometry of transparent objects is a challenging task, ill-suited for general-purpose scanning and reconstruction techniques, since these cannot handle specular light transport phenomena. Existing state-of-the-art methods, designed specifically for this task, either involve a complex setup to reconstruct complete refractive ray paths, or leverage a data-driven approach based on synthetic training data. In either case, the reconstructed 3D models suffer from over-smoothing and loss of fine detail. This paper introduces a novel, high precision, 3D acquisition and reconstruction method for solid transparent objects. Using a static background with a coded pattern, we establish a mapping between the camera view rays and locations on the background. Differentiable tracing of refractive ray paths is then used to directly optimize a 3D mesh approximation of the object, while simultaneously ensuring silhouette consistency and smoothness. Extensive experiments and comparisons demonstrate the superior accuracy of our method.
1909.06997
Han Liu
Han Liu, Ge Gao, Hehua Zhang, Yu-Shen Liu, Yan Song, Ming Gu
MVDLite: a Fast Validation Algorithm for Model View Definition Rules
Preprint submitted to 29th International Workshop on Intelligent Computing in Engineering (EG-ICE)
null
10.7146/aul.455.c192
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model View Definition (MVD) is the standard methodology to define the data exchange requirements and rule constraints for Building Information Models (BIMs). In this paper, the MVDLite algorithm is proposed for the fast validation of MVD rules. A "rule chain" structure is introduced to combine the data templates, constraint statements, and logical interconnections in an input mvdXML ruleset, which leads to fast filtering of data nodes through the rule chain. By establishing the correspondence of each prefix of the rule chain with a string, the deep-caching strategy further improves efficiency. Experimental results show that our algorithm outperforms existing approaches, significantly reducing the running time of MVD validation on large real-world BIMs.
[ { "created": "Mon, 16 Sep 2019 05:39:49 GMT", "version": "v1" }, { "created": "Fri, 31 Jan 2020 08:54:35 GMT", "version": "v2" }, { "created": "Fri, 4 Mar 2022 00:41:00 GMT", "version": "v3" }, { "created": "Sun, 17 Apr 2022 15:18:16 GMT", "version": "v4" } ]
2023-12-29
[ [ "Liu", "Han", "" ], [ "Gao", "Ge", "" ], [ "Zhang", "Hehua", "" ], [ "Liu", "Yu-Shen", "" ], [ "Song", "Yan", "" ], [ "Gu", "Ming", "" ] ]
Model View Definition (MVD) is the standard methodology to define the data exchange requirements and rule constraints for Building Information Models (BIMs). In this paper, the MVDLite algorithm is proposed for the fast validation of MVD rules. A "rule chain" structure is introduced to combine the data templates, constraint statements, and logical interconnections in an input mvdXML ruleset, which leads to fast filtering of data nodes through the rule chain. By establishing the correspondence of each prefix of the rule chain with a string, the deep-caching strategy further improves efficiency. Experimental results show that our algorithm outperforms existing approaches, significantly reducing the running time of MVD validation on large real-world BIMs.
2308.00566
Amir Bar
Amir Bar, Florian Bordes, Assaf Shocher, Mahmoud Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun
Stochastic positional embeddings improve masked image modeling
Code and models available in https://github.com/amirbar/StoP
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose to incorporate location uncertainty into MIM by using stochastic positional embeddings (StoP). Specifically, we condition the model on stochastic masked token positions drawn from a Gaussian distribution. StoP reduces overfitting to location features and guides the model toward learning features that are more robust to location uncertainties. Quantitatively, StoP improves downstream MIM performance on a variety of downstream tasks, including $+1.7\%$ on ImageNet linear probing using ViT-B, and $+2.5\%$ for ViT-H using $1\%$ of the data.
[ { "created": "Mon, 31 Jul 2023 17:59:08 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 18:59:14 GMT", "version": "v2" } ]
2024-02-28
[ [ "Bar", "Amir", "" ], [ "Bordes", "Florian", "" ], [ "Shocher", "Assaf", "" ], [ "Assran", "Mahmoud", "" ], [ "Vincent", "Pascal", "" ], [ "Ballas", "Nicolas", "" ], [ "Darrell", "Trevor", "" ], [ "Globerson", "Amir", "" ], [ "LeCun", "Yann", "" ] ]
Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose to incorporate location uncertainty into MIM by using stochastic positional embeddings (StoP). Specifically, we condition the model on stochastic masked token positions drawn from a Gaussian distribution. StoP reduces overfitting to location features and guides the model toward learning features that are more robust to location uncertainties. Quantitatively, StoP improves downstream MIM performance on a variety of downstream tasks, including $+1.7\%$ on ImageNet linear probing using ViT-B, and $+2.5\%$ for ViT-H using $1\%$ of the data.
2010.06059
Sonia Baee
Sonia Baee, Mark Rucker, Anna Baglione, Mawulolo K. Ameko, Laura Barnes
A Framework for Addressing the Risks and Opportunities In AI-Supported Virtual Health Coaches
4 pages
null
10.1145/3421937.3421971
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual coaching has rapidly evolved into a foundational component of modern clinical practice. At a time when healthcare professionals are in short supply and the demand for low-cost treatments is ever-increasing, virtual health coaches (VHCs) offer intervention-on-demand for those limited by finances or geographic access to care. More recently, AI-powered virtual coaches have become a viable complement to human coaches. However, the push for AI-powered coaching systems raises several important issues for researchers, designers, clinicians, and patients. In this paper, we present a novel framework to guide the design and development of virtual coaching systems. This framework augments a traditional data science pipeline with four key guiding goals: reliability, fairness, engagement, and ethics.
[ { "created": "Mon, 12 Oct 2020 22:41:35 GMT", "version": "v1" } ]
2021-01-06
[ [ "Baee", "Sonia", "" ], [ "Rucker", "Mark", "" ], [ "Baglione", "Anna", "" ], [ "Ameko", "Mawulolo K.", "" ], [ "Barnes", "Laura", "" ] ]
Virtual coaching has rapidly evolved into a foundational component of modern clinical practice. At a time when healthcare professionals are in short supply and the demand for low-cost treatments is ever-increasing, virtual health coaches (VHCs) offer intervention-on-demand for those limited by finances or geographic access to care. More recently, AI-powered virtual coaches have become a viable complement to human coaches. However, the push for AI-powered coaching systems raises several important issues for researchers, designers, clinicians, and patients. In this paper, we present a novel framework to guide the design and development of virtual coaching systems. This framework augments a traditional data science pipeline with four key guiding goals: reliability, fairness, engagement, and ethics.
2310.11886
Yanhao Wang
Jiaxi Pu, Yanhao Wang, Yuchen Li, Xuan Zhou
Sampling Algorithms for Butterfly Counting on Temporal Bipartite Graphs
10 pages, 10 figures; under review
null
null
null
cs.SI cs.DB cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Temporal bipartite graphs are widely used to denote time-evolving relationships between two disjoint sets of nodes, such as customer-product interactions in E-commerce and user-group memberships in social networks. Temporal butterflies, $(2,2)$-bicliques that occur within a short period and in a prescribed order, are essential in modeling the structural and sequential patterns of such graphs. Counting the number of temporal butterflies is thus a fundamental task in analyzing temporal bipartite graphs. However, existing algorithms for butterfly counting on static bipartite graphs and motif counting on temporal unipartite graphs are inefficient for this purpose. In this paper, we present a general framework with three sampling strategies for temporal butterfly counting. Since exact counting can be time-consuming on large graphs, our approach alternatively computes approximate estimates accurately and efficiently. We also provide analytical bounds on the number of samples each strategy requires to obtain estimates with small relative errors and high probability. We finally evaluate our framework on six real-world datasets and demonstrate its superior accuracy and efficiency compared to several baselines. Overall, our proposed framework and sampling strategies provide efficient and accurate approaches to approximating temporal butterfly counts on large-scale temporal bipartite graphs.
[ { "created": "Wed, 18 Oct 2023 11:11:19 GMT", "version": "v1" } ]
2023-10-19
[ [ "Pu", "Jiaxi", "" ], [ "Wang", "Yanhao", "" ], [ "Li", "Yuchen", "" ], [ "Zhou", "Xuan", "" ] ]
Temporal bipartite graphs are widely used to denote time-evolving relationships between two disjoint sets of nodes, such as customer-product interactions in E-commerce and user-group memberships in social networks. Temporal butterflies, $(2,2)$-bicliques that occur within a short period and in a prescribed order, are essential in modeling the structural and sequential patterns of such graphs. Counting the number of temporal butterflies is thus a fundamental task in analyzing temporal bipartite graphs. However, existing algorithms for butterfly counting on static bipartite graphs and motif counting on temporal unipartite graphs are inefficient for this purpose. In this paper, we present a general framework with three sampling strategies for temporal butterfly counting. Since exact counting can be time-consuming on large graphs, our approach instead computes approximate estimates accurately and efficiently. We also provide analytical bounds on the number of samples each strategy requires to obtain estimates with small relative errors and high probability. We finally evaluate our framework on six real-world datasets and demonstrate its superior accuracy and efficiency compared to several baselines. Overall, our proposed framework and sampling strategies provide efficient and accurate approaches to approximating temporal butterfly counts on large-scale temporal bipartite graphs.
0708.1211
Mark Iwen
M. A. Iwen
A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods
16 pages total, 10 in paper, 6 in appended
null
null
null
cs.DM cs.NA
null
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) $\textbf{A}$ of length $N \gg B$. More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of $\hat{\textbf{A}}$, and estimate their coefficients, in polynomial$(B,\log N)$ time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) \cite{CMDetCS3,CMDetCS1,CMDetCS2} in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
[ { "created": "Thu, 9 Aug 2007 04:07:06 GMT", "version": "v1" } ]
2007-08-10
[ [ "Iwen", "M. A.", "" ] ]
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) $\textbf{A}$ of length $N \gg B$. More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of $\hat{\textbf{A}}$, and estimate their coefficients, in polynomial$(B,\log N)$ time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) \cite{CMDetCS3,CMDetCS1,CMDetCS2} in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
2306.00612
Jiakang Yuan
Jiakang Yuan, Bo Zhang, Xiangchao Yan, Tao Chen, Botian Shi, Yikang Li, Yu Qiao
AD-PT: Autonomous Driving Pre-Training with Large-scale Point Cloud Dataset
Accepted by NeurIPS 2023. Project page: https://jiakangyuan.github.io/AD-PT.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is a long-term vision of the Autonomous Driving (AD) community that perception models can learn from a large-scale point cloud dataset, to obtain unified representations that achieve promising results on different tasks or benchmarks. Previous works mainly focus on the self-supervised pre-training pipeline, meaning that they perform pre-training and fine-tuning on the same benchmark, which makes it difficult to attain performance scalability and cross-dataset application for the pre-training checkpoint. In this paper, for the first time, we are committed to building a large-scale pre-training point-cloud dataset with diverse data distribution, and meanwhile learning generalizable representations from such a diverse pre-training dataset. We formulate the point-cloud pre-training task as a semi-supervised problem, which leverages the few-shot labeled and massive unlabeled point-cloud data to generate unified backbone representations that can be directly applied to many baseline models and benchmarks, decoupling the AD-related pre-training process from the downstream fine-tuning task. During backbone pre-training, by enhancing the scene- and instance-level distribution diversity and exploiting the backbone's ability to learn from unknown instances, we achieve significant performance gains on a series of downstream perception benchmarks including Waymo, nuScenes, and KITTI, under different baseline models such as PV-RCNN++, SECOND, and CenterPoint.
[ { "created": "Thu, 1 Jun 2023 12:32:52 GMT", "version": "v1" }, { "created": "Mon, 10 Jul 2023 12:32:23 GMT", "version": "v2" }, { "created": "Thu, 26 Oct 2023 15:20:31 GMT", "version": "v3" } ]
2023-10-27
[ [ "Yuan", "Jiakang", "" ], [ "Zhang", "Bo", "" ], [ "Yan", "Xiangchao", "" ], [ "Chen", "Tao", "" ], [ "Shi", "Botian", "" ], [ "Li", "Yikang", "" ], [ "Qiao", "Yu", "" ] ]
It is a long-term vision of the Autonomous Driving (AD) community that perception models can learn from a large-scale point cloud dataset, to obtain unified representations that achieve promising results on different tasks or benchmarks. Previous works mainly focus on the self-supervised pre-training pipeline, meaning that they perform pre-training and fine-tuning on the same benchmark, which makes it difficult to attain performance scalability and cross-dataset application for the pre-training checkpoint. In this paper, for the first time, we are committed to building a large-scale pre-training point-cloud dataset with diverse data distribution, and meanwhile learning generalizable representations from such a diverse pre-training dataset. We formulate the point-cloud pre-training task as a semi-supervised problem, which leverages the few-shot labeled and massive unlabeled point-cloud data to generate unified backbone representations that can be directly applied to many baseline models and benchmarks, decoupling the AD-related pre-training process from the downstream fine-tuning task. During backbone pre-training, by enhancing the scene- and instance-level distribution diversity and exploiting the backbone's ability to learn from unknown instances, we achieve significant performance gains on a series of downstream perception benchmarks including Waymo, nuScenes, and KITTI, under different baseline models such as PV-RCNN++, SECOND, and CenterPoint.
2405.01716
Jayshree Sarathy
Rachel Cummings and Shlomi Hod and Jayshree Sarathy and Marika Swanberg
ATTAXONOMY: Unpacking Differential Privacy Guarantees Against Practical Adversaries
null
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential Privacy (DP) is a mathematical framework that is increasingly deployed to mitigate privacy risks associated with machine learning and statistical analyses. Despite the growing adoption of DP, its technical privacy parameters do not lend themselves to an intelligible description of the real-world privacy risks associated with that deployment: the guarantee that most naturally follows from the DP definition is protection against membership inference by an adversary who knows all but one data record and has unlimited auxiliary knowledge. In many settings, this adversary is far too strong to inform how to set real-world privacy parameters. One approach for contextualizing privacy parameters is via defining and measuring the success of technical attacks, but doing so requires a systematic categorization of the relevant attack space. In this work, we offer a detailed taxonomy of attacks, showing the various dimensions of attacks and highlighting that many real-world settings have been understudied. Our taxonomy provides a roadmap for analyzing real-world deployments and developing theoretical bounds for more informative privacy attacks. We operationalize our taxonomy by using it to analyze a real-world case study, the Israeli Ministry of Health's recent release of a birth dataset using DP, showing how the taxonomy enables fine-grained threat modeling and provides insight towards making informed privacy parameter choices. Finally, we leverage the taxonomy towards defining a more realistic attack than previously considered in the literature, namely a distributional reconstruction attack: we generalize Balle et al.'s notion of reconstruction robustness to a less-informed adversary with distributional uncertainty, and extend the worst-case guarantees of DP to this average-case setting.
[ { "created": "Thu, 2 May 2024 20:23:23 GMT", "version": "v1" } ]
2024-05-06
[ [ "Cummings", "Rachel", "" ], [ "Hod", "Shlomi", "" ], [ "Sarathy", "Jayshree", "" ], [ "Swanberg", "Marika", "" ] ]
Differential Privacy (DP) is a mathematical framework that is increasingly deployed to mitigate privacy risks associated with machine learning and statistical analyses. Despite the growing adoption of DP, its technical privacy parameters do not lend themselves to an intelligible description of the real-world privacy risks associated with that deployment: the guarantee that most naturally follows from the DP definition is protection against membership inference by an adversary who knows all but one data record and has unlimited auxiliary knowledge. In many settings, this adversary is far too strong to inform how to set real-world privacy parameters. One approach for contextualizing privacy parameters is via defining and measuring the success of technical attacks, but doing so requires a systematic categorization of the relevant attack space. In this work, we offer a detailed taxonomy of attacks, showing the various dimensions of attacks and highlighting that many real-world settings have been understudied. Our taxonomy provides a roadmap for analyzing real-world deployments and developing theoretical bounds for more informative privacy attacks. We operationalize our taxonomy by using it to analyze a real-world case study, the Israeli Ministry of Health's recent release of a birth dataset using DP, showing how the taxonomy enables fine-grained threat modeling and provides insight towards making informed privacy parameter choices. Finally, we leverage the taxonomy towards defining a more realistic attack than previously considered in the literature, namely a distributional reconstruction attack: we generalize Balle et al.'s notion of reconstruction robustness to a less-informed adversary with distributional uncertainty, and extend the worst-case guarantees of DP to this average-case setting.
1902.09212
Bin Xiao
Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang
Deep High-Resolution Representation Learning for Human Pose Estimation
accepted by CVPR2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations throughout the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from the other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models are publicly available at \url{https://github.com/leoxiaobin/deep-high-resolution-net.pytorch}.
[ { "created": "Mon, 25 Feb 2019 11:55:28 GMT", "version": "v1" } ]
2019-02-26
[ [ "Sun", "Ke", "" ], [ "Xiao", "Bin", "" ], [ "Liu", "Dong", "" ], [ "Wang", "Jingdong", "" ] ]
This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations throughout the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from the other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models are publicly available at \url{https://github.com/leoxiaobin/deep-high-resolution-net.pytorch}.
1409.1544
Thomas Streicher
Ingo Battenfeld, Klaus Keimel, Thomas Streicher
Observationally-induced algebras in Domain Theory
26 pages
Logical Methods in Computer Science, Volume 10, Issue 3 (September 11, 2014) lmcs:963
10.2168/LMCS-10(3:18)2014
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we revise and simplify the notion of observationally induced algebra introduced by Simpson and Schroeder for the purpose of modelling computational effects in the particular case where the ambient category is given by classical domain theory. As examples of the general framework we consider the various powerdomains. For the particular case of the Plotkin powerdomain the general recipe leads to a somewhat unexpected result which, however, makes sense from a Computer Science perspective. We analyze this "deviation" and show how to reobtain the original Plotkin powerdomain by imposing further conditions previously considered by R.~Heckmann and J.~Goubault-Larrecq.
[ { "created": "Thu, 4 Sep 2014 19:09:12 GMT", "version": "v1" }, { "created": "Fri, 5 Sep 2014 10:41:07 GMT", "version": "v2" }, { "created": "Wed, 10 Sep 2014 14:57:23 GMT", "version": "v3" }, { "created": "Thu, 27 Oct 2016 15:29:15 GMT", "version": "v4" } ]
2016-10-28
[ [ "Battenfeld", "Ingo", "" ], [ "Keimel", "Klaus", "" ], [ "Streicher", "Thomas", "" ] ]
In this paper we revise and simplify the notion of observationally induced algebra introduced by Simpson and Schroeder for the purpose of modelling computational effects in the particular case where the ambient category is given by classical domain theory. As examples of the general framework we consider the various powerdomains. For the particular case of the Plotkin powerdomain the general recipe leads to a somewhat unexpected result which, however, makes sense from a Computer Science perspective. We analyze this "deviation" and show how to reobtain the original Plotkin powerdomain by imposing further conditions previously considered by R.~Heckmann and J.~Goubault-Larrecq.
1104.1471
Xiao Ma
Xiao Ma and Jia Liu and Baoming Bai
New Techniques for Upper-Bounding the ML Decoding Performance of Binary Linear Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, new techniques are presented to either simplify or improve most existing upper bounds on the maximum-likelihood (ML) decoding performance of binary linear codes over additive white Gaussian noise (AWGN) channels. Firstly, the recently proposed union bound using truncated weight spectra by Ma {\em et al.} is re-derived in a detailed way based on Gallager's first bounding technique (GFBT), where the "good region" is specified by a sub-optimal list decoding algorithm. The error probability caused by the bad region can be upper-bounded by the tail probability of a binomial distribution, while the error probability caused by the good region can be upper-bounded by most existing techniques. Secondly, we propose two techniques to tighten the union bound on the error probability caused by the good region. The first technique is based on pair-wise error probabilities, which can be further tightened by employing the independence between the error events and certain components of the received random vectors. The second technique is based on triplet-wise error probabilities, which can be upper-bounded by proving that any three bipolar vectors form a non-obtuse triangle. The proposed bounds improve the conventional union bounds but have a similar complexity since they involve only the $Q$-function. The proposed bounds can also be adapted to bit-error probabilities.
[ { "created": "Fri, 8 Apr 2011 03:06:36 GMT", "version": "v1" }, { "created": "Mon, 3 Sep 2012 06:19:40 GMT", "version": "v2" } ]
2015-03-19
[ [ "Ma", "Xiao", "" ], [ "Liu", "Jia", "" ], [ "Bai", "Baoming", "" ] ]
In this paper, new techniques are presented to either simplify or improve most existing upper bounds on the maximum-likelihood (ML) decoding performance of binary linear codes over additive white Gaussian noise (AWGN) channels. Firstly, the recently proposed union bound using truncated weight spectra by Ma {\em et al.} is re-derived in a detailed way based on Gallager's first bounding technique (GFBT), where the "good region" is specified by a sub-optimal list decoding algorithm. The error probability caused by the bad region can be upper-bounded by the tail probability of a binomial distribution, while the error probability caused by the good region can be upper-bounded by most existing techniques. Secondly, we propose two techniques to tighten the union bound on the error probability caused by the good region. The first technique is based on pair-wise error probabilities, which can be further tightened by employing the independence between the error events and certain components of the received random vectors. The second technique is based on triplet-wise error probabilities, which can be upper-bounded by proving that any three bipolar vectors form a non-obtuse triangle. The proposed bounds improve the conventional union bounds but have a similar complexity since they involve only the $Q$-function. The proposed bounds can also be adapted to bit-error probabilities.
1905.04084
Peng Zhang
Peng Zhang, Xiaoyu Ge, Jochen Renz
Support Relation Analysis for Objects in Multiple View RGB-D Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding physical relations between objects, especially their support relations, is crucial for robotic manipulation. There has been work on reasoning about support relations and structural stability of simple configurations in RGB-D images. In this paper, we propose a method for extracting more detailed physical knowledge from a set of RGB-D images taken from the same scene but from different views using qualitative reasoning and intuitive physical models. Rather than providing a simple contact relation graph and approximating stability over convex shapes, our method is able to provide a detailed supporting relation analysis based on a volumetric representation. Specifically, true supporting relations between objects (e.g., if an object supports another object by touching it on the side or if the object above contributes to the stability of the object below) are identified. We apply our method to real-world structures captured in warehouse scenarios and show our method works as desired.
[ { "created": "Fri, 10 May 2019 11:49:49 GMT", "version": "v1" } ]
2019-05-13
[ [ "Zhang", "Peng", "" ], [ "Ge", "Xiaoyu", "" ], [ "Renz", "Jochen", "" ] ]
Understanding physical relations between objects, especially their support relations, is crucial for robotic manipulation. There has been work on reasoning about support relations and structural stability of simple configurations in RGB-D images. In this paper, we propose a method for extracting more detailed physical knowledge from a set of RGB-D images taken from the same scene but from different views using qualitative reasoning and intuitive physical models. Rather than providing a simple contact relation graph and approximating stability over convex shapes, our method is able to provide a detailed supporting relation analysis based on a volumetric representation. Specifically, true supporting relations between objects (e.g., if an object supports another object by touching it on the side or if the object above contributes to the stability of the object below) are identified. We apply our method to real-world structures captured in warehouse scenarios and show our method works as desired.
1210.3769
Mahima Mehta
Mahima Mehta, Ranjan Bala Jain and Abhay Karandikar
Analysis of Blocking Probability in a Relay-based Cellular OFDMA Network
33 pages, 10 figures
Pre-print version (May 2012)
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relay deployment in Orthogonal Frequency Division Multiple Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. In an OFDMA system, each user requires a different number of subcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Traditional methods to compute blocking probability cannot be used in relay-based cellular OFDMA networks. In this paper, we present an approach to compute the blocking probability of such networks. We determine an expression for the probability distribution of a user's resource requirement based on its experienced SIR and then classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probability of the system using the multi-dimensional Erlang loss formulas.
[ { "created": "Sun, 14 Oct 2012 09:14:13 GMT", "version": "v1" }, { "created": "Tue, 29 Apr 2014 15:32:20 GMT", "version": "v2" } ]
2014-04-30
[ [ "Mehta", "Mahima", "" ], [ "Jain", "Ranjan Bala", "" ], [ "Karandikar", "Abhay", "" ] ]
Relay deployment in Orthogonal Frequency Division Multiple Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. In an OFDMA system, each user requires a different number of subcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Traditional methods to compute blocking probability cannot be used in relay-based cellular OFDMA networks. In this paper, we present an approach to compute the blocking probability of such networks. We determine an expression for the probability distribution of a user's resource requirement based on its experienced SIR and then classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probability of the system using the multi-dimensional Erlang loss formulas.
2106.14421
Maxime Gasse
Maxime Gasse, Damien Grasset, Guillaume Gaudron, Pierre-Yves Oudeyer
Causal Reinforcement Learning using Observational and Interventional Data
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Efficiently learning a causal model of the environment is a key challenge for model-based RL agents operating in POMDPs. We consider here a scenario where the learning agent has the ability to collect online experiences through direct interactions with the environment (interventional data), but also has access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient that makes this situation non-trivial is that we allow the observed agent to interact with the environment based on hidden information, which is not observed by the learning agent. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we import ideas from the well-established causal framework of do-calculus, and we express model-based reinforcement learning as a causal inference problem. Then, we propose a general yet simple methodology for leveraging offline data during learning. In a nutshell, the method relies on learning a latent-based causal transition model that explains both the interventional and observational regimes, and then using the recovered latent variable to infer the standard POMDP transition model via deconfounding. We prove our method is correct and efficient in the sense that it attains better generalization guarantees due to the offline data (in the asymptotic case), and we illustrate its effectiveness empirically on synthetic toy problems. Our contribution aims at bridging the gap between the fields of reinforcement learning and causality.
[ { "created": "Mon, 28 Jun 2021 06:58:20 GMT", "version": "v1" } ]
2021-06-29
[ [ "Gasse", "Maxime", "" ], [ "Grasset", "Damien", "" ], [ "Gaudron", "Guillaume", "" ], [ "Oudeyer", "Pierre-Yves", "" ] ]
Efficiently learning a causal model of the environment is a key challenge for model-based RL agents operating in POMDPs. We consider here a scenario where the learning agent has the ability to collect online experiences through direct interactions with the environment (interventional data), but also has access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient that makes this situation non-trivial is that we allow the observed agent to interact with the environment based on hidden information, which is not observed by the learning agent. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we import ideas from the well-established causal framework of do-calculus, and we express model-based reinforcement learning as a causal inference problem. Then, we propose a general yet simple methodology for leveraging offline data during learning. In a nutshell, the method relies on learning a latent-based causal transition model that explains both the interventional and observational regimes, and then using the recovered latent variable to infer the standard POMDP transition model via deconfounding. We prove our method is correct and efficient in the sense that it attains better generalization guarantees due to the offline data (in the asymptotic case), and we illustrate its effectiveness empirically on synthetic toy problems. Our contribution aims at bridging the gap between the fields of reinforcement learning and causality.
2112.01914
Zheyuan Zhou
Zheyuan Zhou, Liang Du, Xiaoqing Ye, Zhikang Zou, Xiao Tan, Li Zhang, Xiangyang Xue, Jianfeng Feng
SGM3D: Stereo Guided Monocular 3D Object Detection
8 pages, 5 figures
null
10.1109/LRA.2022.3191849
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monocular 3D object detection aims to predict the object location, dimension and orientation in 3D space alongside the object category given only a monocular image. It poses a great challenge due to its ill-posed property, namely the critical lack of depth information in the 2D image plane. While there exist approaches leveraging off-the-shelf depth estimation or relying on LiDAR sensors to mitigate this problem, the dependence on an additional depth model or expensive equipment severely limits their scalability to generic 3D perception. In this paper, we propose a stereo-guided monocular 3D object detection framework, dubbed SGM3D, adapting the robust 3D features learned from stereo inputs to enhance the features for monocular detection. We innovatively present a multi-granularity domain adaptation (MG-DA) mechanism to exploit the network's ability to generate stereo-mimicking features given only monocular cues. Coarse BEV feature-level as well as fine anchor-level domain adaptation are both leveraged for guidance in the monocular domain. In addition, we introduce an IoU matching-based alignment (IoU-MA) method for object-level domain adaptation between the stereo and monocular predictions to alleviate the mismatches while adopting the MG-DA. Extensive experiments demonstrate state-of-the-art results on the KITTI and Lyft datasets.
[ { "created": "Fri, 3 Dec 2021 13:57:14 GMT", "version": "v1" }, { "created": "Thu, 24 Feb 2022 16:43:36 GMT", "version": "v2" } ]
2023-07-06
[ [ "Zhou", "Zheyuan", "" ], [ "Du", "Liang", "" ], [ "Ye", "Xiaoqing", "" ], [ "Zou", "Zhikang", "" ], [ "Tan", "Xiao", "" ], [ "Zhang", "Li", "" ], [ "Xue", "Xiangyang", "" ], [ "Feng", "Jianfeng", "" ] ]
Monocular 3D object detection aims to predict the object location, dimension and orientation in 3D space alongside the object category given only a monocular image. It poses a great challenge due to its ill-posed property, namely the critical lack of depth information in the 2D image plane. While there exist approaches leveraging off-the-shelf depth estimation or relying on LiDAR sensors to mitigate this problem, the dependence on an additional depth model or expensive equipment severely limits their scalability to generic 3D perception. In this paper, we propose a stereo-guided monocular 3D object detection framework, dubbed SGM3D, adapting the robust 3D features learned from stereo inputs to enhance the features for monocular detection. We innovatively present a multi-granularity domain adaptation (MG-DA) mechanism to exploit the network's ability to generate stereo-mimicking features given only monocular cues. Coarse BEV feature-level as well as fine anchor-level domain adaptation are both leveraged for guidance in the monocular domain. In addition, we introduce an IoU matching-based alignment (IoU-MA) method for object-level domain adaptation between the stereo and monocular predictions to alleviate the mismatches while adopting the MG-DA. Extensive experiments demonstrate state-of-the-art results on the KITTI and Lyft datasets.
2106.10905
Pashupati Hegde
Pashupati Hegde, \c{C}a\u{g}atay Y{\i}ld{\i}z, Harri L\"ahdesm\"aki, Samuel Kaski, Markus Heinonen
Variational multiple shooting for Bayesian ODEs with Gaussian processes
Camera-ready version at UAI 2022
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent machine learning advances have proposed black-box estimation of unknown continuous-time system dynamics directly from data. However, earlier works are based on approximative ODE solutions or point estimates. We propose a novel Bayesian nonparametric model that uses Gaussian processes to infer posteriors of unknown ODE systems directly from data. We derive sparse variational inference with decoupled functional sampling to represent vector field posteriors. We also introduce a probabilistic shooting augmentation to enable efficient inference from arbitrarily long trajectories. The method demonstrates the benefit of computing vector field posteriors, with predictive uncertainty scores outperforming alternative methods on multiple ODE learning tasks.
[ { "created": "Mon, 21 Jun 2021 08:09:17 GMT", "version": "v1" }, { "created": "Wed, 26 Jan 2022 08:03:04 GMT", "version": "v2" }, { "created": "Sun, 17 Jul 2022 17:16:39 GMT", "version": "v3" } ]
2022-07-19
[ [ "Hegde", "Pashupati", "" ], [ "Yıldız", "Çağatay", "" ], [ "Lähdesmäki", "Harri", "" ], [ "Kaski", "Samuel", "" ], [ "Heinonen", "Markus", "" ] ]
Recent machine learning advances have proposed black-box estimation of unknown continuous-time system dynamics directly from data. However, earlier works are based on approximative ODE solutions or point estimates. We propose a novel Bayesian nonparametric model that uses Gaussian processes to infer posteriors of unknown ODE systems directly from data. We derive sparse variational inference with decoupled functional sampling to represent vector field posteriors. We also introduce a probabilistic shooting augmentation to enable efficient inference from arbitrarily long trajectories. The method demonstrates the benefit of computing vector field posteriors, with predictive uncertainty scores outperforming alternative methods on multiple ODE learning tasks.
1607.01517
Erel Segal-Halevi
Erel Segal-Halevi, Shmuel Nitzan
Envy-Free Cake-Cutting among Families
The paper is obsolete - it is subsumed by the paper "Fair Cake-Cutting among Families", arXiv:1510.03903
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper extends the classic cake-cutting problem to a situation in which the "cake" is divided among families. Each piece of cake is owned and used simultaneously by all members of the family. A typical example of such a cake is land. We examine three ways to assess the fairness of such a division, based on the classic no-envy criterion: (a) Average envy-freeness means that for each family, the average value of its share (averaged over all family members) is weakly larger than the average value of any other share; (b) Unanimous envy-freeness means that in each family, each member values the family's share weakly more than any other share; (c) Democratic envy-freeness means that in each family, at least half the members value the family's share weakly more than any other share. We study each of these definitions from both an existential and a computational perspective.
[ { "created": "Wed, 6 Jul 2016 08:24:57 GMT", "version": "v1" }, { "created": "Fri, 9 Aug 2019 10:07:15 GMT", "version": "v2" } ]
2019-08-12
[ [ "Segal-Halevi", "Erel", "" ], [ "Nitzan", "Shmuel", "" ] ]
This paper extends the classic cake-cutting problem to a situation in which the "cake" is divided among families. Each piece of cake is owned and used simultaneously by all members of the family. A typical example of such a cake is land. We examine three ways to assess the fairness of such a division, based on the classic no-envy criterion: (a) Average envy-freeness means that for each family, the average value of its share (averaged over all family members) is weakly larger than the average value of any other share; (b) Unanimous envy-freeness means that in each family, each member values the family's share weakly more than any other share; (c) Democratic envy-freeness means that in each family, at least half the members value the family's share weakly more than any other share. We study each of these definitions from both an existential and a computational perspective.
2403.20273
Sergio Vitale
Wenyu Yang, Sergio Vitale, Hossein Aghababaei, Giampaolo Ferraioli, Vito Pascazio, Gilda Schirinzi
CATSNet: a context-aware network for Height Estimation in a Forested Area based on Pol-TomoSAR data
Submitted to IEEE TGRS, under review
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Tropical forests are a key component of the global carbon cycle. With plans for upcoming space-borne missions like BIOMASS to monitor forestry, several airborne missions, including the TropiSAR and AfriSAR campaigns, have been successfully conducted. Typical Synthetic Aperture Radar Tomography (TomoSAR) methods involve complex models with low accuracy and high computation costs. In recent years, deep learning methods have also gained attention in the TomoSAR framework, showing interesting performance. Recently, a solution based on a fully connected Tomographic Neural Network (TSNN) has demonstrated its effectiveness in accurately estimating forest and ground heights by exploiting the pixel-wise elements of the covariance matrix derived from TomoSAR data. This work instead goes beyond the pixel-wise approach to define a context-aware deep learning-based solution named CATSNet. A convolutional neural network is considered to leverage patch-based information and extract features from a neighborhood rather than focus on a single pixel. The training is conducted by considering TomoSAR data as the input and Light Detection and Ranging (LiDAR) values as the ground truth. The experimental results show striking advantages in both performance and generalization ability by leveraging context information within Multiple Baselines (MB) TomoSAR data across different polarimetric modalities, surpassing existing techniques.
[ { "created": "Fri, 29 Mar 2024 16:27:40 GMT", "version": "v1" } ]
2024-04-01
[ [ "Yang", "Wenyu", "" ], [ "Vitale", "Sergio", "" ], [ "Aghababaei", "Hossein", "" ], [ "Ferraioli", "Giampaolo", "" ], [ "Pascazio", "Vito", "" ], [ "Schirinzi", "Gilda", "" ] ]
Tropical forests are a key component of the global carbon cycle. With plans for upcoming space-borne missions like BIOMASS to monitor forestry, several airborne missions, including the TropiSAR and AfriSAR campaigns, have been successfully conducted. Typical Synthetic Aperture Radar Tomography (TomoSAR) methods involve complex models with low accuracy and high computation costs. In recent years, deep learning methods have also gained attention in the TomoSAR framework, showing interesting performance. Recently, a solution based on a fully connected Tomographic Neural Network (TSNN) has demonstrated its effectiveness in accurately estimating forest and ground heights by exploiting the pixel-wise elements of the covariance matrix derived from TomoSAR data. This work instead goes beyond the pixel-wise approach to define a context-aware deep learning-based solution named CATSNet. A convolutional neural network is considered to leverage patch-based information and extract features from a neighborhood rather than focus on a single pixel. The training is conducted by considering TomoSAR data as the input and Light Detection and Ranging (LiDAR) values as the ground truth. The experimental results show striking advantages in both performance and generalization ability by leveraging context information within Multiple Baselines (MB) TomoSAR data across different polarimetric modalities, surpassing existing techniques.
1404.0442
Kevin Carlberg
Kevin Carlberg
Adaptive $h$-refinement for reduced-order models
submitted to the International Journal for Numerical Methods in Engineering, Special Issue on Model Reduction
International Journal for Numerical Methods in Engineering, Vol. 102, No. 5, p.1192-1210 (2014)
10.1002/nme.4800
null
cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a method to adaptively refine reduced-order models \emph{a posteriori} without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive $h$-refinement: it enriches the reduced-basis space online by `splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive $k$-means clustering of the state variables using snapshot data. The method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Further, it enables the reduced-order model to satisfy \emph{any prescribed error tolerance} regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
[ { "created": "Wed, 2 Apr 2014 03:29:43 GMT", "version": "v1" }, { "created": "Thu, 3 Apr 2014 04:12:34 GMT", "version": "v2" }, { "created": "Fri, 18 Jul 2014 01:09:10 GMT", "version": "v3" } ]
2015-04-16
[ [ "Carlberg", "Kevin", "" ] ]
This work presents a method to adaptively refine reduced-order models \emph{a posteriori} without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive $h$-refinement: it enriches the reduced-basis space online by `splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive $k$-means clustering of the state variables using snapshot data. The method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Further, it enables the reduced-order model to satisfy \emph{any prescribed error tolerance} regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
2305.10172
Yang Deng
Yang Deng, Wenxuan Zhang, Yifei Yuan, Wai Lam
Knowledge-enhanced Mixed-initiative Dialogue System for Emotional Support Conversations
Accepted by ACL 2023 main conference
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
Unlike empathetic dialogues, the system in emotional support conversations (ESC) is expected to not only convey empathy for comforting the help-seeker, but also proactively assist in exploring and addressing their problems during the conversation. In this work, we study the problem of mixed-initiative ESC where the user and system can both take the initiative in leading the conversation. Specifically, we conduct a novel analysis on mixed-initiative ESC systems with a tailor-designed schema that divides utterances into different types with speaker roles and initiative types. Four emotional support metrics are proposed to evaluate the mixed-initiative interactions. The analysis reveals the necessity and challenges of building mixed-initiative ESC systems. In the light of this, we propose a knowledge-enhanced mixed-initiative framework (KEMI) for ESC, which retrieves actual case knowledge from a large-scale mental health knowledge graph for generating mixed-initiative responses. Experimental results on two ESC datasets show the superiority of KEMI in both content-preserving evaluation and mixed initiative related analyses.
[ { "created": "Wed, 17 May 2023 12:55:52 GMT", "version": "v1" } ]
2023-05-18
[ [ "Deng", "Yang", "" ], [ "Zhang", "Wenxuan", "" ], [ "Yuan", "Yifei", "" ], [ "Lam", "Wai", "" ] ]
Unlike empathetic dialogues, the system in emotional support conversations (ESC) is expected to not only convey empathy for comforting the help-seeker, but also proactively assist in exploring and addressing their problems during the conversation. In this work, we study the problem of mixed-initiative ESC where the user and system can both take the initiative in leading the conversation. Specifically, we conduct a novel analysis on mixed-initiative ESC systems with a tailor-designed schema that divides utterances into different types with speaker roles and initiative types. Four emotional support metrics are proposed to evaluate the mixed-initiative interactions. The analysis reveals the necessity and challenges of building mixed-initiative ESC systems. In the light of this, we propose a knowledge-enhanced mixed-initiative framework (KEMI) for ESC, which retrieves actual case knowledge from a large-scale mental health knowledge graph for generating mixed-initiative responses. Experimental results on two ESC datasets show the superiority of KEMI in both content-preserving evaluation and mixed initiative related analyses.
1009.5048
S. M. Kamruzzaman
Abdul Kadar Muhammad Masum, Mohammad Mahadi Hassan, and S. M. Kamruzzaman
The Most Advantageous Bangla Keyboard Layout Using Data Mining Technique
10 Pages, International Journal
Journal of Computer Science, IBAIS University, Dhaka, Bangladesh, Vol. 1, No. 2, Dec. 2007
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bangla alphabet has a large number of letters, which makes it complicated to type fast on a Bangla keyboard. The proposed keyboard will maximize the operator's typing speed, as they can type with both hands in parallel. Association rules from data mining are used here to distribute the Bangla characters on the keyboard. The frequencies of monographs, digraphs and trigraphs, derived from a data warehouse, are analyzed, and association rules of data mining are then used to distribute the Bangla characters in the layout. Experimental results on several datasets show the effectiveness of the proposed approach. This paper presents an optimal Bangla keyboard layout that distributes the load equally on both hands, thus maximizing ease and minimizing effort.
[ { "created": "Sun, 26 Sep 2010 02:09:41 GMT", "version": "v1" } ]
2010-09-28
[ [ "Masum", "Abdul Kadar Muhammad", "" ], [ "Hassan", "Mohammad Mahadi", "" ], [ "Kamruzzaman", "S. M.", "" ] ]
The Bangla alphabet has a large number of letters, which makes it complicated to type fast on a Bangla keyboard. The proposed keyboard will maximize the operator's typing speed, as they can type with both hands in parallel. Association rules from data mining are used here to distribute the Bangla characters on the keyboard. The frequencies of monographs, digraphs and trigraphs, derived from a data warehouse, are analyzed, and association rules of data mining are then used to distribute the Bangla characters in the layout. Experimental results on several datasets show the effectiveness of the proposed approach. This paper presents an optimal Bangla keyboard layout that distributes the load equally on both hands, thus maximizing ease and minimizing effort.
1911.03951
Eyke H\"ullermeier
Ammar Shaker and Eyke H\"ullermeier
TSK-Streams: Learning TSK Fuzzy Systems on Data Streams
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted a lot of interest in machine learning in the recent past, and also stimulated research in related fields, such as computational intelligence and fuzzy systems. In particular, several rule-based methods for the incremental induction of regression models have been proposed. In this paper, we develop a method that combines the strengths of two existing approaches rooted in different learning paradigms. More concretely, our method adopts basic principles of the state-of-the-art learning algorithm AMRules and enriches them by the representational advantages of fuzzy rules. In a comprehensive experimental study, TSK-Streams is shown to be highly competitive in terms of performance.
[ { "created": "Sun, 10 Nov 2019 16:04:22 GMT", "version": "v1" } ]
2019-11-12
[ [ "Shaker", "Ammar", "" ], [ "Hüllermeier", "Eyke", "" ] ]
The problem of adaptive learning from evolving and possibly non-stationary data streams has attracted a lot of interest in machine learning in the recent past, and also stimulated research in related fields, such as computational intelligence and fuzzy systems. In particular, several rule-based methods for the incremental induction of regression models have been proposed. In this paper, we develop a method that combines the strengths of two existing approaches rooted in different learning paradigms. More concretely, our method adopts basic principles of the state-of-the-art learning algorithm AMRules and enriches them by the representational advantages of fuzzy rules. In a comprehensive experimental study, TSK-Streams is shown to be highly competitive in terms of performance.
2406.17813
Salvatore Greco
Salvatore Greco, Bartolomeo Vacchetti, Daniele Apiletti, Tania Cerquitelli
Unsupervised Concept Drift Detection from Deep Learning Representations in Real-time
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time, leading to a degradation of the model's performance. Consequently, models deployed in production require continuous monitoring through drift detection techniques. Most drift detection methods to date are supervised, i.e., based on ground-truth labels. However, true labels are usually not available in many real-world scenarios. Although recent efforts have been made to develop unsupervised methods, they often lack the required accuracy, have a complexity that makes real-time implementation in production environments difficult, or are unable to effectively characterize drift. To address these challenges, we propose DriftLens, an unsupervised real-time concept drift detection framework. It works on unstructured data by exploiting the distribution distances of deep learning representations. DriftLens can also provide drift characterization by analyzing each label separately. A comprehensive experimental evaluation is presented with multiple deep learning classifiers for text, image, and speech. Results show that (i) DriftLens performs better than previous methods in detecting drift in $11/13$ use cases; (ii) it runs at least 5 times faster; (iii) its detected drift value is highly consistent with the amount of drift (correlation $\geq 0.85$); (iv) it is robust to parameter changes.
[ { "created": "Mon, 24 Jun 2024 23:41:46 GMT", "version": "v1" } ]
2024-06-27
[ [ "Greco", "Salvatore", "" ], [ "Vacchetti", "Bartolomeo", "" ], [ "Apiletti", "Daniele", "" ], [ "Cerquitelli", "Tania", "" ] ]
Concept Drift is a phenomenon in which the underlying data distribution and statistical properties of a target domain change over time, leading to a degradation of the model's performance. Consequently, models deployed in production require continuous monitoring through drift detection techniques. Most drift detection methods to date are supervised, i.e., based on ground-truth labels. However, true labels are usually not available in many real-world scenarios. Although recent efforts have been made to develop unsupervised methods, they often lack the required accuracy, have a complexity that makes real-time implementation in production environments difficult, or are unable to effectively characterize drift. To address these challenges, we propose DriftLens, an unsupervised real-time concept drift detection framework. It works on unstructured data by exploiting the distribution distances of deep learning representations. DriftLens can also provide drift characterization by analyzing each label separately. A comprehensive experimental evaluation is presented with multiple deep learning classifiers for text, image, and speech. Results show that (i) DriftLens performs better than previous methods in detecting drift in $11/13$ use cases; (ii) it runs at least 5 times faster; (iii) its detected drift value is highly consistent with the amount of drift (correlation $\geq 0.85$); (iv) it is robust to parameter changes.
1601.05003
Florent Foucaud
Florent Foucaud and Ralf Klasing
Parameterized and approximation complexity of the detection pair problem in graphs
13 pages
Journal of Graph Algorithms and Applications 21(6):1039-1056, 2017
10.7155/jgaa.00449
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the complexity of the problem DETECTION PAIR. A detection pair of a graph $G$ is a pair $(W,L)$ of sets of detectors with $W\subseteq V(G)$, the watchers, and $L\subseteq V(G)$, the listeners, such that for every pair $u,v$ of vertices that are not dominated by a watcher of $W$, there is a listener of $L$ whose distances to $u$ and to $v$ are different. The goal is to minimize $|W|+|L|$. This problem generalizes the two classic problems DOMINATING SET and METRIC DIMENSION, that correspond to the restrictions $L=\emptyset$ and $W=\emptyset$, respectively. DETECTION PAIR was recently introduced by Finbow, Hartnell and Young [A. S. Finbow, B. L. Hartnell and J. R. Young. The complexity of monitoring a network with both watchers and listeners. Manuscript, 2015], who proved it to be NP-complete on trees, a surprising result given that both DOMINATING SET and METRIC DIMENSION are known to be linear-time solvable on trees. It follows from an existing reduction by Hartung and Nichterlein for METRIC DIMENSION that even on bipartite subcubic graphs of arbitrarily large girth, DETECTION PAIR is NP-hard to approximate within a sub-logarithmic factor and W[2]-hard (when parameterized by solution size). We show, using a reduction to SET COVER, that DETECTION PAIR is approximable within a factor logarithmic in the number of vertices of the input graph. Our two main results are a linear-time $2$-approximation algorithm and an FPT algorithm for DETECTION PAIR on trees.
[ { "created": "Tue, 19 Jan 2016 17:20:40 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2018 07:00:24 GMT", "version": "v2" } ]
2018-01-31
[ [ "Foucaud", "Florent", "" ], [ "Klasing", "Ralf", "" ] ]
We study the complexity of the problem DETECTION PAIR. A detection pair of a graph $G$ is a pair $(W,L)$ of sets of detectors with $W\subseteq V(G)$, the watchers, and $L\subseteq V(G)$, the listeners, such that for every pair $u,v$ of vertices that are not dominated by a watcher of $W$, there is a listener of $L$ whose distances to $u$ and to $v$ are different. The goal is to minimize $|W|+|L|$. This problem generalizes the two classic problems DOMINATING SET and METRIC DIMENSION, that correspond to the restrictions $L=\emptyset$ and $W=\emptyset$, respectively. DETECTION PAIR was recently introduced by Finbow, Hartnell and Young [A. S. Finbow, B. L. Hartnell and J. R. Young. The complexity of monitoring a network with both watchers and listeners. Manuscript, 2015], who proved it to be NP-complete on trees, a surprising result given that both DOMINATING SET and METRIC DIMENSION are known to be linear-time solvable on trees. It follows from an existing reduction by Hartung and Nichterlein for METRIC DIMENSION that even on bipartite subcubic graphs of arbitrarily large girth, DETECTION PAIR is NP-hard to approximate within a sub-logarithmic factor and W[2]-hard (when parameterized by solution size). We show, using a reduction to SET COVER, that DETECTION PAIR is approximable within a factor logarithmic in the number of vertices of the input graph. Our two main results are a linear-time $2$-approximation algorithm and an FPT algorithm for DETECTION PAIR on trees.
2303.06010
Timothy Cargan
Timothy Cargan, Dario Landa-Silva, Isaac Triguero
Local-Global Methods for Generalised Solar Irradiance Forecasting
40 pages, 11 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
As the use of solar power increases, having accurate and timely forecasts will be essential for smooth grid operation. There are many proposed methods for forecasting solar irradiance / solar power production. However, many of these methods formulate the problem as a time-series, relying on near real-time access to observations at the location of interest to generate forecasts. This requires both access to a real-time stream of data and enough historical observations for these methods to be deployed. In this paper, we propose the use of Global methods to train our models in a generalised way, enabling them to generate forecasts for unseen locations. We apply this approach to both classical ML and state-of-the-art methods. Using data from 20 locations distributed throughout the UK and widely available weather data, we show that it is possible to build systems that do not require access to this data. We utilise and compare both satellite and ground observations (e.g. temperature, pressure) of weather data. Leveraging weather observations and measurements from other locations, we show it is possible to create models capable of accurately forecasting solar irradiance at new locations. This could facilitate planning and optimisation for both newly deployed solar farms and domestic installations from the moment they come online. Additionally, we show that training a single global model for multiple locations can produce a more robust model with more consistent and accurate results across locations.
[ { "created": "Fri, 10 Mar 2023 16:13:35 GMT", "version": "v1" }, { "created": "Mon, 10 Jul 2023 15:33:54 GMT", "version": "v2" } ]
2023-07-11
[ [ "Cargan", "Timothy", "" ], [ "Landa-Silva", "Dario", "" ], [ "Triguero", "Isaac", "" ] ]
As the use of solar power increases, having accurate and timely forecasts will be essential for smooth grid operation. There are many proposed methods for forecasting solar irradiance / solar power production. However, many of these methods formulate the problem as a time-series, relying on near real-time access to observations at the location of interest to generate forecasts. This requires both access to a real-time stream of data and enough historical observations for these methods to be deployed. In this paper, we propose the use of Global methods to train our models in a generalised way, enabling them to generate forecasts for unseen locations. We apply this approach to both classical ML and state-of-the-art methods. Using data from 20 locations distributed throughout the UK and widely available weather data, we show that it is possible to build systems that do not require access to this data. We utilise and compare both satellite and ground observations (e.g. temperature, pressure) of weather data. Leveraging weather observations and measurements from other locations, we show it is possible to create models capable of accurately forecasting solar irradiance at new locations. This could facilitate planning and optimisation for both newly deployed solar farms and domestic installations from the moment they come online. Additionally, we show that training a single global model for multiple locations can produce a more robust model with more consistent and accurate results across locations.
1605.05172
Taraka Rama Kasicheyanula
Taraka Rama
Siamese convolutional networks based on phonetic features for cognate identification
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
In this paper, we explore the use of convolutional networks (ConvNets) for the purpose of cognate identification. We compare our architecture with binary classifiers based on string similarity measures on different language families. Our experiments show that convolutional networks achieve competitive results across concepts and across language families at the task of cognate identification.
[ { "created": "Tue, 17 May 2016 14:07:43 GMT", "version": "v1" }, { "created": "Sat, 2 Jul 2016 12:29:08 GMT", "version": "v2" } ]
2016-07-05
[ [ "Rama", "Taraka", "" ] ]
In this paper, we explore the use of convolutional networks (ConvNets) for the purpose of cognate identification. We compare our architecture with binary classifiers based on string similarity measures on different language families. Our experiments show that convolutional networks achieve competitive results across concepts and across language families at the task of cognate identification.
2206.06257
Yihua Zhang
Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu
Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification. To defend against such attacks, an effective and popular approach, known as adversarial training (AT), has been shown to mitigate the negative impact of adversarial attacks by virtue of a min-max robust training method. While effective, it remains unclear whether it can successfully be adapted to the distributed learning context. The power of distributed optimization over multiple machines enables us to scale up robust training over large models and datasets. Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines. We show that DAT is general, which supports training over labeled and unlabeled data, multiple types of attack generation methods, and gradient compression operations favored for distributed optimization. Theoretically, we provide, under standard conditions in the optimization theory, the convergence rate of DAT to the first-order stationary points in general non-convex settings. Empirically, we demonstrate that DAT either matches or outperforms state-of-the-art robust accuracies and achieves a graceful training speedup (e.g., on ResNet-50 under ImageNet). Codes are available at https://github.com/dat-2022/dat.
[ { "created": "Mon, 13 Jun 2022 15:39:43 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2022 15:37:17 GMT", "version": "v2" } ]
2022-09-08
[ [ "Zhang", "Gaoyuan", "" ], [ "Lu", "Songtao", "" ], [ "Zhang", "Yihua", "" ], [ "Chen", "Xiangyi", "" ], [ "Chen", "Pin-Yu", "" ], [ "Fan", "Quanfu", "" ], [ "Martie", "Lee", "" ], [ "Horesh", "Lior", "" ], [ "Hong", "Mingyi", "" ], [ "Liu", "Sijia", "" ] ]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification. To defend against such attacks, an effective and popular approach, known as adversarial training (AT), has been shown to mitigate the negative impact of adversarial attacks by virtue of a min-max robust training method. While effective, it remains unclear whether it can successfully be adapted to the distributed learning context. The power of distributed optimization over multiple machines enables us to scale up robust training over large models and datasets. Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines. We show that DAT is general, which supports training over labeled and unlabeled data, multiple types of attack generation methods, and gradient compression operations favored for distributed optimization. Theoretically, we provide, under standard conditions in the optimization theory, the convergence rate of DAT to the first-order stationary points in general non-convex settings. Empirically, we demonstrate that DAT either matches or outperforms state-of-the-art robust accuracies and achieves a graceful training speedup (e.g., on ResNet-50 under ImageNet). Codes are available at https://github.com/dat-2022/dat.
2210.15027
Elkebir Sarhrouni
Asma Elmaizi, Hasna Nhaila, Elkebir Sarhrouni, Ahmed Hammouch, and Chafik Nacir
A novel information gain-based approach for classification and dimensionality reduction of hyperspectral images
null
Procedia Computer Science, 2019, 148, pp. 126-134. DOI: 10.1016/j.procs.2019.01.016
10.1016/j.procs.2019.01.016
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, hyperspectral sensors have improved our ability to monitor the earth's surface with high spectral resolution. However, the high dimensionality of spectral data brings challenges for image processing. Consequently, dimensionality reduction is a necessary step to reduce the computational complexity and increase the classification accuracy. In this paper, we propose a new filter approach based on information gain for dimensionality reduction and classification of hyperspectral images. A special strategy based on hyperspectral band selection is adopted to pick the most informative bands and discard the irrelevant and noisy ones. The algorithm evaluates the relevancy of the bands based on the information gain function with the support vector machine classifier. The proposed method is compared with three competing methods on two benchmark hyperspectral datasets (Indiana, Pavia). The comparison results show that the information gain filter approach outperforms the other methods on the tested datasets and can significantly reduce the computation cost while improving the classification accuracy. Keywords: hyperspectral images; dimensionality reduction; information gain; classification accuracy.
[ { "created": "Wed, 26 Oct 2022 20:59:57 GMT", "version": "v1" } ]
2022-10-28
[ [ "Elmaizi", "Asma", "" ], [ "Nhaila", "Hasna", "" ], [ "Sarhrouni", "Elkebir", "" ], [ "Hammouch", "Ahmed", "" ], [ "Nacir", "Chafik", "" ] ]
Recently, hyperspectral sensors have improved our ability to monitor the earth's surface with high spectral resolution. However, the high dimensionality of spectral data brings challenges for image processing. Consequently, dimensionality reduction is a necessary step to reduce the computational complexity and increase the classification accuracy. In this paper, we propose a new filter approach based on information gain for dimensionality reduction and classification of hyperspectral images. A special strategy based on hyperspectral band selection is adopted to pick the most informative bands and discard the irrelevant and noisy ones. The algorithm evaluates the relevancy of the bands based on the information gain function with the support vector machine classifier. The proposed method is compared with three competing methods on two benchmark hyperspectral datasets (Indiana, Pavia). The comparison results show that the information gain filter approach outperforms the other methods on the tested datasets and can significantly reduce the computation cost while improving the classification accuracy. Keywords: hyperspectral images; dimensionality reduction; information gain; classification accuracy.
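The band-selection criterion can be sketched concretely: discretize each band, score it by information gain against the class labels, and keep the highest-scoring bands. The helper names and the synthetic two-band data below are hypothetical illustrations; the paper's pipeline additionally wraps an SVM classifier around the selection step.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    # Shannon entropy H(Y) of a label vector, in bits.
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def information_gain(band, labels, bins=4):
    # Discretize one band into quantile cells, then IG = H(Y) - H(Y | band).
    edges = np.quantile(band, np.linspace(0, 1, bins + 1)[1:-1])
    cells = np.digitize(band, edges)
    h_cond = 0.0
    for v in np.unique(cells):
        mask = cells == v
        h_cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_cond

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
noise_band = rng.normal(size=200)                  # irrelevant, noisy band
good_band = labels + 0.1 * rng.normal(size=200)    # label-correlated band

bands = np.column_stack([noise_band, good_band])
gains = [information_gain(bands[:, j], labels) for j in range(bands.shape[1])]
best = int(np.argmax(gains))   # index of the most informative band
```

Ranking all bands by `gains` and keeping the top-k is the filter step; the classifier then runs only on the retained bands.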
2305.01303
No\'e P\'erez-Higueras
No\'e P\'erez-Higueras and Roberto Otero and Fernando Caballero and Luis Merino
HuNavSim: A ROS 2 Human Navigation Simulator for Benchmarking Human-Aware Robot Navigation
Preprint version of the paper accepted in the RA-L Journal
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents the Human Navigation Simulator (HuNavSim), a novel open-source tool for the simulation of different human-agent navigation behaviors in scenarios with mobile robots. The tool, the first developed under the ROS 2 framework, can be employed along with different well-known robotics simulators such as Gazebo. The main goal is to ease the development and evaluation of human-aware robot navigation systems in simulation. Besides a general human-navigation model, HuNavSim includes, as a novelty, a rich set of individual and realistic human navigation behaviors and a complete set of metrics for social navigation benchmarking.
[ { "created": "Tue, 2 May 2023 10:26:51 GMT", "version": "v1" }, { "created": "Wed, 17 May 2023 14:13:47 GMT", "version": "v2" }, { "created": "Wed, 13 Sep 2023 13:15:44 GMT", "version": "v3" } ]
2023-09-14
[ [ "Pérez-Higueras", "Noé", "" ], [ "Otero", "Roberto", "" ], [ "Caballero", "Fernando", "" ], [ "Merino", "Luis", "" ] ]
This work presents the Human Navigation Simulator (HuNavSim), a novel open-source tool for the simulation of different human-agent navigation behaviors in scenarios with mobile robots. The tool, the first developed under the ROS 2 framework, can be employed along with different well-known robotics simulators such as Gazebo. The main goal is to ease the development and evaluation of human-aware robot navigation systems in simulation. Besides a general human-navigation model, HuNavSim includes, as a novelty, a rich set of individual and realistic human navigation behaviors and a complete set of metrics for social navigation benchmarking.
2305.05091
Prateek Chhikara
Prateek Chhikara, Jiarui Zhang, Filip Ilievski, Jonathan Francis and Kaixin Ma
Knowledge-enhanced Agents for Interactive Text Games
Published at K-CAP '23
null
10.1145/3587259.3627561
null
cs.CL cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Communication via natural language is a key aspect of machine intelligence, and it requires computational models to learn and reason about world concepts, with varying levels of supervision. Significant progress has been made on fully-supervised non-interactive tasks, such as question-answering and procedural text understanding. Yet, various sequential interactive tasks, as in text-based games, have revealed limitations of existing approaches in terms of coherence, contextual awareness, and their ability to learn effectively from the environment. In this paper, we propose a knowledge-injection framework for improved functional grounding of agents in text-based games. Specifically, we consider two forms of domain knowledge that we inject into learning-based agents: memory of previous correct actions and affordances of relevant objects in the environment. Our framework supports two representative model classes: reinforcement learning agents and language model agents. Furthermore, we devise multiple injection strategies for the above domain knowledge types and agent architectures, including injection via knowledge graphs and augmentation of the existing input encoding strategies. We experiment with four models on the 10 tasks in the ScienceWorld text-based game environment, to illustrate the impact of knowledge injection on various model configurations and challenging task settings. Our findings provide crucial insights into the interplay between task properties, model architectures, and domain knowledge for interactive contexts.
[ { "created": "Mon, 8 May 2023 23:31:39 GMT", "version": "v1" }, { "created": "Sun, 17 Dec 2023 02:03:29 GMT", "version": "v2" } ]
2023-12-19
[ [ "Chhikara", "Prateek", "" ], [ "Zhang", "Jiarui", "" ], [ "Ilievski", "Filip", "" ], [ "Francis", "Jonathan", "" ], [ "Ma", "Kaixin", "" ] ]
Communication via natural language is a key aspect of machine intelligence, and it requires computational models to learn and reason about world concepts, with varying levels of supervision. Significant progress has been made on fully-supervised non-interactive tasks, such as question-answering and procedural text understanding. Yet, various sequential interactive tasks, as in text-based games, have revealed limitations of existing approaches in terms of coherence, contextual awareness, and their ability to learn effectively from the environment. In this paper, we propose a knowledge-injection framework for improved functional grounding of agents in text-based games. Specifically, we consider two forms of domain knowledge that we inject into learning-based agents: memory of previous correct actions and affordances of relevant objects in the environment. Our framework supports two representative model classes: reinforcement learning agents and language model agents. Furthermore, we devise multiple injection strategies for the above domain knowledge types and agent architectures, including injection via knowledge graphs and augmentation of the existing input encoding strategies. We experiment with four models on the 10 tasks in the ScienceWorld text-based game environment, to illustrate the impact of knowledge injection on various model configurations and challenging task settings. Our findings provide crucial insights into the interplay between task properties, model architectures, and domain knowledge for interactive contexts.
2401.06521
Junxian Mu
Yu Wang, Junxian Mu, Pengfei Zhu, Qinghua Hu
Exploring Diverse Representations for Open Set Recognition
9 pages, 4 figures. Accepted to AAAI 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open set recognition (OSR) requires the model to classify samples that belong to closed sets while rejecting unknown samples during test. Currently, generative models often perform better than discriminative models in OSR, but recent studies show that generative models may be computationally infeasible or unstable on complex tasks. In this paper, we provide insights into OSR and find that learning supplementary representations can theoretically reduce the open space risk. Based on the analysis, we propose a new model, namely Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way. MEDAF consists of multiple experts that are learned with an attention diversity regularization term to ensure the attention maps are mutually different. The logits learned by each expert are adaptively fused and used to identify the unknowns through the score function. We show that the differences in attention maps can lead to diverse representations so that the fused representations can well handle the open space. Extensive experiments are conducted on standard and OSR large-scale benchmarks. Results show that the proposed discriminative method can outperform existing generative models by up to 9.5% on AUROC and achieve new state-of-the-art performance with little computational cost. Our method can also seamlessly integrate existing classification models. Code is available at https://github.com/Vanixxz/MEDAF.
[ { "created": "Fri, 12 Jan 2024 11:40:22 GMT", "version": "v1" } ]
2024-01-15
[ [ "Wang", "Yu", "" ], [ "Mu", "Junxian", "" ], [ "Zhu", "Pengfei", "" ], [ "Hu", "Qinghua", "" ] ]
Open set recognition (OSR) requires the model to classify samples that belong to closed sets while rejecting unknown samples during test. Currently, generative models often perform better than discriminative models in OSR, but recent studies show that generative models may be computationally infeasible or unstable on complex tasks. In this paper, we provide insights into OSR and find that learning supplementary representations can theoretically reduce the open space risk. Based on the analysis, we propose a new model, namely Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way. MEDAF consists of multiple experts that are learned with an attention diversity regularization term to ensure the attention maps are mutually different. The logits learned by each expert are adaptively fused and used to identify the unknowns through the score function. We show that the differences in attention maps can lead to diverse representations so that the fused representations can well handle the open space. Extensive experiments are conducted on standard and OSR large-scale benchmarks. Results show that the proposed discriminative method can outperform existing generative models by up to 9.5% on AUROC and achieve new state-of-the-art performance with little computational cost. Our method can also seamlessly integrate existing classification models. Code is available at https://github.com/Vanixxz/MEDAF.
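A generic sketch of score-based open-set rejection in the spirit of the description above: logits from several experts are fused (here by a simple uniform average rather than MEDAF's adaptive fusion), and a sample is rejected as unknown when the maximum fused logit falls below a threshold. All names, offsets, and the threshold below are illustrative assumptions, and the attention-diversity regularization is not reproduced.

```python
import numpy as np

def fuse_and_score(expert_logits):
    # Uniform fusion across experts, then max-logit score for rejection.
    fused = np.mean(expert_logits, axis=0)
    return int(fused.argmax()), float(fused.max())

rng = np.random.default_rng(4)

# Three experts, four closed-set classes. A known-class sample produces
# a strong logit on class 2; an unknown sample produces no strong logit.
known_logits = np.stack([rng.normal(size=4) + np.array([0, 0, 5, 0])
                         for _ in range(3)])
unknown_logits = np.stack([rng.normal(size=4) for _ in range(3)])

tau = 2.5                                 # illustrative rejection threshold
k_pred, k_score = fuse_and_score(known_logits)
u_pred, u_score = fuse_and_score(unknown_logits)

k_is_known = k_score >= tau               # accepted: confident on class 2
u_is_known = u_score >= tau               # rejected: flagged as unknown
```

The score function thus handles both sub-tasks at once: argmax gives the closed-set class, and the thresholded max gives the known/unknown decision.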
2011.03880
Zijie Huang
Zijie Huang, Yizhou Sun, Wei Wang
Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations
Neurips 2020
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world systems, such as moving planets, can be considered as multi-agent dynamic systems, where objects interact with each other and co-evolve over time. Such dynamics is usually difficult to capture, and understanding and predicting it from observed trajectories of objects becomes a critical research problem in many domains. Most existing algorithms, however, assume that the observations are regularly sampled and that all the objects can be fully observed at each sampling time, which is impractical for many applications. In this paper, we propose to learn system dynamics from irregularly-sampled partial observations with underlying graph structure for the first time. To tackle the above challenge, we present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamic systems with known graph structure. It can simultaneously learn the embeddings of high-dimensional trajectories and infer continuous latent system dynamics. Our model employs a novel encoder parameterized by a graph neural network that can infer initial states in an unsupervised way from irregularly-sampled partial observations of structural objects, and utilizes neural ODEs to infer arbitrarily complex continuous-time latent dynamics. Experiments on motion capture, spring system, and charged particle datasets demonstrate the effectiveness of our approach.
[ { "created": "Sun, 8 Nov 2020 01:02:22 GMT", "version": "v1" } ]
2020-11-10
[ [ "Huang", "Zijie", "" ], [ "Sun", "Yizhou", "" ], [ "Wang", "Wei", "" ] ]
Many real-world systems, such as moving planets, can be considered as multi-agent dynamic systems, where objects interact with each other and co-evolve over time. Such dynamics is usually difficult to capture, and understanding and predicting it from observed trajectories of objects becomes a critical research problem in many domains. Most existing algorithms, however, assume that the observations are regularly sampled and that all the objects can be fully observed at each sampling time, which is impractical for many applications. In this paper, we propose to learn system dynamics from irregularly-sampled partial observations with underlying graph structure for the first time. To tackle the above challenge, we present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamic systems with known graph structure. It can simultaneously learn the embeddings of high-dimensional trajectories and infer continuous latent system dynamics. Our model employs a novel encoder parameterized by a graph neural network that can infer initial states in an unsupervised way from irregularly-sampled partial observations of structural objects, and utilizes neural ODEs to infer arbitrarily complex continuous-time latent dynamics. Experiments on motion capture, spring system, and charged particle datasets demonstrate the effectiveness of our approach.
2003.02446
Hongzhi Wang
Meifan Zhang and Hongzhi Wang
LAQP: Learning-based Approximate Query Processing
null
null
null
null
cs.DB cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Querying big data is a challenging task due to the rapid growth of data volume. Approximate query processing (AQP) is a way to meet the requirement of fast response. In this paper, we propose a learning-based AQP method called LAQP. LAQP builds an error model learned from historical queries to predict the sampling-based estimation error of each new query. It combines sampling-based AQP, pre-computed aggregations, and the learned error model to provide highly accurate query estimates with a small off-line sample. The experimental results indicate that LAQP outperforms sampling-based AQP, pre-aggregation-based AQP, and the most recent learning-based AQP method.
[ { "created": "Thu, 5 Mar 2020 06:08:25 GMT", "version": "v1" } ]
2020-03-06
[ [ "Zhang", "Meifan", "" ], [ "Wang", "Hongzhi", "" ] ]
Querying big data is a challenging task due to the rapid growth of data volume. Approximate query processing (AQP) is a way to meet the requirement of fast response. In this paper, we propose a learning-based AQP method called LAQP. LAQP builds an error model learned from historical queries to predict the sampling-based estimation error of each new query. It combines sampling-based AQP, pre-computed aggregations, and the learned error model to provide highly accurate query estimates with a small off-line sample. The experimental results indicate that LAQP outperforms sampling-based AQP, pre-aggregation-based AQP, and the most recent learning-based AQP method.
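The sampling half of the approach can be sketched as follows: answer an aggregate query from a small off-line uniform sample by scaling the sample aggregate, which is the estimator whose per-query error LAQP's learned model would then predict. The table, query, and thresholds below are synthetic illustrations, not LAQP itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic fact column and a small off-line uniform sample of it.
table = rng.exponential(scale=10.0, size=100_000)
sample = rng.choice(table, size=2_000, replace=False)
scale = len(table) / len(sample)          # Horvitz-Thompson scaling factor

def approx_sum(sample, scale, t):
    # Sampling-based estimate of SUM(value) WHERE value > t.
    return scale * sample[sample > t].sum()

t = 5.0
exact = table[table > t].sum()            # exact answer, for comparison
approx = approx_sum(sample, scale, t)
rel_err = abs(approx - exact) / exact     # the quantity an error model learns
```

Logging `(query features, rel_err)` pairs over many historical queries is what would feed the learned error model described in the abstract.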
2101.08177
Ximing Qiao
Ximing Qiao, Yuhua Bai, Siping Hu, Ang Li, Yiran Chen, Hai Li
On Provable Backdoor Defense in Collaborative Learning
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
As collaborative learning allows joint training of a model using multiple sources of data, the security problem has been a central concern. Malicious users can upload poisoned data to prevent the model's convergence or inject hidden backdoors. The so-called backdoor attacks are especially difficult to detect since the model behaves normally on standard test data but gives wrong outputs when triggered by certain backdoor keys. Although Byzantine-tolerant training algorithms provide convergence guarantees, provable defense against backdoor attacks remains largely unsolved. Methods based on randomized smoothing can only correct a small number of corrupted pixels or labels; methods based on subset aggregation cause a severe drop in classification accuracy due to low data utilization. We propose a novel framework that generalizes existing subset aggregation methods. The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem. We derive the theoretical bound of the data utilization ratio and provide an optimal code construction. Experiments on non-IID versions of MNIST and CIFAR-10 show that our method with optimal codes significantly outperforms baselines using non-overlapping partition and random selection. Additionally, integration with existing coding theory results shows that special codes can track the location of the attackers. Such capability provides new countermeasures to backdoor attacks.
[ { "created": "Tue, 19 Jan 2021 14:39:32 GMT", "version": "v1" } ]
2021-01-21
[ [ "Qiao", "Ximing", "" ], [ "Bai", "Yuhua", "" ], [ "Hu", "Siping", "" ], [ "Li", "Ang", "" ], [ "Chen", "Yiran", "" ], [ "Li", "Hai", "" ] ]
As collaborative learning allows joint training of a model using multiple sources of data, the security problem has been a central concern. Malicious users can upload poisoned data to prevent the model's convergence or inject hidden backdoors. The so-called backdoor attacks are especially difficult to detect since the model behaves normally on standard test data but gives wrong outputs when triggered by certain backdoor keys. Although Byzantine-tolerant training algorithms provide convergence guarantees, provable defense against backdoor attacks remains largely unsolved. Methods based on randomized smoothing can only correct a small number of corrupted pixels or labels; methods based on subset aggregation cause a severe drop in classification accuracy due to low data utilization. We propose a novel framework that generalizes existing subset aggregation methods. The framework shows that the subset selection process, a deciding factor for subset aggregation methods, can be viewed as a code design problem. We derive the theoretical bound of the data utilization ratio and provide an optimal code construction. Experiments on non-IID versions of MNIST and CIFAR-10 show that our method with optimal codes significantly outperforms baselines using non-overlapping partition and random selection. Additionally, integration with existing coding theory results shows that special codes can track the location of the attackers. Such capability provides new countermeasures to backdoor attacks.
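The subset-aggregation idea that the framework generalizes can be shown in miniature: train one model per disjoint data subset and take a majority vote, so an attacker who poisons a minority of subsets cannot flip the aggregate prediction. The trivial "models" below just memorize their subset's majority label; the paper's code-design construction is not reproduced here.

```python
import numpy as np

n_subsets = 9
true_label = 1

# Each of the 9 disjoint subsets holds 20 labels for the same query point.
subset_labels = [np.full(20, true_label) for _ in range(n_subsets)]

# An attacker fully poisons the data of 3 of the 9 subsets.
for k in range(3):
    subset_labels[k][:] = 0

# Each subset "model" predicts its own subset's majority label.
models = [int(np.bincount(lbls).argmax()) for lbls in subset_labels]

# Majority vote across subset models tolerates the poisoned minority:
# 6 clean votes for class 1 outvote 3 poisoned votes for class 0.
prediction = int(np.bincount(models).argmax())
```

The code-design view then asks how to overlap subsets (rather than partition them disjointly) so that data utilization rises without letting any attacker majority form.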
1805.00699
Krishna Chaitanya Kosaraju
Krishna Chaitanya Kosaraju and Shravan Mohan and Ramkrishna Pasumarthy
On the primal-dual dynamics of Support Vector Machines
To appear in MTNS 2018
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this paper is to study the convergence of the primal-dual dynamics pertaining to Support Vector Machines (SVMs). The optimization routine used for determining an SVM for classification is first formulated as a dynamical system. The dynamical system is constructed such that its equilibrium point is the solution to the SVM optimization problem. It is then shown, using passivity theory, that the dynamical system is globally asymptotically stable. In other words, the dynamical system converges to the optimal solution asymptotically, irrespective of the initial condition. Simulations and computations are provided for corroboration.
[ { "created": "Wed, 2 May 2018 09:55:21 GMT", "version": "v1" } ]
2018-05-03
[ [ "Kosaraju", "Krishna Chaitanya", "" ], [ "Mohan", "Shravan", "" ], [ "Pasumarthy", "Ramkrishna", "" ] ]
The aim of this paper is to study the convergence of the primal-dual dynamics pertaining to Support Vector Machines (SVMs). The optimization routine used for determining an SVM for classification is first formulated as a dynamical system. The dynamical system is constructed such that its equilibrium point is the solution to the SVM optimization problem. It is then shown, using passivity theory, that the dynamical system is globally asymptotically stable. In other words, the dynamical system converges to the optimal solution asymptotically, irrespective of the initial condition. Simulations and computations are provided for corroboration.
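The primal-dual dynamics can be made concrete on a toy hard-margin problem: Euler-discretized gradient descent on the primal variable and projected gradient ascent on the multipliers, whose equilibrium is the KKT point of min ½w² subject to y_i w x_i ≥ 1. This scalar two-point example is an illustration of the kind of dynamics studied, not the paper's own simulations.

```python
import numpy as np

# Two 1-D points: x = +2 labeled +1, x = -2 labeled -1.
# The hard-margin optimum of  min ½w²  s.t.  y_i w x_i >= 1  is w* = 0.5.
X = np.array([2.0, -2.0])
y = np.array([1.0, -1.0])

w, lam = 0.0, np.zeros(2)
eta = 0.05                      # Euler step size for the dynamics

for _ in range(5000):
    # Primal descent on the Lagrangian L = ½w² - Σ λ_i (y_i w x_i - 1).
    dw = w - np.sum(lam * y * X)
    # Dual ascent with projection onto λ >= 0.
    lam = np.maximum(0.0, lam + eta * (1.0 - y * w * X))
    w = w - eta * dw

margins = y * w * X             # both constraints active at equilibrium
```

At the equilibrium the margins sit on the boundary (y_i w x_i = 1) and the stationarity condition w = Σ λ_i y_i x_i fixes the multipliers, matching the KKT conditions the dynamical system is built to reach.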
2406.04835
Zeyu Wan
Shiyi Chen, Zeyu Wan, Shiyang Yan, Chun Zhang, Weiyi Zhang, Qiang Li, Debing Zhang, Fasih Ud Din Farrukh
SLR: Learning Quadruped Locomotion without Privileged Information
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Traditional reinforcement learning control for quadruped robots often relies on privileged information, demanding meticulous selection and precise estimation, thereby imposing constraints on the development process. This work proposes a Self-learning Latent Representation (SLR) method, which achieves high-performance control policy learning without the need for privileged information. To enhance the credibility of our proposed method's evaluation, SLR is compared with open-source code repositories of state-of-the-art algorithms, retaining the original authors' configuration parameters. Across four repositories, SLR consistently outperforms the reference results. Ultimately, the trained policy and encoder empower the quadruped robot to navigate steps, climb stairs, ascend rocks, and traverse various challenging terrains. Robot experiment videos are at https://11chens.github.io/SLR/
[ { "created": "Fri, 7 Jun 2024 11:06:26 GMT", "version": "v1" } ]
2024-06-10
[ [ "Chen", "Shiyi", "" ], [ "Wan", "Zeyu", "" ], [ "Yan", "Shiyang", "" ], [ "Zhang", "Chun", "" ], [ "Zhang", "Weiyi", "" ], [ "Li", "Qiang", "" ], [ "Zhang", "Debing", "" ], [ "Farrukh", "Fasih Ud Din", "" ] ]
Traditional reinforcement learning control for quadruped robots often relies on privileged information, demanding meticulous selection and precise estimation, thereby imposing constraints on the development process. This work proposes a Self-learning Latent Representation (SLR) method, which achieves high-performance control policy learning without the need for privileged information. To enhance the credibility of our proposed method's evaluation, SLR is compared with open-source code repositories of state-of-the-art algorithms, retaining the original authors' configuration parameters. Across four repositories, SLR consistently outperforms the reference results. Ultimately, the trained policy and encoder empower the quadruped robot to navigate steps, climb stairs, ascend rocks, and traverse various challenging terrains. Robot experiment videos are at https://11chens.github.io/SLR/
2304.01080
Alejandro Linares-Barranco
Antonio Rios-Navarro, Enrique Pi\~nero-Fuentes, Salvador Canas-Moreno, Aqib Javed, Jin Harkin, Alejandro Linares-Barranco
LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading
Submitted to ISCAS2023, 4 pages, plus references, github link provided
null
null
null
cs.SD cs.RO eess.AS
http://creativecommons.org/licenses/by/4.0/
This paper presents a sensory fusion neuromorphic dataset collected with precise temporal synchronization using a set of Address-Event-Representation sensors and tools. The target application is the lip reading of several keywords for different machine learning applications, such as digits, robotic commands, and auxiliary rich phonetic short words. The dataset is enlarged with a spiking version of an audio-visual lip reading dataset collected with frame-based cameras. LIPSFUS is publicly available and it has been validated with a deep learning architecture for audio and visual classification. It is intended for sensory fusion architectures based on both artificial and spiking neural network algorithms.
[ { "created": "Tue, 28 Mar 2023 12:27:43 GMT", "version": "v1" } ]
2023-04-04
[ [ "Rios-Navarro", "Antonio", "" ], [ "Piñero-Fuentes", "Enrique", "" ], [ "Canas-Moreno", "Salvador", "" ], [ "Javed", "Aqib", "" ], [ "Harkin", "Jin", "" ], [ "Linares-Barranco", "Alejandro", "" ] ]
This paper presents a sensory fusion neuromorphic dataset collected with precise temporal synchronization using a set of Address-Event-Representation sensors and tools. The target application is the lip reading of several keywords for different machine learning applications, such as digits, robotic commands, and auxiliary rich phonetic short words. The dataset is enlarged with a spiking version of an audio-visual lip reading dataset collected with frame-based cameras. LIPSFUS is publicly available and it has been validated with a deep learning architecture for audio and visual classification. It is intended for sensory fusion architectures based on both artificial and spiking neural network algorithms.
2303.10307
Jianye Yi
Jianye Yi and Xiaopin Zhong and Weixiang Liu and Wenxuan Zhu and Zongze Wu and Yuanlong Deng
Edge-aware Plug-and-play Scheme for Semantic Segmentation
8 pages, 5 figures
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic segmentation is a classic and fundamental computer vision problem dedicated to assigning each pixel its corresponding class. Some recent methods introduce edge-based information to improve segmentation performance. However, these methods are specific and limited to certain network architectures, and they cannot be transferred to other models or tasks. Therefore, we propose an abstract and universal edge supervision method called the Edge-aware Plug-and-play Scheme (EPS), which can be easily and quickly applied to any semantic segmentation model. The core idea is edge-width/thickness-preserving guidance for semantic segmentation. The EPS first extracts the Edge Ground Truth (Edge GT) with a predefined edge thickness from the training data; then, for any network architecture, it directly copies the decoder head for the auxiliary task with Edge GT supervision. To ensure the edge thickness is preserved consistently, we design a new boundary-based loss, called the Polar Hausdorff (PH) Loss, for the auxiliary supervision. We verify the effectiveness of our EPS on the Cityscapes dataset using 22 models. The experimental results indicate that the proposed method can be seamlessly integrated into any state-of-the-art (SOTA) model with zero modification, resulting in a promising enhancement of segmentation performance.
[ { "created": "Sat, 18 Mar 2023 02:17:37 GMT", "version": "v1" } ]
2023-03-21
[ [ "Yi", "Jianye", "" ], [ "Zhong", "Xiaopin", "" ], [ "Liu", "Weixiang", "" ], [ "Zhu", "Wenxuan", "" ], [ "Wu", "Zongze", "" ], [ "Deng", "Yuanlong", "" ] ]
Semantic segmentation is a classic and fundamental computer vision problem dedicated to assigning each pixel its corresponding class. Some recent methods introduce edge-based information to improve segmentation performance. However, these methods are specific and limited to certain network architectures, and they cannot be transferred to other models or tasks. Therefore, we propose an abstract and universal edge supervision method called the Edge-aware Plug-and-play Scheme (EPS), which can be easily and quickly applied to any semantic segmentation model. The core idea is edge-width/thickness-preserving guidance for semantic segmentation. The EPS first extracts the Edge Ground Truth (Edge GT) with a predefined edge thickness from the training data; then, for any network architecture, it directly copies the decoder head for the auxiliary task with Edge GT supervision. To ensure the edge thickness is preserved consistently, we design a new boundary-based loss, called the Polar Hausdorff (PH) Loss, for the auxiliary supervision. We verify the effectiveness of our EPS on the Cityscapes dataset using 22 models. The experimental results indicate that the proposed method can be seamlessly integrated into any state-of-the-art (SOTA) model with zero modification, resulting in a promising enhancement of segmentation performance.
1207.6445
Shang-Pin Sheng
Shang-Pin Sheng, Mingyan Liu
Profit Incentive In A Secondary Spectrum Market: A Contract Design Approach
12 pages, 9 figures
null
null
null
cs.CE cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we formulate a contract design problem where a primary license holder wishes to profit from its excess spectrum capacity by selling it to potential secondary users/buyers. It needs to determine how to optimally price the excess spectrum so as to maximize its profit, knowing that this excess capacity is stochastic in nature, does not come with exclusive access, and cannot provide deterministic service guarantees to a buyer. At the same time, buyers are of different {\em types}, characterized by different communication needs, tolerance for the channel uncertainty, and so on, all of which are a buyer's private information. The license holder must then try to design different contracts catered to different types of buyers in order to maximize its profit. We address this problem by adopting as a reference a traditional spectrum market where the buyer can purchase exclusive access with fixed/deterministic guarantees. We fully characterize the optimal solution in the cases where there is a single buyer type, and when multiple types of buyers share the same, known channel condition as a result of the primary user activity. In the most general case we construct an algorithm that generates a set of contracts in a computationally efficient manner, and show that this set is optimal when the buyer types satisfy a monotonicity condition.
[ { "created": "Fri, 27 Jul 2012 04:25:02 GMT", "version": "v1" } ]
2012-07-30
[ [ "Sheng", "Shang-Pin", "" ], [ "Liu", "Mingyan", "" ] ]
In this paper we formulate a contract design problem where a primary license holder wishes to profit from its excess spectrum capacity by selling it to potential secondary users/buyers. It needs to determine how to optimally price the excess spectrum so as to maximize its profit, knowing that this excess capacity is stochastic in nature, does not come with exclusive access, and cannot provide deterministic service guarantees to a buyer. At the same time, buyers are of different {\em types}, characterized by different communication needs, tolerance for the channel uncertainty, and so on, all of which are a buyer's private information. The license holder must then try to design different contracts catered to different types of buyers in order to maximize its profit. We address this problem by adopting as a reference a traditional spectrum market where the buyer can purchase exclusive access with fixed/deterministic guarantees. We fully characterize the optimal solution in the cases where there is a single buyer type, and when multiple types of buyers share the same, known channel condition as a result of the primary user activity. In the most general case we construct an algorithm that generates a set of contracts in a computationally efficient manner, and show that this set is optimal when the buyer types satisfy a monotonicity condition.
2007.09490
Mohammadreza Baharani
Mohammadreza Baharani, Ushma Sunil, Kaustubh Manohar, Steven Furgurson, Hamed Tabkhi
DeepDive: An Integrative Algorithm/Architecture Co-Design for Deep Separable Convolutional Neural Networks
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Separable Convolutional Neural Networks (DSCNNs) have become the emerging paradigm by offering modular networks with structural sparsity in order to achieve higher accuracy with relatively lower operations and parameters. However, there is a lack of customized architectures that can provide flexible solutions fitting the sparsity of DSCNNs. This paper introduces DeepDive, a fully-functional, vertical co-design framework for power-efficient implementation of DSCNNs on edge FPGAs. DeepDive's architecture supports crucial heterogeneous Compute Units (CUs) to fully support DSCNNs with various convolutional operators interconnected with structural sparsity. It offers FPGA-aware training and online quantization combined with modular synthesizable C++ CUs customized for DSCNNs. The execution results on Xilinx's ZCU102 FPGA board demonstrate 47.4 and 233.3 FPS/Watt for MobileNet-V2 and a compact version of EfficientNet, respectively, as two state-of-the-art depthwise separable CNNs. These comparisons showcase how DeepDive improves FPS/Watt by 2.2$\times$ and 1.51$\times$ over Jetson Nano high and low power modes, respectively. It also improves FPS/Watt by about 2.27$\times$ and 37.25$\times$ over two other FPGA implementations. The DeepDive output for MobileNetV2 is available at https://github.com/TeCSAR-UNCC/DeepDive.
[ { "created": "Sat, 18 Jul 2020 17:50:01 GMT", "version": "v1" } ]
2020-07-21
[ [ "Baharani", "Mohammadreza", "" ], [ "Sunil", "Ushma", "" ], [ "Manohar", "Kaustubh", "" ], [ "Furgurson", "Steven", "" ], [ "Tabkhi", "Hamed", "" ] ]
Deep Separable Convolutional Neural Networks (DSCNNs) have become the emerging paradigm by offering modular networks with structural sparsity in order to achieve higher accuracy with relatively lower operations and parameters. However, there is a lack of customized architectures that can provide flexible solutions fitting the sparsity of DSCNNs. This paper introduces DeepDive, a fully-functional, vertical co-design framework for power-efficient implementation of DSCNNs on edge FPGAs. DeepDive's architecture supports crucial heterogeneous Compute Units (CUs) to fully support DSCNNs with various convolutional operators interconnected with structural sparsity. It offers FPGA-aware training and online quantization combined with modular synthesizable C++ CUs customized for DSCNNs. The execution results on Xilinx's ZCU102 FPGA board demonstrate 47.4 and 233.3 FPS/Watt for MobileNet-V2 and a compact version of EfficientNet, respectively, as two state-of-the-art depthwise separable CNNs. These comparisons showcase how DeepDive improves FPS/Watt by 2.2$\times$ and 1.51$\times$ over Jetson Nano high and low power modes, respectively. It also improves FPS/Watt by about 2.27$\times$ and 37.25$\times$ over two other FPGA implementations. The DeepDive output for MobileNetV2 is available at https://github.com/TeCSAR-UNCC/DeepDive.
2111.13949
Prateek Chanda
Prateek Chanda, Malay Bhattacharya
Distributed Anomaly Detection in Edge Streams using Frequency based Sketch Datastructures
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Often, logs hosted in large data centers represent network traffic data over a long period of time. For instance, network traffic data logged via a TCP dump packet sniffer (as considered in the 1998 DARPA intrusion attack) included network packets being transmitted between computers. While an online framework is necessary for detecting anomalous or suspicious network activities, such as denial-of-service attacks or unauthorized usage, in real time, such large data centers often log data over long periods of time (e.g., TCP dump), and hence an offline framework is much more suitable in such scenarios. Given a network log history of edges from a dynamic graph, how can we assign anomaly scores to individual edges, indicating suspicious events, with high accuracy, using only constant memory and less time than state-of-the-art methods? We propose MDistrib and its variants, which provide (a) faster detection of anomalous events via distributed processing with GPU support compared to other approaches, (b) better false positive guarantees than state-of-the-art methods given fixed space, and (c) collision-aware anomaly scoring for better accuracy than state-of-the-art approaches. We describe experiments confirming that MDistrib is more efficient than prior work.
[ { "created": "Sat, 27 Nov 2021 17:29:10 GMT", "version": "v1" } ]
2022-01-07
[ [ "Chanda", "Prateek", "" ], [ "Bhattacharya", "Malay", "" ] ]
Often, logs hosted in large data centers represent network traffic data over a long period of time. For instance, network traffic data logged via a TCP dump packet sniffer (as considered in the 1998 DARPA intrusion attack) included network packets being transmitted between computers. While an online framework is necessary for detecting anomalous or suspicious network activities, such as denial-of-service attacks or unauthorized usage, in real time, such large data centers often log data over long periods of time (e.g., TCP dump), and hence an offline framework is much more suitable in such scenarios. Given a network log history of edges from a dynamic graph, how can we assign anomaly scores to individual edges, indicating suspicious events, with high accuracy, using only constant memory and less time than state-of-the-art methods? We propose MDistrib and its variants, which provide (a) faster detection of anomalous events via distributed processing with GPU support compared to other approaches, (b) better false positive guarantees than state-of-the-art methods given fixed space, and (c) collision-aware anomaly scoring for better accuracy than state-of-the-art approaches. We describe experiments confirming that MDistrib is more efficient than prior work.
1506.08238
Wenda Li
Wenda Li, Grant Olney Passmore and Lawrence C. Paulson
Deciding Univariate Polynomial Problems Using Untrusted Certificates in Isabelle/HOL
24 pages
Journal of Automated Reasoning, 2017
10.1007/s10817-017-9424-6
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a proof procedure for univariate real polynomial problems in Isabelle/HOL. The core mathematics of our procedure is based on univariate cylindrical algebraic decomposition. We follow the approach of untrusted certificates, separating solving from verifying: efficient external tools perform expensive real algebraic computations, producing evidence that is formally checked within Isabelle's logic. This allows us to exploit highly-tuned computer algebra systems like Mathematica to guide our procedure without impacting the correctness of its results. We present experiments demonstrating the efficacy of this approach, in many cases yielding orders of magnitude improvements over previous methods.
[ { "created": "Fri, 26 Jun 2015 23:57:46 GMT", "version": "v1" }, { "created": "Tue, 10 Apr 2018 22:19:07 GMT", "version": "v2" } ]
2018-04-12
[ [ "Li", "Wenda", "" ], [ "Passmore", "Grant Olney", "" ], [ "Paulson", "Lawrence C.", "" ] ]
We present a proof procedure for univariate real polynomial problems in Isabelle/HOL. The core mathematics of our procedure is based on univariate cylindrical algebraic decomposition. We follow the approach of untrusted certificates, separating solving from verifying: efficient external tools perform expensive real algebraic computations, producing evidence that is formally checked within Isabelle's logic. This allows us to exploit highly-tuned computer algebra systems like Mathematica to guide our procedure without impacting the correctness of its results. We present experiments demonstrating the efficacy of this approach, in many cases yielding orders of magnitude improvements over previous methods.
1403.3972
Vojt\v{e}ch Vorel
Vojt\v{e}ch Vorel
Subset Synchronization and Careful Synchronization of Binary Finite Automata
An extended version of the paper "Subset Synchronization of Transitive Automata" presented at AFL 2014
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a strongly exponential lower bound that applies both to the subset synchronization threshold for binary deterministic automata and to the careful synchronization threshold for binary partial automata. In the latter form, the result finishes the research initiated by Martyugin (2013). Moreover, we show that both thresholds remain strongly exponential even if restricted to strongly connected binary automata. In addition, we apply our methods to computational complexity. Existence of a subset reset word is known to be PSPACE-complete; we show that this holds even under the restriction to strongly connected binary automata. The results apply also to the corresponding thresholds in two more general settings: D1- and D3-directable nondeterministic automata and composition sequences over finite domains.
[ { "created": "Sun, 16 Mar 2014 23:29:14 GMT", "version": "v1" }, { "created": "Thu, 22 May 2014 02:15:35 GMT", "version": "v2" }, { "created": "Sat, 27 Sep 2014 23:06:39 GMT", "version": "v3" }, { "created": "Mon, 15 Feb 2016 01:21:17 GMT", "version": "v4" } ]
2016-02-16
[ [ "Vorel", "Vojtěch", "" ] ]
We present a strongly exponential lower bound that applies both to the subset synchronization threshold for binary deterministic automata and to the careful synchronization threshold for binary partial automata. In the latter form, the result finishes the research initiated by Martyugin (2013). Moreover, we show that both thresholds remain strongly exponential even if restricted to strongly connected binary automata. In addition, we apply our methods to computational complexity. Existence of a subset reset word is known to be PSPACE-complete; we show that this holds even under the restriction to strongly connected binary automata. The results apply also to the corresponding thresholds in two more general settings: D1- and D3-directable nondeterministic automata and composition sequences over finite domains.
2107.04393
Nikolaj Ignatieff Schwartzbach
Mathias Hall-Andersen and Nikolaj I. Schwartzbach
Game theory on the blockchain: a model for games with smart contracts
null
SAGT 2021: 14th International Symposium on Algorithmic Game Theory
10.1007/978-3-030-85947-3_11
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a model for games in which the players have shared access to a blockchain that allows them to deploy smart contracts to act on their behalf. This changes fundamental game-theoretic assumptions about rationality since a contract can commit a player to act irrationally in specific subgames, making credible otherwise non-credible threats. This is further complicated by considering the interaction between multiple contracts which can reason about each other. This changes the nature of the game in a nontrivial way as choosing which contract to play can itself be considered a move in the game. Our model generalizes known notions of equilibria, with a single contract being equivalent to a Stackelberg equilibrium, and two contracts being equivalent to a reverse Stackelberg equilibrium. We prove a number of bounds on the complexity of computing SPE in such games with smart contracts. We show that computing an SPE is $\textsf{PSPACE}$-hard in the general case. Specifically, in games with $k$ contracts, we show that computing an SPE is $\Sigma_k^\textsf{P}$-hard for games of imperfect information. We show that computing an SPE remains $\textsf{PSPACE}$-hard in games of perfect information if we allow for an unbounded number of contracts. We give an algorithm for computing an SPE in two-contract games of perfect information that runs in time $O(m\ell)$ where $m$ is the size of the game tree and $\ell$ is the number of terminal nodes. Finally, we conjecture the problem to be $\textsf{NP}$-complete for three contracts.
[ { "created": "Fri, 9 Jul 2021 12:43:04 GMT", "version": "v1" } ]
2023-04-05
[ [ "Hall-Andersen", "Mathias", "" ], [ "Schwartzbach", "Nikolaj I.", "" ] ]
We propose a model for games in which the players have shared access to a blockchain that allows them to deploy smart contracts to act on their behalf. This changes fundamental game-theoretic assumptions about rationality since a contract can commit a player to act irrationally in specific subgames, making credible otherwise non-credible threats. This is further complicated by considering the interaction between multiple contracts which can reason about each other. This changes the nature of the game in a nontrivial way as choosing which contract to play can itself be considered a move in the game. Our model generalizes known notions of equilibria, with a single contract being equivalent to a Stackelberg equilibrium, and two contracts being equivalent to a reverse Stackelberg equilibrium. We prove a number of bounds on the complexity of computing SPE in such games with smart contracts. We show that computing an SPE is $\textsf{PSPACE}$-hard in the general case. Specifically, in games with $k$ contracts, we show that computing an SPE is $\Sigma_k^\textsf{P}$-hard for games of imperfect information. We show that computing an SPE remains $\textsf{PSPACE}$-hard in games of perfect information if we allow for an unbounded number of contracts. We give an algorithm for computing an SPE in two-contract games of perfect information that runs in time $O(m\ell)$ where $m$ is the size of the game tree and $\ell$ is the number of terminal nodes. Finally, we conjecture the problem to be $\textsf{NP}$-complete for three contracts.
1102.4528
Nilo Serpa Costa
Nilo Serpa and Jose Roberto Steiner
Modelling the Dynamics of the Work-Employment System by Predator-Prey Interactions
17 pages, 11 figures and original formalism
null
null
null
cs.CE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The broad application range of predator-prey modelling enabled us to apply it to represent the dynamics of the work-employment system. For the adopted period, we conclude that this dynamics is chaotic at the beginning of the time series and tends toward less perturbed states as time goes by, due to public policies and hidden intrinsic system features. The basic Lotka-Volterra approach was revised and adapted to the reality of the study. The final aim is to provide managers with generalized theoretical elements that allow a more accurate understanding of the behavior of the work-employment system.
[ { "created": "Tue, 22 Feb 2011 15:01:06 GMT", "version": "v1" }, { "created": "Wed, 23 Feb 2011 11:50:29 GMT", "version": "v2" }, { "created": "Thu, 17 Mar 2011 20:19:37 GMT", "version": "v3" } ]
2011-03-21
[ [ "Serpa", "Nilo", "" ], [ "Steiner", "Jose Roberto", "" ] ]
The broad application range of predator-prey modelling enabled us to apply it to represent the dynamics of the work-employment system. For the adopted period, we conclude that this dynamics is chaotic at the beginning of the time series and tends toward less perturbed states as time goes by, due to public policies and hidden intrinsic system features. The basic Lotka-Volterra approach was revised and adapted to the reality of the study. The final aim is to provide managers with generalized theoretical elements that allow a more accurate understanding of the behavior of the work-employment system.
2011.11155
Hao Zhu
Hao Zhu, Yang Yuan, Guosheng Hu, Xiang Wu, Neil Robertson
Imbalance Robust Softmax for Deep Embedding Learning
has been accepted by ACCV 2020
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep embedding learning is expected to learn a metric space in which features have smaller maximal intra-class distance than minimal inter-class distance. In recent years, one research focus is to solve the open-set problem by discriminative deep embedding learning in the field of face recognition (FR) and person re-identification (re-ID). Apart from the open-set problem, we find that imbalanced training data is another main factor causing the performance degradation of FR and re-ID, and data imbalance widely exists in real applications. However, very little research explores why and how data imbalance influences the performance of FR and re-ID with softmax or its variants. In this work, we deeply investigate data imbalance from the perspective of neural network optimisation and the feature distribution of softmax. We find that one main reason for the performance degradation caused by data imbalance is that the weights (from the penultimate fully-connected layer) are far from their class centers in feature space. Based on this investigation, we propose a unified framework, Imbalance-Robust Softmax (IR-Softmax), which can simultaneously solve the open-set problem and reduce the influence of data imbalance. IR-Softmax can generalise to any softmax and its variants (which are discriminative for the open-set problem) by directly setting the weights as their class centers, naturally solving the data imbalance problem. In this work, we explicitly re-formulate two discriminative softmax variants (A-Softmax and AM-Softmax) under the framework of IR-Softmax. We conduct extensive experiments on FR databases (LFW, MegaFace) and re-ID databases (Market-1501, Duke), and IR-Softmax outperforms many state-of-the-art methods.
[ { "created": "Mon, 23 Nov 2020 00:43:07 GMT", "version": "v1" } ]
2020-11-24
[ [ "Zhu", "Hao", "" ], [ "Yuan", "Yang", "" ], [ "Hu", "Guosheng", "" ], [ "Wu", "Xiang", "" ], [ "Robertson", "Neil", "" ] ]
Deep embedding learning is expected to learn a metric space in which features have smaller maximal intra-class distance than minimal inter-class distance. In recent years, one research focus is to solve the open-set problem by discriminative deep embedding learning in the field of face recognition (FR) and person re-identification (re-ID). Apart from the open-set problem, we find that imbalanced training data is another main factor causing the performance degradation of FR and re-ID, and data imbalance widely exists in real applications. However, very little research explores why and how data imbalance influences the performance of FR and re-ID with softmax or its variants. In this work, we deeply investigate data imbalance from the perspective of neural network optimisation and the feature distribution of softmax. We find that one main reason for the performance degradation caused by data imbalance is that the weights (from the penultimate fully-connected layer) are far from their class centers in feature space. Based on this investigation, we propose a unified framework, Imbalance-Robust Softmax (IR-Softmax), which can simultaneously solve the open-set problem and reduce the influence of data imbalance. IR-Softmax can generalise to any softmax and its variants (which are discriminative for the open-set problem) by directly setting the weights as their class centers, naturally solving the data imbalance problem. In this work, we explicitly re-formulate two discriminative softmax variants (A-Softmax and AM-Softmax) under the framework of IR-Softmax. We conduct extensive experiments on FR databases (LFW, MegaFace) and re-ID databases (Market-1501, Duke), and IR-Softmax outperforms many state-of-the-art methods.
1108.3629
EPTCS
Gabriele Fici (Laboratoire I3S, CNRS and Universit\'e de Nice-Sophia Antipolis)
A Classification of Trapezoidal Words
In Proceedings WORDS 2011, arXiv:1108.3412
EPTCS 63, 2011, pp. 129-137
10.4204/EPTCS.63.18
null
cs.FL math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trapezoidal words are finite words having at most n+1 distinct factors of length n, for every n>=0. They encompass finite Sturmian words. We distinguish trapezoidal words into two disjoint subsets: open and closed trapezoidal words. A trapezoidal word is closed if its longest repeated prefix has exactly two occurrences in the word, the second one being a suffix of the word. Otherwise it is open. We show that open trapezoidal words are all primitive and that closed trapezoidal words are all Sturmian. We then show that trapezoidal palindromes are closed (and therefore Sturmian). This allows us to characterize the special factors of Sturmian palindromes. We end with several open problems.
[ { "created": "Thu, 18 Aug 2011 03:53:45 GMT", "version": "v1" } ]
2011-08-19
[ [ "Fici", "Gabriele", "", "Laboratoire I3S, CNRS and Université de Nice-Sophia\n Antipolis" ] ]
Trapezoidal words are finite words having at most n+1 distinct factors of length n, for every n>=0. They encompass finite Sturmian words. We distinguish trapezoidal words into two disjoint subsets: open and closed trapezoidal words. A trapezoidal word is closed if its longest repeated prefix has exactly two occurrences in the word, the second one being a suffix of the word. Otherwise it is open. We show that open trapezoidal words are all primitive and that closed trapezoidal words are all Sturmian. We then show that trapezoidal palindromes are closed (and therefore Sturmian). This allows us to characterize the special factors of Sturmian palindromes. We end with several open problems.
2102.00314
Gal Vardi
Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir
Size and Depth Separation in Approximating Benign Functions with Neural Networks
Edits after review + changing the terminology from "natural functions" to "benign functions"
null
null
null
cs.LG cs.CC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When studying the expressive power of neural networks, a main challenge is to understand how the size and depth of the network affect its ability to approximate real functions. However, not all functions are interesting from a practical viewpoint: functions of interest usually have a polynomially-bounded Lipschitz constant, and can be computed efficiently. We call functions that satisfy these conditions "benign", and explore the benefits of size and depth for approximation of benign functions with ReLU networks. As we show, this problem is more challenging than the corresponding problem for non-benign functions. We give barriers to showing depth-lower-bounds: Proving existence of a benign function that cannot be approximated by polynomial-size networks of depth $4$ would settle longstanding open problems in computational complexity. It implies that beyond depth $4$ there is a barrier to showing depth-separation for benign functions, even between networks of constant depth and networks of nonconstant depth. We also study size-separation, namely, whether there are benign functions that can be approximated with networks of size $O(s(d))$, but not with networks of size $O(s'(d))$. We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but also show an explicit benign function, that can be approximated with networks of size $O(d)$ and not with networks of size $o(d/\log d)$. For approximation in $L_\infty$ we achieve such separation already between size $O(d)$ and size $o(d)$. Moreover, we show superpolynomial size lower bounds and barriers to such lower bounds, depending on the assumptions on the function. Our size-separation results rely on an analysis of size lower bounds for Boolean functions, which is of independent interest: We show linear size lower bounds for computing explicit Boolean functions with neural networks and threshold circuits.
[ { "created": "Sat, 30 Jan 2021 21:30:11 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2021 01:51:12 GMT", "version": "v2" }, { "created": "Mon, 28 Jun 2021 20:34:40 GMT", "version": "v3" } ]
2021-06-30
[ [ "Vardi", "Gal", "" ], [ "Reichman", "Daniel", "" ], [ "Pitassi", "Toniann", "" ], [ "Shamir", "Ohad", "" ] ]
When studying the expressive power of neural networks, a main challenge is to understand how the size and depth of the network affect its ability to approximate real functions. However, not all functions are interesting from a practical viewpoint: functions of interest usually have a polynomially-bounded Lipschitz constant, and can be computed efficiently. We call functions that satisfy these conditions "benign", and explore the benefits of size and depth for approximation of benign functions with ReLU networks. As we show, this problem is more challenging than the corresponding problem for non-benign functions. We give barriers to showing depth-lower-bounds: Proving existence of a benign function that cannot be approximated by polynomial-size networks of depth $4$ would settle longstanding open problems in computational complexity. It implies that beyond depth $4$ there is a barrier to showing depth-separation for benign functions, even between networks of constant depth and networks of nonconstant depth. We also study size-separation, namely, whether there are benign functions that can be approximated with networks of size $O(s(d))$, but not with networks of size $O(s'(d))$. We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but also show an explicit benign function, that can be approximated with networks of size $O(d)$ and not with networks of size $o(d/\log d)$. For approximation in $L_\infty$ we achieve such separation already between size $O(d)$ and size $o(d)$. Moreover, we show superpolynomial size lower bounds and barriers to such lower bounds, depending on the assumptions on the function. Our size-separation results rely on an analysis of size lower bounds for Boolean functions, which is of independent interest: We show linear size lower bounds for computing explicit Boolean functions with neural networks and threshold circuits.
2305.07451
Ga\"etan Staquet
V\'eronique Bruy\`ere, Guillermo A. P\'erez, Ga\"etan Staquet, Frits W. Vaandrager
Automata with Timers
35 pages, 9 figures
Formal Modeling and Analysis of Timed Systems (FORMATS) 2023 pp. 33-49
10.1007/978-3-031-42626-1_3
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study properties of deterministic finite-state automata with timers, a subclass of timed automata proposed by Vaandrager et al. as a candidate for an efficiently learnable timed model. We first study the complexity of the configuration reachability problem for such automata and establish that it is PSPACE-complete. Then, as simultaneous timeouts (we call these races) can occur in timed runs of such automata, we study the problem of determining whether it is possible to modify the delays between the actions in a run in a way that avoids such races. The absence of races is important for modelling purposes and to streamline learning of automata with timers. We provide an effective characterization of when an automaton is race-avoiding and establish that the related decision problem is in 3EXP and PSPACE-hard.
[ { "created": "Fri, 12 May 2023 13:07:10 GMT", "version": "v1" } ]
2024-03-04
[ [ "Bruyère", "Véronique", "" ], [ "Pérez", "Guillermo A.", "" ], [ "Staquet", "Gaëtan", "" ], [ "Vaandrager", "Frits W.", "" ] ]
In this work, we study properties of deterministic finite-state automata with timers, a subclass of timed automata proposed by Vaandrager et al. as a candidate for an efficiently learnable timed model. We first study the complexity of the configuration reachability problem for such automata and establish that it is PSPACE-complete. Then, as simultaneous timeouts (we call these races) can occur in timed runs of such automata, we study the problem of determining whether it is possible to modify the delays between the actions in a run in a way that avoids such races. The absence of races is important for modelling purposes and to streamline learning of automata with timers. We provide an effective characterization of when an automaton is race-avoiding and establish that the related decision problem is in 3EXP and PSPACE-hard.
1509.02218
Mitsuo Yoshida
Mitsuo Yoshida, Yuki Arase, Takaaki Tsunoda, Mikio Yamamoto
Wikipedia Page View Reflects Web Search Trend
2 pages, 4 figures, The 2015 ACM Web Science Conference (WebSci15)
null
10.1145/2786451.2786495
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The frequency of a web search keyword generally reflects the degree of public interest in a particular subject matter. Search logs are therefore useful resources for trend analysis. However, access to search logs is typically restricted to search engine providers. In this paper, we investigate whether search frequency can be estimated from a different resource such as Wikipedia page views of open data. We found frequently searched keywords to have remarkably high correlations with Wikipedia page views. This suggests that Wikipedia page views can be an effective tool for determining popular global web search trends.
[ { "created": "Mon, 7 Sep 2015 22:59:28 GMT", "version": "v1" } ]
2015-09-09
[ [ "Yoshida", "Mitsuo", "" ], [ "Arase", "Yuki", "" ], [ "Tsunoda", "Takaaki", "" ], [ "Yamamoto", "Mikio", "" ] ]
The frequency of a web search keyword generally reflects the degree of public interest in a particular subject matter. Search logs are therefore useful resources for trend analysis. However, access to search logs is typically restricted to search engine providers. In this paper, we investigate whether search frequency can be estimated from a different resource such as Wikipedia page views of open data. We found frequently searched keywords to have remarkably high correlations with Wikipedia page views. This suggests that Wikipedia page views can be an effective tool for determining popular global web search trends.
2310.08586
Haoyi Zhu
Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, Tong He, Wanli Ouyang
PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
arXiv admin note: text overlap with arXiv:2301.00157
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In contrast to numerous NLP and 2D vision foundational models, learning a 3D foundational model poses considerably greater challenges. This is primarily due to the inherent data variability and diversity of downstream tasks. In this paper, we introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation, thereby establishing a pathway to 3D foundational models. Considering that informative 3D features should encode rich geometry and appearance cues that can be utilized to render realistic images, we propose to learn 3D representations by differentiable neural rendering. We train a 3D backbone with a devised volumetric neural renderer by comparing the rendered with the real images. Notably, our approach seamlessly integrates the learned 3D encoder into various downstream tasks. These tasks encompass not only high-level challenges such as 3D detection and segmentation but also low-level objectives like 3D reconstruction and image synthesis, spanning both indoor and outdoor scenarios. Besides, we also illustrate the capability of pre-training a 2D backbone using the proposed methodology, surpassing conventional pre-training methods by a large margin. For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness. Code and models are available at https://github.com/OpenGVLab/PonderV2.
[ { "created": "Thu, 12 Oct 2023 17:59:57 GMT", "version": "v1" }, { "created": "Fri, 13 Oct 2023 13:37:57 GMT", "version": "v2" }, { "created": "Tue, 27 Feb 2024 13:53:43 GMT", "version": "v3" } ]
2024-02-28
[ [ "Zhu", "Haoyi", "" ], [ "Yang", "Honghui", "" ], [ "Wu", "Xiaoyang", "" ], [ "Huang", "Di", "" ], [ "Zhang", "Sha", "" ], [ "He", "Xianglong", "" ], [ "Zhao", "Hengshuang", "" ], [ "Shen", "Chunhua", "" ], [ "Qiao", "Yu", "" ], [ "He", "Tong", "" ], [ "Ouyang", "Wanli", "" ] ]
In contrast to numerous NLP and 2D vision foundational models, learning a 3D foundational model poses considerably greater challenges. This is primarily due to the inherent data variability and diversity of downstream tasks. In this paper, we introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation, thereby establishing a pathway to 3D foundational models. Considering that informative 3D features should encode rich geometry and appearance cues that can be utilized to render realistic images, we propose to learn 3D representations by differentiable neural rendering. We train a 3D backbone with a devised volumetric neural renderer by comparing the rendered with the real images. Notably, our approach seamlessly integrates the learned 3D encoder into various downstream tasks. These tasks encompass not only high-level challenges such as 3D detection and segmentation but also low-level objectives like 3D reconstruction and image synthesis, spanning both indoor and outdoor scenarios. In addition, we illustrate the capability of pre-training a 2D backbone using the proposed methodology, surpassing conventional pre-training methods by a large margin. For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness. Code and models are available at https://github.com/OpenGVLab/PonderV2.
1701.00749
Christophe Van Gysel
Christophe Van Gysel and Evangelos Kanoulas and Maarten de Rijke
Pyndri: a Python Interface to the Indri Search Engine
ECIR2017. Proceedings of the 39th European Conference on Information Retrieval. 2017. The final publication will be available at Springer
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce pyndri, a Python interface to the Indri search engine. Pyndri allows access to Indri indexes from Python at two levels: (1) the dictionary and tokenized document collection, and (2) evaluating queries on the index. We hope that with the release of pyndri, we will stimulate reproducible, open and fast-paced IR research.
[ { "created": "Tue, 3 Jan 2017 17:17:34 GMT", "version": "v1" } ]
2017-01-04
[ [ "Van Gysel", "Christophe", "" ], [ "Kanoulas", "Evangelos", "" ], [ "de Rijke", "Maarten", "" ] ]
We introduce pyndri, a Python interface to the Indri search engine. Pyndri allows access to Indri indexes from Python at two levels: (1) the dictionary and tokenized document collection, and (2) evaluating queries on the index. We hope that with the release of pyndri, we will stimulate reproducible, open and fast-paced IR research.
1509.04438
Tobias Strau{\ss}
Tobias Strau{\ss}, Gundram Leifert, Tobias Gr\"uning, and Roger Labahn
Regular expressions for decoding of neural network outputs
21 pages, 8 (+2) figures, 2 tables
null
10.1016/j.neunet.2016.03.003
NN3600
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes a convenient tool for decoding the output of neural networks trained by Connectionist Temporal Classification (CTC) for handwritten text recognition. We use regular expressions to describe the complex structures expected in the writing. The corresponding finite automata are employed to build a decoder. We analyze theoretically which calculations are relevant and which can be avoided. A great speed-up results from an approximation. We conclude that the approximation most likely fails if the regular expression does not match the ground truth, which is not harmful for many applications since the low probability will even be underestimated. The proposed decoder is very efficient compared to other decoding methods. The variety of applications ranges from information retrieval to full text recognition. We refer to applications where we have integrated the proposed decoder successfully.
[ { "created": "Tue, 15 Sep 2015 08:24:37 GMT", "version": "v1" }, { "created": "Mon, 22 Feb 2016 09:35:53 GMT", "version": "v2" } ]
2016-03-31
[ [ "Strauß", "Tobias", "" ], [ "Leifert", "Gundram", "" ], [ "Grüning", "Tobias", "" ], [ "Labahn", "Roger", "" ] ]
This article proposes a convenient tool for decoding the output of neural networks trained by Connectionist Temporal Classification (CTC) for handwritten text recognition. We use regular expressions to describe the complex structures expected in the writing. The corresponding finite automata are employed to build a decoder. We analyze theoretically which calculations are relevant and which can be avoided. A great speed-up results from an approximation. We conclude that the approximation most likely fails if the regular expression does not match the ground truth, which is not harmful for many applications since the low probability will even be underestimated. The proposed decoder is very efficient compared to other decoding methods. The variety of applications ranges from information retrieval to full text recognition. We refer to applications where we have integrated the proposed decoder successfully.
1211.4771
Ganesh Sundaramoorthi
Ganesh Sundaramoorthi and Yanchao Yang
Matching Through Features and Features Through Matching
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses how to construct features for the problem of image correspondence; in particular, how to construct features so as to maintain the right level of invariance versus discriminability. We show that without additional prior knowledge of the 3D scene, the right tradeoff cannot be established in a pre-processing step on the images, as is typically done in most feature-based matching methods. However, given knowledge of the second image to match, the tradeoff between invariance and discriminability of features in the first image is less ambiguous. This suggests setting up feature extraction and matching as a joint estimation problem. We develop a possible mathematical framework and a possible computational algorithm, and we give an example demonstration of finding correspondences on images related by a scene that undergoes large 3D deformation of non-planar objects and a camera viewpoint change.
[ { "created": "Tue, 20 Nov 2012 15:15:56 GMT", "version": "v1" } ]
2012-11-21
[ [ "Sundaramoorthi", "Ganesh", "" ], [ "Yang", "Yanchao", "" ] ]
This paper addresses how to construct features for the problem of image correspondence; in particular, how to construct features so as to maintain the right level of invariance versus discriminability. We show that without additional prior knowledge of the 3D scene, the right tradeoff cannot be established in a pre-processing step on the images, as is typically done in most feature-based matching methods. However, given knowledge of the second image to match, the tradeoff between invariance and discriminability of features in the first image is less ambiguous. This suggests setting up feature extraction and matching as a joint estimation problem. We develop a possible mathematical framework and a possible computational algorithm, and we give an example demonstration of finding correspondences on images related by a scene that undergoes large 3D deformation of non-planar objects and a camera viewpoint change.
2012.02544
Jos\'e Carlos Aradillas Jaramillo
Jos\'e Carlos Aradillas, Juan Jos\'e Murillo-Fuentes, Pablo M. Olmos
Boosting offline handwritten text recognition in historical documents with few labeled lines
null
null
10.1109/ACCESS.2021.3082689
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of offline handwritten text recognition (HTR) in historical documents when few labeled samples are available and some of them contain errors in the training set. Three main contributions are developed. First, we analyze how to perform transfer learning (TL) from a massive database to a smaller historical database, analyzing which layers of the model need a fine-tuning process. Second, we analyze methods to efficiently combine TL and data augmentation (DA). Finally, an algorithm to mitigate the effects of incorrect labelings in the training set is proposed. The methods are analyzed over the ICFHR 2018 competition database, Washington and Parzival. Combining all these techniques, we demonstrate a remarkable reduction of CER (up to 6% in some cases) on the test set with little complexity overhead.
[ { "created": "Fri, 4 Dec 2020 11:59:35 GMT", "version": "v1" } ]
2021-05-25
[ [ "Aradillas", "José Carlos", "" ], [ "Murillo-Fuentes", "Juan José", "" ], [ "Olmos", "Pablo M.", "" ] ]
In this paper, we address the problem of offline handwritten text recognition (HTR) in historical documents when few labeled samples are available and some of them contain errors in the training set. Three main contributions are developed. First, we analyze how to perform transfer learning (TL) from a massive database to a smaller historical database, analyzing which layers of the model need a fine-tuning process. Second, we analyze methods to efficiently combine TL and data augmentation (DA). Finally, an algorithm to mitigate the effects of incorrect labelings in the training set is proposed. The methods are analyzed over the ICFHR 2018 competition database, Washington and Parzival. Combining all these techniques, we demonstrate a remarkable reduction of CER (up to 6% in some cases) on the test set with little complexity overhead.
1307.5838
Masoumeh Vali
Masoumeh Vali
Rotational Mutation Genetic Algorithm on optimization Problems
arXiv admin note: text overlap with arXiv:1307.5534, arXiv:1307.5679, arXiv:1307.5840
null
null
null
cs.NE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization problems nowadays have applications in most major fields, but their computation is a challenge: calculating the optimum point in high-dimensional spaces is very time consuming. In this paper, we present a new approach for the optimization of continuous functions with rotational mutation, called RM. The proposed algorithm starts from the point with the best fitness value, selected by an elitism mechanism. Then, rotational mutation is used to reach the optimal point. The RM algorithm is implemented within a GA (briefly, RMGA) and compared with other well-known algorithms: DE, PGA, Grefenstette and Eshelman [15, 16]. Numerical and simulation results show that RMGA reaches the global optimal point more decisively within fewer generations.
[ { "created": "Mon, 22 Jul 2013 12:09:59 GMT", "version": "v1" } ]
2013-07-24
[ [ "Vali", "Masoumeh", "" ] ]
Optimization problems nowadays have applications in most major fields, but their computation is a challenge: calculating the optimum point in high-dimensional spaces is very time consuming. In this paper, we present a new approach for the optimization of continuous functions with rotational mutation, called RM. The proposed algorithm starts from the point with the best fitness value, selected by an elitism mechanism. Then, rotational mutation is used to reach the optimal point. The RM algorithm is implemented within a GA (briefly, RMGA) and compared with other well-known algorithms: DE, PGA, Grefenstette and Eshelman [15, 16]. Numerical and simulation results show that RMGA reaches the global optimal point more decisively within fewer generations.
2302.07309
Hongyan Gu
Hongyan Gu, Chunxu Yang, Mohammad Haeri, Jing Wang, Shirley Tang, Wenzhong Yan, Shujin He, Christopher Kazu Williams, Shino Magaki, Xiang 'Anthony' Chen
Augmenting Pathologists with NaviPath: Design and Evaluation of a Human-AI Collaborative Navigation System
Accepted ACM CHI Conference on Human Factors in Computing Systems (CHI '23)
null
10.1145/3544548.3580694
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Artificial Intelligence (AI) brings advancements to support pathologists in navigating high-resolution tumor images to search for pathology patterns of interest. However, existing AI-assisted tools have not realized this promised potential due to a lack of insight into pathology and HCI considerations for pathologists' navigation workflows in practice. We first conducted a formative study with six medical professionals in pathology to capture their navigation strategies. By incorporating our observations along with the pathologists' domain knowledge, we designed NaviPath -- a human-AI collaborative navigation system. An evaluation study with 15 medical professionals in pathology indicated that: (i) compared to the manual navigation, participants saw more than twice the number of pathological patterns in unit time with NaviPath, and (ii) participants achieved higher precision and recall against the AI and the manual navigation on average. Further qualitative analysis revealed that navigation was more consistent with NaviPath, which can improve the overall examination quality.
[ { "created": "Tue, 14 Feb 2023 19:50:02 GMT", "version": "v1" } ]
2023-02-16
[ [ "Gu", "Hongyan", "" ], [ "Yang", "Chunxu", "" ], [ "Haeri", "Mohammad", "" ], [ "Wang", "Jing", "" ], [ "Tang", "Shirley", "" ], [ "Yan", "Wenzhong", "" ], [ "He", "Shujin", "" ], [ "Williams", "Christopher Kazu", "" ], [ "Magaki", "Shino", "" ], [ "Chen", "Xiang 'Anthony'", "" ] ]
Artificial Intelligence (AI) brings advancements to support pathologists in navigating high-resolution tumor images to search for pathology patterns of interest. However, existing AI-assisted tools have not realized this promised potential due to a lack of insight into pathology and HCI considerations for pathologists' navigation workflows in practice. We first conducted a formative study with six medical professionals in pathology to capture their navigation strategies. By incorporating our observations along with the pathologists' domain knowledge, we designed NaviPath -- a human-AI collaborative navigation system. An evaluation study with 15 medical professionals in pathology indicated that: (i) compared to the manual navigation, participants saw more than twice the number of pathological patterns in unit time with NaviPath, and (ii) participants achieved higher precision and recall against the AI and the manual navigation on average. Further qualitative analysis revealed that navigation was more consistent with NaviPath, which can improve the overall examination quality.
2102.06427
Hung P. Hoang
Bernd G\"artner, Sebastian Haslebacher, Hung P. Hoang
A Subexponential Algorithm for ARRIVAL
13 pages, 1 figure. Added a reference
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
The ARRIVAL problem is to decide the fate of a train moving along the edges of a directed graph, according to a simple (deterministic) pseudorandom walk. The problem is in $NP \cap coNP$ but not known to be in $P$. The currently best algorithms have runtime $2^{\Theta(n)}$ where $n$ is the number of vertices. This is not much better than just performing the pseudorandom walk. We develop a subexponential algorithm with runtime $2^{O(\sqrt{n}\log n)}$. We also give a polynomial-time algorithm if the graph is almost acyclic. Both results are derived from a new general approach to solve ARRIVAL instances.
[ { "created": "Fri, 12 Feb 2021 10:14:23 GMT", "version": "v1" }, { "created": "Tue, 23 Feb 2021 10:41:48 GMT", "version": "v2" }, { "created": "Fri, 9 Apr 2021 14:11:24 GMT", "version": "v3" } ]
2021-04-12
[ [ "Gärtner", "Bernd", "" ], [ "Haslebacher", "Sebastian", "" ], [ "Hoang", "Hung P.", "" ] ]
The ARRIVAL problem is to decide the fate of a train moving along the edges of a directed graph, according to a simple (deterministic) pseudorandom walk. The problem is in $NP \cap coNP$ but not known to be in $P$. The currently best algorithms have runtime $2^{\Theta(n)}$ where $n$ is the number of vertices. This is not much better than just performing the pseudorandom walk. We develop a subexponential algorithm with runtime $2^{O(\sqrt{n}\log n)}$. We also give a polynomial-time algorithm if the graph is almost acyclic. Both results are derived from a new general approach to solve ARRIVAL instances.
2303.10635
Shiwei Cheng
Song Zhao, Shiwei Cheng, Chenshuang Zhu
3D Gaze Vis: Sharing Eye Tracking Data Visualization for Collaborative Work in VR Environment
null
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Conducting collaborative tasks, e.g., a multi-user game, in virtual reality (VR) could enable a more immersive and effective experience. However, in current VR systems, users cannot communicate properly with each other via their gaze points, and this interferes with users' mutual understanding of intention. In this study, we aimed to find the optimal eye tracking data visualization, which minimized cognitive interference and improved the understanding of visual attention and intention between users. We designed three different eye tracking data visualizations: gaze cursor, gaze spotlight and gaze trajectory, in a VR scene for a course on the human heart, and found that the gaze cursor from doctors could help students learn complex 3D heart models more effectively. To explore further, pairs of students were asked to finish a quiz in the VR environment while sharing gaze cursors with each other, and achieved higher efficiency and scores. This indicated that sharing eye tracking data visualization could improve the quality and efficiency of collaborative work in the VR environment.
[ { "created": "Sun, 19 Mar 2023 12:00:53 GMT", "version": "v1" } ]
2023-03-21
[ [ "Zhao", "Song", "" ], [ "Cheng", "Shiwei", "" ], [ "Zhu", "Chenshuang", "" ] ]
Conducting collaborative tasks, e.g., a multi-user game, in virtual reality (VR) could enable a more immersive and effective experience. However, in current VR systems, users cannot communicate properly with each other via their gaze points, and this interferes with users' mutual understanding of intention. In this study, we aimed to find the optimal eye tracking data visualization, which minimized cognitive interference and improved the understanding of visual attention and intention between users. We designed three different eye tracking data visualizations: gaze cursor, gaze spotlight and gaze trajectory, in a VR scene for a course on the human heart, and found that the gaze cursor from doctors could help students learn complex 3D heart models more effectively. To explore further, pairs of students were asked to finish a quiz in the VR environment while sharing gaze cursors with each other, and achieved higher efficiency and scores. This indicated that sharing eye tracking data visualization could improve the quality and efficiency of collaborative work in the VR environment.
2305.02986
Joshua Kavner
Hadi Hosseini, Joshua Kavner, Tomasz W\k{a}s, Lirong Xia
Distribution of Chores with Information Asymmetry
21 pages, 1 figure
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fair distribution of indivisible tasks with non-positive valuations (aka chores) has given rise to a large body of work in recent years. A popular approximate fairness notion is envy-freeness up to one item (EF1), which requires that any pairwise envy can be eliminated by the removal of a single item. While an EF1 and Pareto optimal (PO) allocation of goods always exists and can be computed via several well-known algorithms, even the existence of such solutions for chores remains open, to date. We take an epistemic approach utilizing information asymmetry by introducing dubious chores -- items that inflict no cost on receiving agents, but are perceived costly by others. On a technical level, dubious chores provide a more fine-grained approximation of envy-freeness -- compared to relaxations such as EF1 -- which enables progress towards addressing open problems on the existence and computation of EF1 and PO. In particular, we show that finding allocations with an optimal number of dubious chores is computationally hard even for highly restricted classes of valuations. Nonetheless, we prove the existence of envy-free and PO allocations for $n$ agents with only $2n-2$ dubious chores and strengthen it to $n-1$ dubious chores in four special classes of valuations. Our experimental analysis demonstrates that baseline algorithms require only a relatively small number of dubious chores to achieve envy-freeness in practice.
[ { "created": "Thu, 4 May 2023 16:51:46 GMT", "version": "v1" }, { "created": "Fri, 5 May 2023 20:20:44 GMT", "version": "v2" } ]
2023-05-09
[ [ "Hosseini", "Hadi", "" ], [ "Kavner", "Joshua", "" ], [ "Wąs", "Tomasz", "" ], [ "Xia", "Lirong", "" ] ]
Fair distribution of indivisible tasks with non-positive valuations (aka chores) has given rise to a large body of work in recent years. A popular approximate fairness notion is envy-freeness up to one item (EF1), which requires that any pairwise envy can be eliminated by the removal of a single item. While an EF1 and Pareto optimal (PO) allocation of goods always exists and can be computed via several well-known algorithms, even the existence of such solutions for chores remains open, to date. We take an epistemic approach utilizing information asymmetry by introducing dubious chores -- items that inflict no cost on receiving agents, but are perceived costly by others. On a technical level, dubious chores provide a more fine-grained approximation of envy-freeness -- compared to relaxations such as EF1 -- which enables progress towards addressing open problems on the existence and computation of EF1 and PO. In particular, we show that finding allocations with an optimal number of dubious chores is computationally hard even for highly restricted classes of valuations. Nonetheless, we prove the existence of envy-free and PO allocations for $n$ agents with only $2n-2$ dubious chores and strengthen it to $n-1$ dubious chores in four special classes of valuations. Our experimental analysis demonstrates that baseline algorithms require only a relatively small number of dubious chores to achieve envy-freeness in practice.
2302.13840
Kaijie He
Kaijie He, Canlong Zhang, Sheng Xie, Zhixin Li, Zhiwen Wang
Target-Aware Tracking with Long-term Context Attention
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most deep trackers still follow the guidance of the siamese paradigm and use a template that contains only the target without any contextual information, which makes it difficult for the tracker to cope with large appearance changes, rapid target movement, and attraction from similar objects. To alleviate the above problem, we propose a long-term context attention (LCA) module that can perform extensive information fusion on the target and its context from long-term frames, and calculate the target correlation while enhancing target features. The complete contextual information contains the location of the target as well as the state around the target. LCA uses the target state from the previous frame to exclude the interference of similar objects and complex backgrounds, thus accurately locating the target and enabling the tracker to obtain higher robustness and regression accuracy. By embedding the LCA module in Transformer, we build a powerful online tracker with a target-aware backbone, termed TATrack. In addition, we propose a dynamic online update algorithm based on the classification confidence of historical information without additional computational burden. Our tracker achieves state-of-the-art performance on multiple benchmarks, with 71.1\% AUC, 89.3\% NP, and 73.0\% AO on LaSOT, TrackingNet, and GOT-10k. The code and trained models are available on https://github.com/hekaijie123/TATrack.
[ { "created": "Mon, 27 Feb 2023 14:40:58 GMT", "version": "v1" } ]
2023-02-28
[ [ "He", "Kaijie", "" ], [ "Zhang", "Canlong", "" ], [ "Xie", "Sheng", "" ], [ "Li", "Zhixin", "" ], [ "Wang", "Zhiwen", "" ] ]
Most deep trackers still follow the guidance of the siamese paradigm and use a template that contains only the target without any contextual information, which makes it difficult for the tracker to cope with large appearance changes, rapid target movement, and attraction from similar objects. To alleviate the above problem, we propose a long-term context attention (LCA) module that can perform extensive information fusion on the target and its context from long-term frames, and calculate the target correlation while enhancing target features. The complete contextual information contains the location of the target as well as the state around the target. LCA uses the target state from the previous frame to exclude the interference of similar objects and complex backgrounds, thus accurately locating the target and enabling the tracker to obtain higher robustness and regression accuracy. By embedding the LCA module in Transformer, we build a powerful online tracker with a target-aware backbone, termed TATrack. In addition, we propose a dynamic online update algorithm based on the classification confidence of historical information without additional computational burden. Our tracker achieves state-of-the-art performance on multiple benchmarks, with 71.1\% AUC, 89.3\% NP, and 73.0\% AO on LaSOT, TrackingNet, and GOT-10k. The code and trained models are available on https://github.com/hekaijie123/TATrack.
2309.01850
Jamiu Idowu
Jamiu Idowu and Ahmed Almasoud
Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
As AI models are increasingly deployed in critical applications, ensuring consistent model performance when exposed to unusual situations, such as out-of-distribution (OOD) or perturbed data, is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALL-E to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus, due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations (filters, rotations, etc.) to new epistemic images from DALL-E or real-world captures. ResNet-50 was chosen for this experiment as it was the best-performing model. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them post-perturbation, indicating a significant vulnerability. These misclassifications, which are clear to human observers, highlight AI models' limitations. Using saliency maps, we identified regions of the images that the model considered important for its decisions.
[ { "created": "Mon, 4 Sep 2023 22:46:59 GMT", "version": "v1" } ]
2023-09-06
[ [ "Idowu", "Jamiu", "" ], [ "Almasoud", "Ahmed", "" ] ]
As AI models are increasingly deployed in critical applications, ensuring consistent model performance when exposed to unusual situations, such as out-of-distribution (OOD) or perturbed data, is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALL-E to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus, due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations (filters, rotations, etc.) to new epistemic images from DALL-E or real-world captures. ResNet-50 was chosen for this experiment as it was the best-performing model. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them post-perturbation, indicating a significant vulnerability. These misclassifications, which are clear to human observers, highlight AI models' limitations. Using saliency maps, we identified regions of the images that the model considered important for its decisions.
2010.01003
Awni Hannun
Awni Hannun, Vineel Pratap, Jacob Kahn, Wei-Ning Hsu
Differentiable Weighted Finite-State Transducers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a framework for automatic differentiation with weighted finite-state transducers (WFSTs) allowing them to be used dynamically at training time. Through the separation of graphs from operations on graphs, this framework enables the exploration of new structured loss functions which in turn eases the encoding of prior knowledge into learning algorithms. We show how the framework can combine pruning and back-off in transition models with various sequence-level loss functions. We also show how to learn over the latent decomposition of phrases into word pieces. Finally, to demonstrate that WFSTs can be used in the interior of a deep neural network, we propose a convolutional WFST layer which maps lower-level representations to higher-level representations and can be used as a drop-in replacement for a traditional convolution. We validate these algorithms with experiments in handwriting recognition and speech recognition.
[ { "created": "Fri, 2 Oct 2020 13:52:24 GMT", "version": "v1" } ]
2020-10-05
[ [ "Hannun", "Awni", "" ], [ "Pratap", "Vineel", "" ], [ "Kahn", "Jacob", "" ], [ "Hsu", "Wei-Ning", "" ] ]
We introduce a framework for automatic differentiation with weighted finite-state transducers (WFSTs) allowing them to be used dynamically at training time. Through the separation of graphs from operations on graphs, this framework enables the exploration of new structured loss functions which in turn eases the encoding of prior knowledge into learning algorithms. We show how the framework can combine pruning and back-off in transition models with various sequence-level loss functions. We also show how to learn over the latent decomposition of phrases into word pieces. Finally, to demonstrate that WFSTs can be used in the interior of a deep neural network, we propose a convolutional WFST layer which maps lower-level representations to higher-level representations and can be used as a drop-in replacement for a traditional convolution. We validate these algorithms with experiments in handwriting recognition and speech recognition.
2309.16163
Juhyeon Kim
Juhyeon Kim, Wojciech Jarosz, Ioannis Gkioulekas, Adithya Pediredla
Doppler Time-of-Flight Rendering
18 pages, 28 Figures, SIGGRAPH Asia 2023
null
10.1145/3618335
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.
[ { "created": "Thu, 28 Sep 2023 04:30:51 GMT", "version": "v1" }, { "created": "Fri, 29 Sep 2023 02:59:28 GMT", "version": "v2" }, { "created": "Thu, 5 Oct 2023 16:13:34 GMT", "version": "v3" } ]
2023-10-06
[ [ "Kim", "Juhyeon", "" ], [ "Jarosz", "Wojciech", "" ], [ "Gkioulekas", "Ioannis", "" ], [ "Pediredla", "Adithya", "" ] ]
We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.
0912.4117
Stefan Göller
Stefan Göller, Markus Lohrey
Branching-time model checking of one-counter processes
null
null
null
STACS 2010
cs.LO cs.CC
http://creativecommons.org/licenses/by/3.0/
One-counter processes (OCPs) are pushdown processes which operate only on a unary stack alphabet. We study the computational complexity of model checking computation tree logic (CTL) over OCPs. A PSPACE upper bound is inherited from the modal mu-calculus for this problem. First, we analyze the periodic behaviour of CTL over OCPs and derive a model checking algorithm whose running time is exponential only in the number of control locations and a syntactic notion of the formula that we call leftward until depth. Thus, model checking fixed OCPs against CTL formulas with a fixed leftward until depth is in P. This generalizes a result of the first author, Mayr, and To for the expression complexity of CTL's fragment EF. Second, we prove that already over some fixed OCP, CTL model checking is PSPACE-hard. Third, we show that there already exists a fixed CTL formula for which model checking of OCPs is PSPACE-hard. For the latter, we employ two results from complexity theory: (i) Converting a natural number in Chinese remainder presentation into binary presentation is in logspace-uniform NC^1 and (ii) PSPACE is AC^0-serializable. We demonstrate that our approach can be used to answer further open questions.
[ { "created": "Mon, 21 Dec 2009 09:45:23 GMT", "version": "v1" }, { "created": "Wed, 3 Feb 2010 11:38:53 GMT", "version": "v2" } ]
2010-02-03
[ [ "Göller", "Stefan", "" ], [ "Lohrey", "Markus", "" ] ]
One-counter processes (OCPs) are pushdown processes which operate only on a unary stack alphabet. We study the computational complexity of model checking computation tree logic (CTL) over OCPs. A PSPACE upper bound is inherited from the modal mu-calculus for this problem. First, we analyze the periodic behaviour of CTL over OCPs and derive a model checking algorithm whose running time is exponential only in the number of control locations and a syntactic notion of the formula that we call leftward until depth. Thus, model checking fixed OCPs against CTL formulas with a fixed leftward until depth is in P. This generalizes a result of the first author, Mayr, and To for the expression complexity of CTL's fragment EF. Second, we prove that already over some fixed OCP, CTL model checking is PSPACE-hard. Third, we show that there already exists a fixed CTL formula for which model checking of OCPs is PSPACE-hard. For the latter, we employ two results from complexity theory: (i) Converting a natural number in Chinese remainder presentation into binary presentation is in logspace-uniform NC^1 and (ii) PSPACE is AC^0-serializable. We demonstrate that our approach can be used to answer further open questions.
2407.07995
Jaeyeul Kim
Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im
Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation
8 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the motion states of the surrounding environment is critical for safe autonomous driving. These motion states can be accurately derived from scene flow, which captures the three-dimensional motion field of points. Existing LiDAR scene flow methods extract spatial features from each point cloud and then fuse them channel-wise, resulting in the implicit extraction of spatio-temporal features. Furthermore, they utilize 2D Bird's Eye View and process only two frames, missing crucial spatial information along the Z-axis and the broader temporal context, leading to suboptimal performance. To address these limitations, we propose Flow4D, which temporally fuses multiple point clouds after the 3D intra-voxel feature encoder, enabling more explicit extraction of spatio-temporal features through a 4D voxel network. However, while using 4D convolution improves performance, it significantly increases the computational load. For further efficiency, we introduce the Spatio-Temporal Decomposition Block (STDB), which combines 3D and 1D convolutions instead of using heavy 4D convolution. In addition, Flow4D further improves performance by using five frames to take advantage of richer temporal information. As a result, the proposed method achieves a 45.9% higher performance compared to the state-of-the-art while running in real-time, and won 1st place in the 2024 Argoverse 2 Scene Flow Challenge. The code is available at https://github.com/dgist-cvlab/Flow4D.
[ { "created": "Wed, 10 Jul 2024 18:55:43 GMT", "version": "v1" } ]
2024-07-12
[ [ "Kim", "Jaeyeul", "" ], [ "Woo", "Jungwan", "" ], [ "Shin", "Ukcheol", "" ], [ "Oh", "Jean", "" ], [ "Im", "Sunghoon", "" ] ]
Understanding the motion states of the surrounding environment is critical for safe autonomous driving. These motion states can be accurately derived from scene flow, which captures the three-dimensional motion field of points. Existing LiDAR scene flow methods extract spatial features from each point cloud and then fuse them channel-wise, resulting in the implicit extraction of spatio-temporal features. Furthermore, they utilize 2D Bird's Eye View and process only two frames, missing crucial spatial information along the Z-axis and the broader temporal context, leading to suboptimal performance. To address these limitations, we propose Flow4D, which temporally fuses multiple point clouds after the 3D intra-voxel feature encoder, enabling more explicit extraction of spatio-temporal features through a 4D voxel network. However, while using 4D convolution improves performance, it significantly increases the computational load. For further efficiency, we introduce the Spatio-Temporal Decomposition Block (STDB), which combines 3D and 1D convolutions instead of using heavy 4D convolution. In addition, Flow4D further improves performance by using five frames to take advantage of richer temporal information. As a result, the proposed method achieves a 45.9% higher performance compared to the state-of-the-art while running in real-time, and won 1st place in the 2024 Argoverse 2 Scene Flow Challenge. The code is available at https://github.com/dgist-cvlab/Flow4D.
2403.19517
Guangyu Wang
Guangyu Wang, Jinzhi Zhang, Fan Wang, Ruqi Huang, Lu Fang
XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold
Accepted to CVPR 2024. Project page: xscalenvs.github.io/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose XScale-NVS for high-fidelity cross-scale novel view synthesis of real-world large-scale scenes. Existing representations based on explicit surface suffer from discretization resolution or UV distortion, while implicit volumetric representations lack scalability for large scenes due to the dispersed weight distribution and surface ambiguity. In light of the above challenges, we introduce hash featurized manifold, a novel hash-based featurization coupled with a deferred neural rendering framework. This approach fully unlocks the expressivity of the representation by explicitly concentrating the hash entries on the 2D manifold, thus effectively representing highly detailed contents independent of the discretization resolution. We also introduce a novel dataset, namely GigaNVS, to benchmark cross-scale, high-resolution novel view synthesis of real-world large-scale scenes. Our method significantly outperforms competing baselines on various real-world scenes, yielding an average LPIPS that is 40% lower than prior state-of-the-art on the challenging GigaNVS benchmark. Please see our project page at: xscalenvs.github.io.
[ { "created": "Thu, 28 Mar 2024 15:48:16 GMT", "version": "v1" } ]
2024-03-29
[ [ "Wang", "Guangyu", "" ], [ "Zhang", "Jinzhi", "" ], [ "Wang", "Fan", "" ], [ "Huang", "Ruqi", "" ], [ "Fang", "Lu", "" ] ]
We propose XScale-NVS for high-fidelity cross-scale novel view synthesis of real-world large-scale scenes. Existing representations based on explicit surface suffer from discretization resolution or UV distortion, while implicit volumetric representations lack scalability for large scenes due to the dispersed weight distribution and surface ambiguity. In light of the above challenges, we introduce hash featurized manifold, a novel hash-based featurization coupled with a deferred neural rendering framework. This approach fully unlocks the expressivity of the representation by explicitly concentrating the hash entries on the 2D manifold, thus effectively representing highly detailed contents independent of the discretization resolution. We also introduce a novel dataset, namely GigaNVS, to benchmark cross-scale, high-resolution novel view synthesis of real-world large-scale scenes. Our method significantly outperforms competing baselines on various real-world scenes, yielding an average LPIPS that is 40% lower than prior state-of-the-art on the challenging GigaNVS benchmark. Please see our project page at: xscalenvs.github.io.
1811.09885
Linan Zhang
Linan Zhang and Hayden Schaeffer
Forward Stability of ResNet and Its Variants
35 pages, 8 figures, 5 tables
null
null
null
cs.CV math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The residual neural network (ResNet) is a popular deep network architecture which has the ability to obtain high-accuracy results on several image processing problems. In order to analyze the behavior and structure of ResNet, recent work has been on establishing connections between ResNets and continuous-time optimal control problems. In this work, we show that the post-activation ResNet is related to an optimal control problem with differential inclusions, and provide continuous-time stability results for the differential inclusion associated with ResNet. Motivated by the stability conditions, we show that alterations of either the architecture or the optimization problem can generate variants of ResNet which improve the theoretical stability bounds. In addition, we establish stability bounds for the full (discrete) network associated with two variants of ResNet, in particular, bounds on the growth of the features and a measure of the sensitivity of the features with respect to perturbations. These results also help to show the relationship between the depth, regularization, and stability of the feature space. Computational experiments on the proposed variants show that the accuracy of ResNet is preserved and that the accuracy seems to be monotone with respect to the depth and various corruptions.
[ { "created": "Sat, 24 Nov 2018 19:43:22 GMT", "version": "v1" } ]
2018-11-27
[ [ "Zhang", "Linan", "" ], [ "Schaeffer", "Hayden", "" ] ]
The residual neural network (ResNet) is a popular deep network architecture which has the ability to obtain high-accuracy results on several image processing problems. In order to analyze the behavior and structure of ResNet, recent work has been on establishing connections between ResNets and continuous-time optimal control problems. In this work, we show that the post-activation ResNet is related to an optimal control problem with differential inclusions, and provide continuous-time stability results for the differential inclusion associated with ResNet. Motivated by the stability conditions, we show that alterations of either the architecture or the optimization problem can generate variants of ResNet which improve the theoretical stability bounds. In addition, we establish stability bounds for the full (discrete) network associated with two variants of ResNet, in particular, bounds on the growth of the features and a measure of the sensitivity of the features with respect to perturbations. These results also help to show the relationship between the depth, regularization, and stability of the feature space. Computational experiments on the proposed variants show that the accuracy of ResNet is preserved and that the accuracy seems to be monotone with respect to the depth and various corruptions.
1811.10400
Eric Rothstein-Morris
Eric Rothstein-Morris and Sun Jun
Quantifying Attacker Capability Via Model Checking Multiple Properties (Extended Version)
null
null
null
null
cs.LO cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work aims to solve a practical problem, i.e., how to quantify the risk brought upon a system by different attackers. The answer is useful for optimising resource allocation for system defence. Given a set of safety requirements, we quantify the attacker capability in terms of the set of safety requirements an attacker can compromise. Given a system (in the presence of an attacker), model checking it against each safety requirement one by one is expensive and wasteful since the same state space is explored many times. We thus propose model checking multiple properties efficiently by means of coalgebraic model checking using enhanced coinduction techniques. We apply the proposed technique to a real-world water treatment system and the results show that our approach can effectively reduce the effort required for model checking.
[ { "created": "Fri, 16 Nov 2018 15:19:09 GMT", "version": "v1" } ]
2018-11-27
[ [ "Rothstein-Morris", "Eric", "" ], [ "Jun", "Sun", "" ] ]
This work aims to solve a practical problem, i.e., how to quantify the risk brought upon a system by different attackers. The answer is useful for optimising resource allocation for system defence. Given a set of safety requirements, we quantify the attacker capability in terms of the set of safety requirements an attacker can compromise. Given a system (in the presence of an attacker), model checking it against each safety requirement one by one is expensive and wasteful since the same state space is explored many times. We thus propose model checking multiple properties efficiently by means of coalgebraic model checking using enhanced coinduction techniques. We apply the proposed technique to a real-world water treatment system and the results show that our approach can effectively reduce the effort required for model checking.
2407.07086
Logan Cross
Logan Cross, Violet Xiang, Agam Bhatia, Daniel LK Yamins, Nick Haber
Hypothetical Minds: Scaffolding Theory of Mind for Multi-Agent Tasks with Large Language Models
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Multi-agent reinforcement learning (MARL) methods struggle with the non-stationarity of multi-agent systems and fail to adaptively learn online when tested with novel agents. Here, we leverage large language models (LLMs) to create an autonomous agent that can handle these challenges. Our agent, Hypothetical Minds, consists of a cognitively-inspired architecture, featuring modular components for perception, memory, and hierarchical planning over two levels of abstraction. We introduce the Theory of Mind module that scaffolds the high-level planning process by generating hypotheses about other agents' strategies in natural language. It then evaluates and iteratively refines these hypotheses by reinforcing hypotheses that make correct predictions about the other agents' behavior. Hypothetical Minds significantly improves performance over previous LLM-agent and RL baselines on a range of competitive, mixed motive, and collaborative domains in the Melting Pot benchmark, including both dyadic and population-based environments. Additionally, comparisons against LLM-agent baselines and ablations reveal the importance of hypothesis evaluation and refinement for succeeding on complex scenarios.
[ { "created": "Tue, 9 Jul 2024 17:57:15 GMT", "version": "v1" } ]
2024-07-10
[ [ "Cross", "Logan", "" ], [ "Xiang", "Violet", "" ], [ "Bhatia", "Agam", "" ], [ "Yamins", "Daniel LK", "" ], [ "Haber", "Nick", "" ] ]
Multi-agent reinforcement learning (MARL) methods struggle with the non-stationarity of multi-agent systems and fail to adaptively learn online when tested with novel agents. Here, we leverage large language models (LLMs) to create an autonomous agent that can handle these challenges. Our agent, Hypothetical Minds, consists of a cognitively-inspired architecture, featuring modular components for perception, memory, and hierarchical planning over two levels of abstraction. We introduce the Theory of Mind module that scaffolds the high-level planning process by generating hypotheses about other agents' strategies in natural language. It then evaluates and iteratively refines these hypotheses by reinforcing hypotheses that make correct predictions about the other agents' behavior. Hypothetical Minds significantly improves performance over previous LLM-agent and RL baselines on a range of competitive, mixed motive, and collaborative domains in the Melting Pot benchmark, including both dyadic and population-based environments. Additionally, comparisons against LLM-agent baselines and ablations reveal the importance of hypothesis evaluation and refinement for succeeding on complex scenarios.