Dataset schema (column: type, length range):

id: string, length 9-10
submitter: string, length 1-64
authors: string, length 4-20.7k
title: string, length 4-246
comments: string, length 1-523
journal-ref: string, length 4-404
doi: string, length 11-153
report-no: string, length 2-254
categories: string, length 5-98
license: string, 9 distinct values
orig_abstract: string, length 14-3.35k
versions: list, length 1-60
update_date: string, length 10-10
authors_parsed: list, length 1-1.35k
abstract: string, length 11-3.34k
1412.8339
Charith Perera
Charith Perera, Rajiv Ranjan, Lizhe Wang, Samee U. Khan, and Albert Y. Zomaya
Big Data Privacy in the Internet of Things Era
Accepted to be published in IEEE IT Professional Magazine: Special Issue Internet of Anything 2015
null
10.1109/MITP.2015.34
null
cs.CY cs.DB cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last few years, we have seen a plethora of Internet of Things (IoT) solutions, products and services, making their way into the industry's market-place. All such solution will capture a large amount of data pertaining to the environment, as well as their users. The objective of the IoT is to learn more and to serve better the system users. Some of these solutions may store the data locally on the devices ('things'), and others may store in the Cloud. The real value of collecting data comes through data processing and aggregation in large-scale where new knowledge can be extracted. However, such procedures can also lead to user privacy issues. This article discusses some of the main challenges of privacy in IoT, and opportunities for research and innovation. We also introduce some of the ongoing research efforts that address IoT privacy issues.
[ { "created": "Mon, 29 Dec 2014 13:45:13 GMT", "version": "v1" }, { "created": "Mon, 8 Jun 2015 09:18:53 GMT", "version": "v2" } ]
2016-11-17
[ [ "Perera", "Charith", "" ], [ "Ranjan", "Rajiv", "" ], [ "Wang", "Lizhe", "" ], [ "Khan", "Samee U.", "" ], [ "Zomaya", "Albert Y.", "" ] ]
Over the last few years, we have seen a plethora of Internet of Things (IoT) solutions, products and services making their way into the industry's marketplace. All such solutions will capture a large amount of data pertaining to the environment as well as their users. The objective of the IoT is to learn more about users and to serve them better. Some of these solutions may store the data locally on the devices ('things'), while others may store it in the Cloud. The real value of collecting data comes through large-scale data processing and aggregation, where new knowledge can be extracted. However, such procedures can also lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT, as well as opportunities for research and innovation. We also introduce some of the ongoing research efforts that address IoT privacy issues.
2301.00252
Keke Chen
Sagar Sharma and Yuechun Gu and Keke Chen
A Comparative Study of Image Disguising Methods for Confidential Outsourced Learning
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel \emph{image disguising} mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel flipping technique \cite{huang20}. We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide, under the Level-1 adversarial knowledge with well-preserved model quality. In contrast, AES provides a security guarantee under the Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help us to protect models from model-targeted attacks. We have done an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
[ { "created": "Sat, 31 Dec 2022 16:59:54 GMT", "version": "v1" } ]
2023-01-03
[ [ "Sharma", "Sagar", "" ], [ "Gu", "Yuechun", "" ], [ "Chen", "Keke", "" ] ]
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel \emph{image disguising} mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel-flipping technique \cite{huang20}. We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide under Level-1 adversarial knowledge, with well-preserved model quality. In contrast, AES provides a security guarantee under Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help protect models from model-targeted attacks. We have conducted an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
1803.06015
Sara Cohen
Sara Cohen and Aviv Zohar
Database Perspectives on Blockchains
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the underlying infrastructure to manage data. The key difference between using blockchains to store data and centrally controlled databases is that transactions are accepted to a blockchain via a consensus mechanism. Hence, once a user has issued a transaction, she cannot be certain if it will be accepted. Moreover, a yet unaccepted transaction cannot be retracted by the user, and may be appended to the blockchain in the future. This causes difficulties as the user may wish to reissue a transaction, if it was not accepted. Yet this data may then become appended twice to the blockchain. In this paper we present a database perspective on blockchains by introducing formal foundations for blockchains as a storage layer that underlies a database. The main issue that we tackle is uncertainty in transaction appending that is a result of the consensus mechanism. We study two flavors of transaction appending problems: (1) the complexity of determining whether it is possible for a denial constraint to be contradicted, given the state of the blockchain, pending transactions, and integrity constraints and (2) the complexity of generating transactions that are mutually (in)consistent with given subsets of pending transactions. Solving these problems is critical to ensure that users can issue transactions consistent with their intentions. Finally, we chart important directions for future work.
[ { "created": "Thu, 15 Mar 2018 21:58:39 GMT", "version": "v1" } ]
2018-03-19
[ [ "Cohen", "Sara", "" ], [ "Zohar", "Aviv", "" ] ]
Modern blockchain systems are a fresh look at the paradigm of distributed computing, applied under assumptions of large-scale public networks. They can be used to store and share information without a trusted central party. There has been much effort to develop blockchain systems for a myriad of uses, ranging from cryptocurrencies to identity control, supply chain management, etc. None of this work has directly studied the fundamental database issues that arise when using blockchains as the underlying infrastructure to manage data. The key difference between using blockchains to store data and centrally controlled databases is that transactions are accepted to a blockchain via a consensus mechanism. Hence, once a user has issued a transaction, she cannot be certain if it will be accepted. Moreover, a yet unaccepted transaction cannot be retracted by the user, and may be appended to the blockchain in the future. This causes difficulties as the user may wish to reissue a transaction, if it was not accepted. Yet this data may then become appended twice to the blockchain. In this paper we present a database perspective on blockchains by introducing formal foundations for blockchains as a storage layer that underlies a database. The main issue that we tackle is uncertainty in transaction appending that is a result of the consensus mechanism. We study two flavors of transaction appending problems: (1) the complexity of determining whether it is possible for a denial constraint to be contradicted, given the state of the blockchain, pending transactions, and integrity constraints and (2) the complexity of generating transactions that are mutually (in)consistent with given subsets of pending transactions. Solving these problems is critical to ensure that users can issue transactions consistent with their intentions. Finally, we chart important directions for future work.
2403.10778
Zheng Shuchen
Shibiao Xu, ShuChen Zheng, Wenhao Xu, Rongtao Xu, Changwei Wang, Jiguang Zhang, Xiaoqiang Teng, Ao Li, Li Guo
HCF-Net: Hierarchical Context Fusion Network for Infrared Small Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrared small object detection is an important computer vision task involving the recognition and localization of tiny objects in infrared images, which usually contain only a few pixels. However, it encounters difficulties due to the diminutive size of the objects and the generally complex backgrounds in infrared images. In this paper, we propose a deep learning method, HCF-Net, that significantly improves infrared small object detection performance through multiple practical modules. Specifically, it includes the parallelized patch-aware attention (PPA) module, dimension-aware selective integration (DASI) module, and multi-dilated channel refiner (MDCR) module. The PPA module uses a multi-branch feature extraction strategy to capture feature information at different scales and levels. The DASI module enables adaptive channel selection and fusion. The MDCR module captures spatial features of different receptive field ranges through multiple depth-separable convolutional layers. Extensive experimental results on the SIRST infrared single-frame image dataset show that the proposed HCF-Net performs well, surpassing other traditional and deep learning models. Code is available at https://github.com/zhengshuchen/HCFNet.
[ { "created": "Sat, 16 Mar 2024 02:45:42 GMT", "version": "v1" } ]
2024-03-19
[ [ "Xu", "Shibiao", "" ], [ "Zheng", "ShuChen", "" ], [ "Xu", "Wenhao", "" ], [ "Xu", "Rongtao", "" ], [ "Wang", "Changwei", "" ], [ "Zhang", "Jiguang", "" ], [ "Teng", "Xiaoqiang", "" ], [ "Li", "Ao", "" ], [ "Guo", "Li", "" ] ]
Infrared small object detection is an important computer vision task involving the recognition and localization of tiny objects in infrared images, which usually contain only a few pixels. However, it encounters difficulties due to the diminutive size of the objects and the generally complex backgrounds in infrared images. In this paper, we propose a deep learning method, HCF-Net, that significantly improves infrared small object detection performance through multiple practical modules. Specifically, it includes the parallelized patch-aware attention (PPA) module, dimension-aware selective integration (DASI) module, and multi-dilated channel refiner (MDCR) module. The PPA module uses a multi-branch feature extraction strategy to capture feature information at different scales and levels. The DASI module enables adaptive channel selection and fusion. The MDCR module captures spatial features over different receptive-field ranges through multiple depthwise separable convolutional layers. Extensive experimental results on the SIRST infrared single-frame image dataset show that the proposed HCF-Net performs well, surpassing other traditional and deep learning models. Code is available at https://github.com/zhengshuchen/HCFNet.
2001.03844
Jinlan Fu
Jinlan Fu, Pengfei Liu, Qi Zhang, Xuanjing Huang
Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study
Accepted by AAAI 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations? In this paper, we take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives and characterize the differences of their generalization abilities through the lens of our proposed measures, which guides us to better design models and training methods. Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models in terms of breakdown performance analysis, annotation errors, dataset bias, and category relationships, which suggest directions for improvement. We have released the datasets: (ReCoNLL, PLONER) for the future research at our project page: http://pfliu.com/InterpretNER/. As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers and classifies them into different research topics: https://github.com/pfliu-nlp/Named-Entity-Recognition-NER-Papers.
[ { "created": "Sun, 12 Jan 2020 04:33:53 GMT", "version": "v1" } ]
2020-01-14
[ [ "Fu", "Jinlan", "" ], [ "Liu", "Pengfei", "" ], [ "Zhang", "Qi", "" ], [ "Huang", "Xuanjing", "" ] ]
While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: does this excellent performance imply a perfect generalization model, or are there still some limitations? In this paper, we take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives, and we characterize the differences in their generalization abilities through the lens of our proposed measures, which guides us to better design models and training methods. Experiments with in-depth analyses diagnose the bottlenecks of existing neural NER models in terms of breakdown performance analysis, annotation errors, dataset bias, and category relationships, which suggest directions for improvement. We have released the datasets (ReCoNLL, PLONER) for future research at our project page: http://pfliu.com/InterpretNER/. As a by-product of this paper, we have open-sourced a project that provides a comprehensive summary of recent NER papers and classifies them into different research topics: https://github.com/pfliu-nlp/Named-Entity-Recognition-NER-Papers.
1711.07657
Peng Yin
Peng Yin, Yuqing He, Na Liu and Jianda Han
Condition directed Multi-domain Adversarial Learning for Loop Closure Detection
7 pages, 11 figures, 3 tables, submitted to ICRA 2018
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Loop closure detection (LCD) is the key module in appearance based simultaneously localization and mapping (SLAM). However, in the real life, the appearance of visual inputs are usually affected by the illumination changes and texture changes under different weather conditions. Traditional methods in LCD usually rely on handcraft features, however, such methods are unable to capture the common descriptions under different weather conditions, such as rainy, foggy and sunny. Furthermore, traditional handcraft features could not capture the highly level understanding for the local scenes. In this paper, we proposed a novel condition directed multi-domain adversarial learning method, where we use the weather condition as the direction for feature inference. Based on the generative adversarial networks (GANs) and a classification networks, the proposed method could extract the high-level weather-invariant features directly from the raw data. The only labels required here are the weather condition of each visual input. Experiments are conducted in the GTAV game simulator, which could generated lifelike outdoor scenes under different weather conditions. The performance of LCD results shows that our method outperforms the state-of-arts significantly.
[ { "created": "Tue, 21 Nov 2017 07:28:36 GMT", "version": "v1" } ]
2017-11-22
[ [ "Yin", "Peng", "" ], [ "He", "Yuqing", "" ], [ "Liu", "Na", "" ], [ "Han", "Jianda", "" ] ]
Loop closure detection (LCD) is the key module in appearance-based simultaneous localization and mapping (SLAM). However, in real life, the appearance of visual inputs is usually affected by illumination changes and texture changes under different weather conditions. Traditional LCD methods usually rely on handcrafted features; however, such methods are unable to capture common descriptions across different weather conditions, such as rainy, foggy and sunny, and they cannot capture a high-level understanding of local scenes. In this paper, we propose a novel condition-directed multi-domain adversarial learning method, in which the weather condition directs feature inference. Based on generative adversarial networks (GANs) and a classification network, the proposed method can extract high-level weather-invariant features directly from the raw data. The only labels required are the weather condition of each visual input. Experiments are conducted in the GTAV game simulator, which can generate lifelike outdoor scenes under different weather conditions. The LCD results show that our method significantly outperforms the state of the art.
2301.01420
Yingqiang Qiu
Yingqiang Qiu, Wanli Peng, Xiaodan Lin, Huanqiang Zeng, Zhenxing Qian
Improved CNN Prediction Based Reversible Data Hiding
null
null
null
null
cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter proposes an improved CNN predictor (ICNNP) for reversible data hiding (RDH) in images, which consists of a feature extraction module, a pixel prediction module, and a complexity prediction module. Due to predicting the complexity of each pixel with the ICNNP during the embedding process, the proposed method can achieve superior performance than the CNN predictor-based method. Specifically, an input image does be first split into two different sub-images, i.e., the "Dot" image and the "Cross" image. Meanwhile, each sub-image is applied to predict another one. Then, the prediction errors of pixels are sorted with the predicted pixel complexities. In light of this, some sorted prediction errors with less complexity are selected to be efficiently used for low-distortion data embedding with a traditional histogram shift scheme. Experimental results demonstrate that the proposed method can achieve better embedding performance than that of the CNN predictor with the same histogram shifting strategy.
[ { "created": "Wed, 4 Jan 2023 03:15:21 GMT", "version": "v1" } ]
2023-01-05
[ [ "Qiu", "Yingqiang", "" ], [ "Peng", "Wanli", "" ], [ "Lin", "Xiaodan", "" ], [ "Zeng", "Huanqiang", "" ], [ "Qian", "Zhenxing", "" ] ]
This letter proposes an improved CNN predictor (ICNNP) for reversible data hiding (RDH) in images, which consists of a feature extraction module, a pixel prediction module, and a complexity prediction module. By predicting the complexity of each pixel with the ICNNP during the embedding process, the proposed method achieves superior performance to the CNN predictor-based method. Specifically, an input image is first split into two different sub-images, i.e., a "Dot" image and a "Cross" image, and each sub-image is used to predict the other. Then, the prediction errors of pixels are sorted according to the predicted pixel complexities. Some sorted prediction errors with lower complexity are then selected for low-distortion data embedding with a traditional histogram-shifting scheme. Experimental results demonstrate that the proposed method achieves better embedding performance than the CNN predictor with the same histogram-shifting strategy.
1211.6496
Naushad UzZaman
Naushad UzZaman, Roi Blanco, Michael Matthews
TwitterPaul: Extracting and Aggregating Twitter Predictions
Check out the blog post with a summary and Prediction Retrieval information here: http://bitly.com/TwitterPaul
null
null
null
cs.SI cs.AI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces TwitterPaul, a system designed to make use of Social Media data to help to predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions to football games from a large sample of tweets that occurred during the World Cup, and we classified into different types with a precision of up to 88%. The different mentions were aggregated in order to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques in order to exploit this information to make more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and found that the quality of extractions is more important than the quantity, suggesting that high precision methods working on a medium-sized dataset are preferable over low precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system performance is close to the one of the betting line. Furthermore, we believe that this domain independent framework can help to predict other sports, elections, product release dates and other future events that people talk about in social media.
[ { "created": "Wed, 28 Nov 2012 01:33:21 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2012 16:55:53 GMT", "version": "v2" } ]
2015-03-13
[ [ "UzZaman", "Naushad", "" ], [ "Blanco", "Roi", "" ], [ "Matthews", "Michael", "" ] ]
This paper introduces TwitterPaul, a system designed to use social media data to help predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions of football games from a large sample of tweets posted during the World Cup and classified them into different types with a precision of up to 88%. The different mentions were aggregated to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques to exploit this information for more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and find that the quality of extractions matters more than the quantity, suggesting that high-precision methods working on a medium-sized dataset are preferable to low-precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system's performance approaches that of the betting line. Furthermore, we believe that this domain-independent framework can help predict other sports, elections, product release dates, and other future events that people talk about in social media.
1805.06741
Xin Wei
Xin Wei, Hui Wang, Bryan Scotney and Huan Wan
Minimum Margin Loss for Deep Face Recognition
null
null
10.1016/j.patcog.2019.107012
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face recognition has achieved great progress owing to the fast development of the deep neural network in the past a few years. As an important part of deep neural networks, a number of the loss functions have been proposed which significantly improve the state-of-the-art methods. In this paper, we proposed a new loss function called Minimum Margin Loss (MML) which aims at enlarging the margin of those overclose class centre pairs so as to enhance the discriminative ability of the deep features. MML supervises the training process together with the Softmax Loss and the Centre Loss, and also makes up the defect of Softmax + Centre Loss. The experimental results on MegaFace, LFW and YTF datasets show that the proposed method achieves the state-of-the-art performance, which demonstrates the effectiveness of the proposed MML.
[ { "created": "Thu, 17 May 2018 13:02:23 GMT", "version": "v1" }, { "created": "Wed, 23 May 2018 09:46:43 GMT", "version": "v2" }, { "created": "Mon, 2 Jul 2018 09:28:28 GMT", "version": "v3" }, { "created": "Mon, 20 Aug 2018 15:27:51 GMT", "version": "v4" } ]
2020-02-05
[ [ "Wei", "Xin", "" ], [ "Wang", "Hui", "" ], [ "Scotney", "Bryan", "" ], [ "Wan", "Huan", "" ] ]
Face recognition has achieved great progress owing to the fast development of deep neural networks in the past few years. As an important part of deep neural networks, a number of loss functions have been proposed that significantly improve the state-of-the-art methods. In this paper, we propose a new loss function called Minimum Margin Loss (MML), which aims at enlarging the margin of overly close class-centre pairs so as to enhance the discriminative ability of the deep features. MML supervises the training process together with the Softmax Loss and the Centre Loss, and also makes up for a defect of Softmax + Centre Loss. The experimental results on the MegaFace, LFW and YTF datasets show that the proposed method achieves state-of-the-art performance, which demonstrates the effectiveness of the proposed MML.
1706.06813
Jindan Xu
Jindan Xu, Wei Xu, Fengkui Gong
On Performance of Quantized Transceiver in Multiuser Massive MIMO Downlinks
4 pages, 4 figures, submitted to IEEE Wireless Communications Letters
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low-resolution digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) are considered to reduce cost and power consumption in multiuser massive multiple-input multiple-output (MIMO). Using the Bussgang theorem, we derive the asymptotic downlink achievable rate w.r.t the resolutions of both DACs and ADCs, i.e., $b_{DA}$ and $b_{AD}$, under the assumption of large antenna number, $N$, and fixed user load ratio, $\beta$. We characterize the rate loss caused by finite-bit-resolution converters and reveal that the quantization distortion is ignorable at low signal-to-noise ratio (SNR) even with low-resolution converters at both sides. While for maintaining the same rate loss at high SNR, it is discovered that one-more-bit DAC resolution is needed when more users are scheduled with $\beta$ increased by four times. More specifically for one-bit rate loss requirement, $b_{DA}$ can be set by $\left\lceil b_{AD}+\frac{1}{2}\log\beta \right\rceil$ given $b_{AD}$. Similar observations on ADCs are also obtained with numerical verifications.
[ { "created": "Wed, 21 Jun 2017 10:04:35 GMT", "version": "v1" } ]
2017-06-22
[ [ "Xu", "Jindan", "" ], [ "Xu", "Wei", "" ], [ "Gong", "Fengkui", "" ] ]
Low-resolution digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) are considered to reduce cost and power consumption in multiuser massive multiple-input multiple-output (MIMO) systems. Using the Bussgang theorem, we derive the asymptotic downlink achievable rate with respect to the resolutions of both DACs and ADCs, i.e., $b_{DA}$ and $b_{AD}$, under the assumption of a large antenna number, $N$, and a fixed user load ratio, $\beta$. We characterize the rate loss caused by finite-resolution converters and reveal that the quantization distortion is negligible at low signal-to-noise ratio (SNR), even with low-resolution converters at both sides. To maintain the same rate loss at high SNR, we find that one more bit of DAC resolution is needed when more users are scheduled, i.e., when $\beta$ increases fourfold. More specifically, for a one-bit rate loss requirement, $b_{DA}$ can be set to $\left\lceil b_{AD}+\frac{1}{2}\log\beta \right\rceil$ given $b_{AD}$. Similar observations on ADCs are obtained with numerical verification.
1305.5827
Monica Shekhar
Monica Shekhar and Saravanaguru RA. K
Semantic Web Search based on Ontology Modeling using Protege Reasoner
null
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Semantic Web works on the existing Web which presents the meaning of information as well-defined vocabularies understood by the people. Semantic Search, at the same time, works on improving the accuracy if a search by understanding the intent of the search and providing contextually relevant results. This paper describes a semantic approach toward web search through a PHP application. The goal was to parse through a user's browsing history and return semantically relevant web pages for the search query provided.
[ { "created": "Fri, 24 May 2013 19:02:59 GMT", "version": "v1" } ]
2013-05-27
[ [ "Shekhar", "Monica", "" ], [ "K", "Saravanaguru RA.", "" ] ]
The Semantic Web builds on the existing Web by presenting the meaning of information through well-defined vocabularies understood by people. Semantic search, in turn, improves the accuracy of a search by understanding the intent behind it and providing contextually relevant results. This paper describes a semantic approach to web search through a PHP application. The goal was to parse a user's browsing history and return semantically relevant web pages for the search query provided.
2402.15089
Yifei Li
Yifei Li, Xiang Yue, Zeyi Liao, Huan Sun
AttributionBench: How Hard is Automatic Attribution Evaluation?
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Modern generative search engines enhance the reliability of large language model (LLM) responses by providing cited evidence. However, evaluating the answer's attribution, i.e., whether every claim within the generated responses is fully supported by its cited evidence, remains an open problem. This verification, traditionally dependent on costly human evaluation, underscores the urgent need for automatic attribution evaluation methods. To bridge the gap in the absence of standardized benchmarks for these methods, we present AttributionBench, a comprehensive benchmark compiled from various existing attribution datasets. Our extensive experiments on AttributionBench reveal the challenges of automatic attribution evaluation, even for state-of-the-art LLMs. Specifically, our findings show that even a fine-tuned GPT-3.5 only achieves around 80% macro-F1 under a binary classification formulation. A detailed analysis of more than 300 error cases indicates that a majority of failures stem from the model's inability to process nuanced information, and the discrepancy between the information the model has access to and that human annotators do.
[ { "created": "Fri, 23 Feb 2024 04:23:33 GMT", "version": "v1" } ]
2024-02-26
[ [ "Li", "Yifei", "" ], [ "Yue", "Xiang", "" ], [ "Liao", "Zeyi", "" ], [ "Sun", "Huan", "" ] ]
Modern generative search engines enhance the reliability of large language model (LLM) responses by providing cited evidence. However, evaluating the answer's attribution, i.e., whether every claim within the generated responses is fully supported by its cited evidence, remains an open problem. This verification, traditionally dependent on costly human evaluation, underscores the urgent need for automatic attribution evaluation methods. To bridge the gap in the absence of standardized benchmarks for these methods, we present AttributionBench, a comprehensive benchmark compiled from various existing attribution datasets. Our extensive experiments on AttributionBench reveal the challenges of automatic attribution evaluation, even for state-of-the-art LLMs. Specifically, our findings show that even a fine-tuned GPT-3.5 only achieves around 80% macro-F1 under a binary classification formulation. A detailed analysis of more than 300 error cases indicates that a majority of failures stem from the model's inability to process nuanced information, and the discrepancy between the information the model has access to and that which human annotators have.
2106.02775
Justin Yang
Justin Yang and Judith E. Fan
Visual communication of object concepts at different levels of abstraction
To appear in Proceedings of the 43rd Annual Meeting of the Cognitive Science Society. 7 pages, 5 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
People can produce drawings of specific entities (e.g., Garfield), as well as general categories (e.g., "cat"). What explains this ability to produce such varied drawings of even highly familiar object concepts? We hypothesized that drawing objects at different levels of abstraction depends on both sensory information and representational goals, such that drawings intended to portray a recently seen object preserve more detail than those intended to represent a category. Participants drew objects cued either with a photo or a category label. For each cue type, half the participants aimed to draw a specific exemplar; the other half aimed to draw the category. We found that label-cued category drawings were the most recognizable at the basic level, whereas photo-cued exemplar drawings were the least recognizable. Together, these findings highlight the importance of task context for explaining how people use drawings to communicate visual concepts in different ways.
[ { "created": "Sat, 5 Jun 2021 02:13:31 GMT", "version": "v1" } ]
2021-06-08
[ [ "Yang", "Justin", "" ], [ "Fan", "Judith E.", "" ] ]
People can produce drawings of specific entities (e.g., Garfield), as well as general categories (e.g., "cat"). What explains this ability to produce such varied drawings of even highly familiar object concepts? We hypothesized that drawing objects at different levels of abstraction depends on both sensory information and representational goals, such that drawings intended to portray a recently seen object preserve more detail than those intended to represent a category. Participants drew objects cued either with a photo or a category label. For each cue type, half the participants aimed to draw a specific exemplar; the other half aimed to draw the category. We found that label-cued category drawings were the most recognizable at the basic level, whereas photo-cued exemplar drawings were the least recognizable. Together, these findings highlight the importance of task context for explaining how people use drawings to communicate visual concepts in different ways.
1207.2243
Marina Yashina
Alexei Yu. Uteshev and Marina V. Yashina
Metric Problems for Quadrics in Multidimensional Space
21 pages, 1 figure
null
null
null
cs.SC math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the equations of the first and the second order surfaces in multidimensional space, our goal is to construct a univariate polynomial one of the zeros of which coincides with the square of the distance between these surfaces. To achieve this goal we employ Elimination Theory methods. The proposed approach is also extended for the case of parameter dependent surfaces.
[ { "created": "Tue, 10 Jul 2012 07:03:36 GMT", "version": "v1" } ]
2012-07-11
[ [ "Uteshev", "Alexei Yu.", "" ], [ "Yashina", "Marina V.", "" ] ]
Given the equations of the first and the second order surfaces in multidimensional space, our goal is to construct a univariate polynomial one of the zeros of which coincides with the square of the distance between these surfaces. To achieve this goal we employ Elimination Theory methods. The proposed approach is also extended for the case of parameter dependent surfaces.
2405.15005
Julia Di
Tony G. Chen, Julia Di, Stephanie Newdick, Mathieu Lapotre, Marco Pavone, Mark R. Cutkosky
ReachBot Field Tests in a Mojave Desert Lava Tube as a Martian Analog
Accepted to the IEEE ICRA Workshop on Field Robotics 2024; 4 pages
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
ReachBot is a robot concept for the planetary exploration of caves and lava tubes, which are often inaccessible with traditional robot locomotion methods. It uses extendable booms as appendages, with grippers mounted at the end, to grasp irregular rock surfaces and traverse these difficult terrains. We have built a partial ReachBot prototype consisting of a single boom and gripper, mounted on a tripod. We present the details on the design and field test of this partial ReachBot prototype in a lava tube in the Mojave Desert. The technical requirements of the field testing, implementation details, and grasp performance results are discussed. The planning and preparation of the field test and lessons learned are also given.
[ { "created": "Thu, 23 May 2024 19:22:59 GMT", "version": "v1" } ]
2024-05-27
[ [ "Chen", "Tony G.", "" ], [ "Di", "Julia", "" ], [ "Newdick", "Stephanie", "" ], [ "Lapotre", "Mathieu", "" ], [ "Pavone", "Marco", "" ], [ "Cutkosky", "Mark R.", "" ] ]
ReachBot is a robot concept for the planetary exploration of caves and lava tubes, which are often inaccessible with traditional robot locomotion methods. It uses extendable booms as appendages, with grippers mounted at the end, to grasp irregular rock surfaces and traverse these difficult terrains. We have built a partial ReachBot prototype consisting of a single boom and gripper, mounted on a tripod. We present the details on the design and field test of this partial ReachBot prototype in a lava tube in the Mojave Desert. The technical requirements of the field testing, implementation details, and grasp performance results are discussed. The planning and preparation of the field test and lessons learned are also given.
2309.14735
Shubham Kumar Nigam
Shubham Kumar Nigam, Shubham Kumar Mishra, Ayush Kumar Mishra, Noel Shallum and Arnab Bhattacharya
Legal Question-Answering in the Indian Context: Efficacy, Challenges, and Potential of Modern AI Models
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legal QA platforms bear the promise to metamorphose the manner in which legal experts engage with jurisprudential documents. In this exposition, we embark on a comparative exploration of contemporary AI frameworks, gauging their adeptness in catering to the unique demands of the Indian legal milieu, with a keen emphasis on Indian Legal Question Answering (AILQA). Our discourse zeroes in on an array of retrieval and QA mechanisms, positioning the OpenAI GPT model as a reference point. The findings underscore the proficiency of prevailing AILQA paradigms in decoding natural language prompts and churning out precise responses. The ambit of this study is tethered to the Indian criminal legal landscape, distinguished by its intricate nature and associated logistical constraints. To ensure a holistic evaluation, we juxtapose empirical metrics with insights garnered from seasoned legal practitioners, thereby painting a comprehensive picture of AI's potential and challenges within the realm of Indian legal QA.
[ { "created": "Tue, 26 Sep 2023 07:56:55 GMT", "version": "v1" }, { "created": "Mon, 16 Oct 2023 04:40:18 GMT", "version": "v2" } ]
2023-10-17
[ [ "Nigam", "Shubham Kumar", "" ], [ "Mishra", "Shubham Kumar", "" ], [ "Mishra", "Ayush Kumar", "" ], [ "Shallum", "Noel", "" ], [ "Bhattacharya", "Arnab", "" ] ]
Legal QA platforms bear the promise to metamorphose the manner in which legal experts engage with jurisprudential documents. In this exposition, we embark on a comparative exploration of contemporary AI frameworks, gauging their adeptness in catering to the unique demands of the Indian legal milieu, with a keen emphasis on Indian Legal Question Answering (AILQA). Our discourse zeroes in on an array of retrieval and QA mechanisms, positioning the OpenAI GPT model as a reference point. The findings underscore the proficiency of prevailing AILQA paradigms in decoding natural language prompts and churning out precise responses. The ambit of this study is tethered to the Indian criminal legal landscape, distinguished by its intricate nature and associated logistical constraints. To ensure a holistic evaluation, we juxtapose empirical metrics with insights garnered from seasoned legal practitioners, thereby painting a comprehensive picture of AI's potential and challenges within the realm of Indian legal QA.
2211.13515
Yueqing Sun
Yueqing Sun, Yu Zhang, Le Qi, Qi Shi
TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering
Findings of EMNLP2022
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised commonsense question answering requires mining effective commonsense knowledge without relying on labeled task data. Previous methods typically retrieved from traditional knowledge bases or used pre-trained language models (PrLMs) to generate fixed types of knowledge, which have poor generalization ability. In this paper, we aim to address the above limitation by leveraging the implicit knowledge stored in PrLMs and propose a two-stage prompt-based unsupervised commonsense question answering framework (TSGP). Specifically, we first use knowledge generation prompts to generate the knowledge required for questions with unlimited types, and then further utilize answer generation prompts to generate possible candidate answers independent of specified choices. Experimental results and analysis on three different commonsense reasoning tasks, CommonsenseQA, OpenBookQA, and SocialIQA, demonstrate that TSGP significantly improves the reasoning ability of language models in unsupervised settings. Our code is available at: https://github.com/Yueqing-Sun/TSGP.
[ { "created": "Thu, 24 Nov 2022 10:19:24 GMT", "version": "v1" } ]
2022-11-28
[ [ "Sun", "Yueqing", "" ], [ "Zhang", "Yu", "" ], [ "Qi", "Le", "" ], [ "Shi", "Qi", "" ] ]
Unsupervised commonsense question answering requires mining effective commonsense knowledge without relying on labeled task data. Previous methods typically retrieved from traditional knowledge bases or used pre-trained language models (PrLMs) to generate fixed types of knowledge, which have poor generalization ability. In this paper, we aim to address the above limitation by leveraging the implicit knowledge stored in PrLMs and propose a two-stage prompt-based unsupervised commonsense question answering framework (TSGP). Specifically, we first use knowledge generation prompts to generate the knowledge required for questions with unlimited types, and then further utilize answer generation prompts to generate possible candidate answers independent of specified choices. Experimental results and analysis on three different commonsense reasoning tasks, CommonsenseQA, OpenBookQA, and SocialIQA, demonstrate that TSGP significantly improves the reasoning ability of language models in unsupervised settings. Our code is available at: https://github.com/Yueqing-Sun/TSGP.
2204.04914
Han Wu
Han Wu, Haochen Tan, Kun Xu, Shuqi Liu, Lianwei Wu and Linqi Song
Zero-shot Cross-lingual Conversational Semantic Role Labeling
NAACL 2022 findings
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
While conversational semantic role labeling (CSRL) has shown its usefulness on Chinese conversational tasks, it is still under-explored in non-Chinese languages due to the lack of multilingual CSRL annotations for the parser training. To avoid expensive data collection and error-propagation of translation-based methods, we present a simple but effective approach to perform zero-shot cross-lingual CSRL. Our model implicitly learns language-agnostic, conversational structure-aware and semantically rich representations with the hierarchical encoders and elaborately designed pre-training objectives. Experimental results show that our model outperforms all baselines by large margins on two newly collected English CSRL test sets. More importantly, we confirm the usefulness of CSRL to non-Chinese conversational tasks such as the question-in-context rewriting task in English and the multi-turn dialogue response generation tasks in English, German and Japanese by incorporating the CSRL information into the downstream conversation-based models. We believe this finding is significant and will facilitate the research of non-Chinese dialogue tasks which suffer the problems of ellipsis and anaphora.
[ { "created": "Mon, 11 Apr 2022 07:29:39 GMT", "version": "v1" } ]
2022-04-12
[ [ "Wu", "Han", "" ], [ "Tan", "Haochen", "" ], [ "Xu", "Kun", "" ], [ "Liu", "Shuqi", "" ], [ "Wu", "Lianwei", "" ], [ "Song", "Linqi", "" ] ]
While conversational semantic role labeling (CSRL) has shown its usefulness on Chinese conversational tasks, it is still under-explored in non-Chinese languages due to the lack of multilingual CSRL annotations for the parser training. To avoid expensive data collection and error-propagation of translation-based methods, we present a simple but effective approach to perform zero-shot cross-lingual CSRL. Our model implicitly learns language-agnostic, conversational structure-aware and semantically rich representations with the hierarchical encoders and elaborately designed pre-training objectives. Experimental results show that our model outperforms all baselines by large margins on two newly collected English CSRL test sets. More importantly, we confirm the usefulness of CSRL to non-Chinese conversational tasks such as the question-in-context rewriting task in English and the multi-turn dialogue response generation tasks in English, German and Japanese by incorporating the CSRL information into the downstream conversation-based models. We believe this finding is significant and will facilitate the research of non-Chinese dialogue tasks which suffer the problems of ellipsis and anaphora.
2402.00881
Nour Kouzayha
Adilya Bakambekova, Nour Kouzayha and Tareq Al-Naffouri
On the Interplay of Artificial Intelligence and Space-Air-Ground Integrated Networks: A Survey
null
null
null
null
cs.NI cs.AI
http://creativecommons.org/licenses/by/4.0/
Space-Air-Ground Integrated Networks (SAGINs), which incorporate space and aerial networks with terrestrial wireless systems, are vital enablers of the emerging sixth-generation (6G) wireless networks. Besides bringing significant benefits to various applications and services, SAGINs are envisioned to extend high-speed broadband coverage to remote areas, such as small towns or mining sites, or areas where terrestrial infrastructure cannot reach, such as airplanes or maritime use cases. However, due to the limited power and storage resources, as well as other constraints introduced by the design of terrestrial networks, SAGINs must be intelligently configured and controlled to satisfy the envisioned requirements. Meanwhile, Artificial Intelligence (AI) is another critical enabler of 6G. Due to massive amounts of available data, AI has been leveraged to address pressing challenges of current and future wireless networks. By adding AI and facilitating the decision-making and prediction procedures, SAGINs can effectively adapt to their surrounding environment, thus enhancing the performance of various metrics. In this work, we aim to investigate the interplay of AI and SAGINs by providing a holistic overview of state-of-the-art research in AI-enabled SAGINs. Specifically, we present a comprehensive overview of some potential applications of AI in SAGINs. We also cover open issues in employing AI and detail the contributions of SAGINs in the development of AI. Finally, we highlight some limitations of the existing research works and outline potential future research directions.
[ { "created": "Sat, 20 Jan 2024 16:10:31 GMT", "version": "v1" } ]
2024-02-05
[ [ "Bakambekova", "Adilya", "" ], [ "Kouzayha", "Nour", "" ], [ "Al-Naffouri", "Tareq", "" ] ]
Space-Air-Ground Integrated Networks (SAGINs), which incorporate space and aerial networks with terrestrial wireless systems, are vital enablers of the emerging sixth-generation (6G) wireless networks. Besides bringing significant benefits to various applications and services, SAGINs are envisioned to extend high-speed broadband coverage to remote areas, such as small towns or mining sites, or areas where terrestrial infrastructure cannot reach, such as airplanes or maritime use cases. However, due to the limited power and storage resources, as well as other constraints introduced by the design of terrestrial networks, SAGINs must be intelligently configured and controlled to satisfy the envisioned requirements. Meanwhile, Artificial Intelligence (AI) is another critical enabler of 6G. Due to massive amounts of available data, AI has been leveraged to address pressing challenges of current and future wireless networks. By adding AI and facilitating the decision-making and prediction procedures, SAGINs can effectively adapt to their surrounding environment, thus enhancing the performance of various metrics. In this work, we aim to investigate the interplay of AI and SAGINs by providing a holistic overview of state-of-the-art research in AI-enabled SAGINs. Specifically, we present a comprehensive overview of some potential applications of AI in SAGINs. We also cover open issues in employing AI and detail the contributions of SAGINs in the development of AI. Finally, we highlight some limitations of the existing research works and outline potential future research directions.
2312.01531
Benran Hu
Yichen Liu, Benran Hu, Chi-Keung Tang, Yu-Wing Tai
SANeRF-HQ: Segment Anything for NeRF in High Quality
Accepted to CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recently, the Segment Anything Model (SAM) has showcased remarkable capabilities of zero-shot segmentation, while NeRF (Neural Radiance Fields) has gained popularity as a method for various 3D problems beyond novel view synthesis. Though there exist initial attempts to incorporate these two methods into 3D segmentation, they face the challenge of accurately and consistently segmenting objects in complex scenarios. In this paper, we introduce the Segment Anything for NeRF in High Quality (SANeRF-HQ) to achieve high-quality 3D segmentation of any target object in a given scene. SANeRF-HQ utilizes SAM for open-world object segmentation guided by user-supplied prompts, while leveraging NeRF to aggregate information from different viewpoints. To overcome the aforementioned challenges, we employ density field and RGB similarity to enhance the accuracy of segmentation boundaries during the aggregation. Emphasizing segmentation accuracy, we evaluate our method on multiple NeRF datasets where high-quality ground-truths are available or manually annotated. SANeRF-HQ shows a significant quality improvement over state-of-the-art methods in NeRF object segmentation, provides higher flexibility for object localization, and enables more consistent object segmentation across multiple views. Results and code are available at the project site: https://lyclyc52.github.io/SANeRF-HQ/.
[ { "created": "Sun, 3 Dec 2023 23:09:38 GMT", "version": "v1" }, { "created": "Sat, 6 Apr 2024 07:04:29 GMT", "version": "v2" } ]
2024-04-09
[ [ "Liu", "Yichen", "" ], [ "Hu", "Benran", "" ], [ "Tang", "Chi-Keung", "" ], [ "Tai", "Yu-Wing", "" ] ]
Recently, the Segment Anything Model (SAM) has showcased remarkable capabilities of zero-shot segmentation, while NeRF (Neural Radiance Fields) has gained popularity as a method for various 3D problems beyond novel view synthesis. Though there exist initial attempts to incorporate these two methods into 3D segmentation, they face the challenge of accurately and consistently segmenting objects in complex scenarios. In this paper, we introduce the Segment Anything for NeRF in High Quality (SANeRF-HQ) to achieve high-quality 3D segmentation of any target object in a given scene. SANeRF-HQ utilizes SAM for open-world object segmentation guided by user-supplied prompts, while leveraging NeRF to aggregate information from different viewpoints. To overcome the aforementioned challenges, we employ density field and RGB similarity to enhance the accuracy of segmentation boundaries during the aggregation. Emphasizing segmentation accuracy, we evaluate our method on multiple NeRF datasets where high-quality ground-truths are available or manually annotated. SANeRF-HQ shows a significant quality improvement over state-of-the-art methods in NeRF object segmentation, provides higher flexibility for object localization, and enables more consistent object segmentation across multiple views. Results and code are available at the project site: https://lyclyc52.github.io/SANeRF-HQ/.
2103.03912
Albert Dulian
Albert Dulian, John C. Murray
Multi-modal anticipation of stochastic trajectories in a dynamic environment with Conditional Variational Autoencoders
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Forecasting short-term motion of nearby vehicles presents an inherently challenging issue as the space of their possible future movements is not strictly limited to a set of single trajectories. Recently proposed techniques that demonstrate plausible results concentrate primarily on forecasting a fixed number of deterministic predictions, or on classifying over a wide variety of trajectories that were previously generated using e.g. a dynamic model. This paper focuses on addressing the uncertainty associated with the discussed task by utilising the stochastic nature of generative models in order to produce a diverse set of plausible paths with regard to tracked vehicles. More specifically, we propose to account for the multi-modality of the problem with the use of a Conditional Variational Autoencoder (C-VAE) conditioned on an agent's past motion as well as a rasterised scene context encoded with a Capsule Network (CapsNet). In addition, we demonstrate advantages of employing the Minimum over N (MoN) cost function which measures the distance between ground truth and N generated samples and tries to minimise the loss with respect to the closest sample, effectively leading to more diverse predictions. We examine our network on a publicly available dataset against recent state-of-the-art methods and show that our approach outperforms these techniques in numerous scenarios whilst significantly reducing the number of trainable parameters as well as allowing to sample an arbitrary amount of diverse trajectories.
[ { "created": "Fri, 5 Mar 2021 19:38:26 GMT", "version": "v1" } ]
2021-03-09
[ [ "Dulian", "Albert", "" ], [ "Murray", "John C.", "" ] ]
Forecasting short-term motion of nearby vehicles presents an inherently challenging issue as the space of their possible future movements is not strictly limited to a set of single trajectories. Recently proposed techniques that demonstrate plausible results concentrate primarily on forecasting a fixed number of deterministic predictions, or on classifying over a wide variety of trajectories that were previously generated using e.g. a dynamic model. This paper focuses on addressing the uncertainty associated with the discussed task by utilising the stochastic nature of generative models in order to produce a diverse set of plausible paths with regard to tracked vehicles. More specifically, we propose to account for the multi-modality of the problem with the use of a Conditional Variational Autoencoder (C-VAE) conditioned on an agent's past motion as well as a rasterised scene context encoded with a Capsule Network (CapsNet). In addition, we demonstrate advantages of employing the Minimum over N (MoN) cost function which measures the distance between ground truth and N generated samples and tries to minimise the loss with respect to the closest sample, effectively leading to more diverse predictions. We examine our network on a publicly available dataset against recent state-of-the-art methods and show that our approach outperforms these techniques in numerous scenarios whilst significantly reducing the number of trainable parameters as well as allowing to sample an arbitrary amount of diverse trajectories.
2211.07716
David Biesner
David Biesner, Maren Pielka, Rajkumar Ramamurthy, Tim Dilmaghani, Bernd Kliem, R\"udiger Loitz, Rafet Sifa
Zero-Shot Text Matching for Automated Auditing using Sentence Transformers
To be published in proceedings of IEEE International Conference on Machine Learning Applications IEEE ICMLA 2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Natural language processing methods have several applications in automated auditing, including document or passage classification, information retrieval, and question answering. However, training such models requires a large amount of annotated data which is scarce in industrial settings. At the same time, techniques like zero-shot and unsupervised learning allow for application of models pre-trained using general domain data to unseen domains. In this work, we study the efficiency of unsupervised text matching using Sentence-Bert, a transformer-based model, by applying it to the semantic similarity of financial passages. Experimental results show that this model is robust to documents from in- and out-of-domain data.
[ { "created": "Fri, 28 Oct 2022 11:52:16 GMT", "version": "v1" } ]
2022-11-16
[ [ "Biesner", "David", "" ], [ "Pielka", "Maren", "" ], [ "Ramamurthy", "Rajkumar", "" ], [ "Dilmaghani", "Tim", "" ], [ "Kliem", "Bernd", "" ], [ "Loitz", "Rüdiger", "" ], [ "Sifa", "Rafet", "" ] ]
Natural language processing methods have several applications in automated auditing, including document or passage classification, information retrieval, and question answering. However, training such models requires a large amount of annotated data which is scarce in industrial settings. At the same time, techniques like zero-shot and unsupervised learning allow for application of models pre-trained using general domain data to unseen domains. In this work, we study the efficiency of unsupervised text matching using Sentence-Bert, a transformer-based model, by applying it to the semantic similarity of financial passages. Experimental results show that this model is robust to documents from in- and out-of-domain data.
2208.02541
Chenjie Cao
Chenjie Cao, Xinlin Ren, Yanwei Fu
MVSFormer: Multi-View Stereo by Learning Robust Image Features and Temperature-based Depth
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Feature representation learning is the key recipe for learning-based Multi-View Stereo (MVS). As the common feature extractor of learning-based MVS, vanilla Feature Pyramid Networks (FPNs) suffer from discouraged feature representations for reflection and texture-less areas, which limits the generalization of MVS. Even FPNs worked with pre-trained Convolutional Neural Networks (CNNs) fail to tackle these issues. On the other hand, Vision Transformers (ViTs) have achieved prominent success in many 2D vision tasks. Thus we ask whether ViTs can facilitate feature learning in MVS? In this paper, we propose a pre-trained ViT enhanced MVS network called MVSFormer, which can learn more reliable feature representations benefiting from informative priors from ViT. The finetuned MVSFormer with hierarchical ViTs of efficient attention mechanisms can achieve prominent improvement based on FPNs. Besides, the alternative MVSFormer with frozen ViT weights is further proposed. This largely alleviates the training cost with competitive performance strengthened by the attention map from the self-distillation pre-training. MVSFormer can be generalized to various input resolutions with efficient multi-scale training strengthened by gradient accumulation. Moreover, we discuss the merits and drawbacks of classification and regression-based MVS methods, and further propose to unify them with a temperature-based strategy. MVSFormer achieves state-of-the-art performance on the DTU dataset. Particularly, MVSFormer ranks as Top-1 on both intermediate and advanced sets of the highly competitive Tanks-and-Temples leaderboard.
[ { "created": "Thu, 4 Aug 2022 09:17:30 GMT", "version": "v1" }, { "created": "Mon, 8 Aug 2022 16:49:16 GMT", "version": "v2" }, { "created": "Fri, 16 Dec 2022 13:42:24 GMT", "version": "v3" } ]
2022-12-19
[ [ "Cao", "Chenjie", "" ], [ "Ren", "Xinlin", "" ], [ "Fu", "Yanwei", "" ] ]
Feature representation learning is the key recipe for learning-based Multi-View Stereo (MVS). As the common feature extractor of learning-based MVS, vanilla Feature Pyramid Networks (FPNs) suffer from discouraged feature representations for reflection and texture-less areas, which limits the generalization of MVS. Even FPNs worked with pre-trained Convolutional Neural Networks (CNNs) fail to tackle these issues. On the other hand, Vision Transformers (ViTs) have achieved prominent success in many 2D vision tasks. Thus we ask whether ViTs can facilitate feature learning in MVS? In this paper, we propose a pre-trained ViT enhanced MVS network called MVSFormer, which can learn more reliable feature representations benefiting from informative priors from ViT. The finetuned MVSFormer with hierarchical ViTs of efficient attention mechanisms can achieve prominent improvement based on FPNs. Besides, the alternative MVSFormer with frozen ViT weights is further proposed. This largely alleviates the training cost with competitive performance strengthened by the attention map from the self-distillation pre-training. MVSFormer can be generalized to various input resolutions with efficient multi-scale training strengthened by gradient accumulation. Moreover, we discuss the merits and drawbacks of classification and regression-based MVS methods, and further propose to unify them with a temperature-based strategy. MVSFormer achieves state-of-the-art performance on the DTU dataset. Particularly, MVSFormer ranks as Top-1 on both intermediate and advanced sets of the highly competitive Tanks-and-Temples leaderboard.
1707.08101
Andreas Eitel
Andreas Eitel, Nico Hauff and Wolfram Burgard
Learning to Singulate Objects using a Push Proposal Network
International Symposium on Robotics Research (ISRR) 2017, videos: http://robotpush.cs.uni-freiburg.de
null
null
null
cs.RO cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning to act in unstructured environments, such as cluttered piles of objects, poses a substantial challenge for manipulation robots. We present a novel neural network-based approach that separates unknown objects in clutter by selecting favourable push actions. Our network is trained from data collected through autonomous interaction of a PR2 robot with randomly organized tabletop scenes. The model is designed to propose meaningful push actions based on over-segmented RGB-D images. We evaluate our approach by singulating up to 8 unknown objects in clutter. We demonstrate that our method enables the robot to perform the task with a high success rate and a low number of required push actions. Our results based on real-world experiments show that our network is able to generalize to novel objects of various sizes and shapes, as well as to arbitrary object configurations. Videos of our experiments can be viewed at http://robotpush.cs.uni-freiburg.de
[ { "created": "Tue, 25 Jul 2017 17:36:36 GMT", "version": "v1" }, { "created": "Mon, 5 Feb 2018 18:42:35 GMT", "version": "v2" } ]
2018-02-06
[ [ "Eitel", "Andreas", "" ], [ "Hauff", "Nico", "" ], [ "Burgard", "Wolfram", "" ] ]
Learning to act in unstructured environments, such as cluttered piles of objects, poses a substantial challenge for manipulation robots. We present a novel neural network-based approach that separates unknown objects in clutter by selecting favourable push actions. Our network is trained from data collected through autonomous interaction of a PR2 robot with randomly organized tabletop scenes. The model is designed to propose meaningful push actions based on over-segmented RGB-D images. We evaluate our approach by singulating up to 8 unknown objects in clutter. We demonstrate that our method enables the robot to perform the task with a high success rate and a low number of required push actions. Our results based on real-world experiments show that our network is able to generalize to novel objects of various sizes and shapes, as well as to arbitrary object configurations. Videos of our experiments can be viewed at http://robotpush.cs.uni-freiburg.de
1203.4900
Michael Kapralov
Ashish Goel and Michael Kapralov and Ian Post
Single pass sparsification in the streaming model with edge deletions
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we give a construction of the cut sparsifiers of Benczur and Karger in the {\em dynamic} streaming setting in a single pass over the data stream. Previous constructions either required multiple passes or were unable to handle edge deletions. We use $\tilde{O}(1/\epsilon^2)$ time for each stream update and $\tilde{O}(n/\epsilon^2)$ time to construct a sparsifier. Our $\epsilon$-sparsifiers have $O(n\log^3 n/\epsilon^2)$ edges. The main tools behind our result are an application of the sketching techniques of Ahn et al. [SODA'12] to estimate edge connectivity, together with a novel application of sampling with limited independence and sparse recovery to produce the edges of the sparsifier.
[ { "created": "Thu, 22 Mar 2012 07:45:13 GMT", "version": "v1" } ]
2012-03-23
[ [ "Goel", "Ashish", "" ], [ "Kapralov", "Michael", "" ], [ "Post", "Ian", "" ] ]
In this paper we give a construction of the cut sparsifiers of Benczur and Karger in the {\em dynamic} streaming setting in a single pass over the data stream. Previous constructions either required multiple passes or were unable to handle edge deletions. We use $\tilde{O}(1/\epsilon^2)$ time for each stream update and $\tilde{O}(n/\epsilon^2)$ time to construct a sparsifier. Our $\epsilon$-sparsifiers have $O(n\log^3 n/\epsilon^2)$ edges. The main tools behind our result are an application of the sketching techniques of Ahn et al. [SODA'12] to estimate edge connectivity, together with a novel application of sampling with limited independence and sparse recovery to produce the edges of the sparsifier.
1608.06014
Yun Wang
Yun Wang and Peter J. Ramadge
The Symmetry of a Simple Optimization Problem in Lasso Screening
null
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, dictionary screening has been proposed as an effective way to improve the computational efficiency of solving the lasso problem, which is one of the most commonly used methods for learning sparse representations. To address today's ever-increasing large datasets, effective screening relies on a tight region bound on the solution to the dual lasso. Typical region bounds take the form of an intersection of a sphere and multiple half spaces. One way to tighten the region bound is to use more half spaces, which, however, adds to the overhead of solving the high-dimensional optimization problem in lasso screening. This paper reveals the interesting property that the optimization problem depends only on the projection of features onto the subspace spanned by the normals of the half spaces. This property converts a high-dimensional optimization problem to one of much lower dimension, and thus sheds light on reducing the computational overhead of lasso screening based on tighter region bounds.
[ { "created": "Sun, 21 Aug 2016 23:48:43 GMT", "version": "v1" }, { "created": "Thu, 25 Aug 2016 22:05:24 GMT", "version": "v2" } ]
2016-08-29
[ [ "Wang", "Yun", "" ], [ "Ramadge", "Peter J.", "" ] ]
Recently, dictionary screening has been proposed as an effective way to improve the computational efficiency of solving the lasso problem, which is one of the most commonly used methods for learning sparse representations. To address today's ever-increasing large datasets, effective screening relies on a tight region bound on the solution to the dual lasso. Typical region bounds take the form of an intersection of a sphere and multiple half spaces. One way to tighten the region bound is to use more half spaces, which, however, adds to the overhead of solving the high-dimensional optimization problem in lasso screening. This paper reveals the interesting property that the optimization problem depends only on the projection of features onto the subspace spanned by the normals of the half spaces. This property converts a high-dimensional optimization problem to one of much lower dimension, and thus sheds light on reducing the computational overhead of lasso screening based on tighter region bounds.
2402.01111
Dan Qiao
Dan Qiao, Yu-Xiang Wang
Near-Optimal Reinforcement Learning with Self-Play under Adaptivity Constraints
null
null
null
null
cs.LG cs.AI cs.MA stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of multi-agent reinforcement learning (MARL) with adaptivity constraints -- a new problem motivated by real-world applications where deployments of new policies are costly and the number of policy updates must be minimized. For two-player zero-sum Markov Games, we design a (policy) elimination based algorithm that achieves a regret of $\widetilde{O}(\sqrt{H^3 S^2 ABK})$, while the batch complexity is only $O(H+\log\log K)$. In the above, $S$ denotes the number of states, $A,B$ are the number of actions for the two players respectively, $H$ is the horizon and $K$ is the number of episodes. Furthermore, we prove a batch complexity lower bound $\Omega(\frac{H}{\log_{A}K}+\log\log K)$ for all algorithms with $\widetilde{O}(\sqrt{K})$ regret bound, which matches our upper bound up to logarithmic factors. As a byproduct, our techniques naturally extend to learning bandit games and reward-free MARL within near optimal batch complexity. To the best of our knowledge, these are the first line of results towards understanding MARL with low adaptivity.
[ { "created": "Fri, 2 Feb 2024 03:00:40 GMT", "version": "v1" } ]
2024-02-05
[ [ "Qiao", "Dan", "" ], [ "Wang", "Yu-Xiang", "" ] ]
We study the problem of multi-agent reinforcement learning (MARL) with adaptivity constraints -- a new problem motivated by real-world applications where deployments of new policies are costly and the number of policy updates must be minimized. For two-player zero-sum Markov Games, we design a (policy) elimination based algorithm that achieves a regret of $\widetilde{O}(\sqrt{H^3 S^2 ABK})$, while the batch complexity is only $O(H+\log\log K)$. In the above, $S$ denotes the number of states, $A,B$ are the number of actions for the two players respectively, $H$ is the horizon and $K$ is the number of episodes. Furthermore, we prove a batch complexity lower bound $\Omega(\frac{H}{\log_{A}K}+\log\log K)$ for all algorithms with $\widetilde{O}(\sqrt{K})$ regret bound, which matches our upper bound up to logarithmic factors. As a byproduct, our techniques naturally extend to learning bandit games and reward-free MARL within near optimal batch complexity. To the best of our knowledge, these are the first line of results towards understanding MARL with low adaptivity.
2012.11260
Mengshi Qi
Mengshi Qi, Edoardo Remelli, Mathieu Salzmann, Pascal Fua
Unsupervised Domain Adaptation with Temporal-Consistent Self-Training for 3D Hand-Object Joint Reconstruction
In submission
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will encounter at test time. Unfortunately, this is not always the case, and one often has to resort to training them on synthetic data, which does not guarantee that they will work well in real situations. In this paper, we introduce an effective approach to addressing this challenge by exploiting 3D geometric constraints within a cycle generative adversarial network (CycleGAN) to perform domain adaptation. Furthermore, in contrast to most existing works, which fail to leverage the rich temporal information available in unlabeled real videos as a source of supervision, we propose to enforce short- and long-term temporal consistency to fine-tune the domain-adapted model in a self-supervised fashion. We will demonstrate that our approach outperforms state-of-the-art 3D hand-object joint reconstruction methods on three widely-used benchmarks and will make our code publicly available.
[ { "created": "Mon, 21 Dec 2020 11:27:56 GMT", "version": "v1" } ]
2020-12-22
[ [ "Qi", "Mengshi", "" ], [ "Remelli", "Edoardo", "" ], [ "Salzmann", "Mathieu", "" ], [ "Fua", "Pascal", "" ] ]
Deep learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will encounter at test time. Unfortunately, this is not always the case, and one often has to resort to training them on synthetic data, which does not guarantee that they will work well in real situations. In this paper, we introduce an effective approach to addressing this challenge by exploiting 3D geometric constraints within a cycle generative adversarial network (CycleGAN) to perform domain adaptation. Furthermore, in contrast to most existing works, which fail to leverage the rich temporal information available in unlabeled real videos as a source of supervision, we propose to enforce short- and long-term temporal consistency to fine-tune the domain-adapted model in a self-supervised fashion. We will demonstrate that our approach outperforms state-of-the-art 3D hand-object joint reconstruction methods on three widely-used benchmarks and will make our code publicly available.
2105.13810
Benjamin Maschler
Benjamin Lindemann, Benjamin Maschler, Nada Sahlab, and Michael Weyrich
A Survey on Anomaly Detection for Technical Systems using LSTM Networks
14 pages, 6 figures, 4 tables. Accepted for publication by Computers in Industry
null
10.1016/j.compind.2021.103498
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Anomalies represent deviations from the intended system operation and can lead to decreased efficiency as well as partial or complete system failure. As the causes of anomalies are often unknown due to complex system dynamics, efficient anomaly detection is necessary. Conventional detection approaches rely on statistical and time-invariant methods that fail to address the complex and dynamic nature of anomalies. With advances in artificial intelligence and increasing importance for anomaly detection and prevention in various domains, artificial neural network approaches enable the detection of more complex anomaly types while considering temporal and contextual characteristics. In this article, a survey on state-of-the-art anomaly detection using deep neural and especially long short-term memory networks is conducted. The investigated approaches are evaluated based on the application scenario, data and anomaly types as well as further metrics. To highlight the potential of upcoming anomaly detection techniques, graph-based and transfer learning approaches are also included in the survey, enabling the analysis of heterogeneous data as well as compensating for its shortage and improving the handling of dynamic processes.
[ { "created": "Fri, 28 May 2021 13:24:40 GMT", "version": "v1" } ]
2021-08-31
[ [ "Lindemann", "Benjamin", "" ], [ "Maschler", "Benjamin", "" ], [ "Sahlab", "Nada", "" ], [ "Weyrich", "Michael", "" ] ]
Anomalies represent deviations from the intended system operation and can lead to decreased efficiency as well as partial or complete system failure. As the causes of anomalies are often unknown due to complex system dynamics, efficient anomaly detection is necessary. Conventional detection approaches rely on statistical and time-invariant methods that fail to address the complex and dynamic nature of anomalies. With advances in artificial intelligence and increasing importance for anomaly detection and prevention in various domains, artificial neural network approaches enable the detection of more complex anomaly types while considering temporal and contextual characteristics. In this article, a survey on state-of-the-art anomaly detection using deep neural and especially long short-term memory networks is conducted. The investigated approaches are evaluated based on the application scenario, data and anomaly types as well as further metrics. To highlight the potential of upcoming anomaly detection techniques, graph-based and transfer learning approaches are also included in the survey, enabling the analysis of heterogeneous data as well as compensating for its shortage and improving the handling of dynamic processes.
2112.09169
Weihao Tan
Weihao Tan, David Koleczek, Siddhant Pradhan, Nicholas Perello, Vivek Chettiar, Vishal Rohra, Aaslesha Rajaram, Soundararajan Srinivasan, H M Sajjad Hossain, Yash Chandak
On Optimizing Interventions in Shared Autonomy
Accepted by AAAI2022
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Shared autonomy refers to approaches for enabling an autonomous agent to collaborate with a human with the aim of improving human performance. However, besides improving performance, it may often also be beneficial that the agent concurrently accounts for preserving the user's experience or satisfaction of collaboration. In order to address this additional goal, we examine approaches for improving the user experience by constraining the number of interventions by the autonomous agent. We propose two model-free reinforcement learning methods that can account for both hard and soft constraints on the number of interventions. We show that not only does our method outperform the existing baseline, but also eliminates the need to manually tune a black-box hyperparameter for controlling the level of assistance. We also provide an in-depth analysis of intervention scenarios in order to further illuminate system understanding.
[ { "created": "Thu, 16 Dec 2021 19:37:28 GMT", "version": "v1" }, { "created": "Sat, 1 Jan 2022 00:51:46 GMT", "version": "v2" } ]
2022-01-04
[ [ "Tan", "Weihao", "" ], [ "Koleczek", "David", "" ], [ "Pradhan", "Siddhant", "" ], [ "Perello", "Nicholas", "" ], [ "Chettiar", "Vivek", "" ], [ "Rohra", "Vishal", "" ], [ "Rajaram", "Aaslesha", "" ], [ "Srinivasan", "Soundararajan", "" ], [ "Hossain", "H M Sajjad", "" ], [ "Chandak", "Yash", "" ] ]
Shared autonomy refers to approaches for enabling an autonomous agent to collaborate with a human with the aim of improving human performance. However, besides improving performance, it may often also be beneficial that the agent concurrently accounts for preserving the user's experience or satisfaction of collaboration. In order to address this additional goal, we examine approaches for improving the user experience by constraining the number of interventions by the autonomous agent. We propose two model-free reinforcement learning methods that can account for both hard and soft constraints on the number of interventions. We show that not only does our method outperform the existing baseline, but also eliminates the need to manually tune a black-box hyperparameter for controlling the level of assistance. We also provide an in-depth analysis of intervention scenarios in order to further illuminate system understanding.
1103.2690
Valentin Savin
Lam Pham Sy, Valentin Savin, David Declercq, Nghia Pham
Scheduled-PEG construction of LDPC codes for Upper-Layer FEC
WCC 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Progressive Edge Growth (PEG) algorithm is one of the most widely-used methods for constructing finite-length LDPC codes. In this paper we consider the PEG algorithm together with a scheduling distribution, which specifies the order in which edges are established in the graph. The goal is to find a scheduling distribution that yields "the best" performance in terms of decoding overhead, a performance metric specific to erasure codes and widely used for upper-layer forward error correction (UL-FEC). We rigorously formulate this optimization problem, and we show that it can be addressed by using genetic optimization algorithms. We also exhibit PEG codes with an optimized scheduling distribution, whose decoding overhead is less than half that of their classical-PEG counterparts.
[ { "created": "Mon, 14 Mar 2011 15:29:02 GMT", "version": "v1" } ]
2011-03-15
[ [ "Sy", "Lam Pham", "" ], [ "Savin", "Valentin", "" ], [ "Declercq", "David", "" ], [ "Pham", "Nghia", "" ] ]
The Progressive Edge Growth (PEG) algorithm is one of the most widely-used methods for constructing finite-length LDPC codes. In this paper we consider the PEG algorithm together with a scheduling distribution, which specifies the order in which edges are established in the graph. The goal is to find a scheduling distribution that yields "the best" performance in terms of decoding overhead, a performance metric specific to erasure codes and widely used for upper-layer forward error correction (UL-FEC). We rigorously formulate this optimization problem, and we show that it can be addressed by using genetic optimization algorithms. We also exhibit PEG codes with an optimized scheduling distribution, whose decoding overhead is less than half that of their classical-PEG counterparts.
2305.11882
Andrew Katz
Andrew Katz, Siqing Wei, Gaurav Nanda, Christopher Brinton, Matthew Ohland
Exploring the Efficacy of ChatGPT in Analyzing Student Teamwork Feedback with an Existing Taxonomy
22 pages, 7 tables, 1 figure
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Teamwork is a critical component of many academic and professional settings. In those contexts, feedback between team members is an important element to facilitate successful and sustainable teamwork. However, in the classroom, as the number of teams and team members and the frequency of evaluation increase, the volume of comments can become overwhelming for an instructor to read and track, making it difficult to identify patterns and areas for student improvement. To address this challenge, we explored the use of generative AI models, specifically ChatGPT, to analyze student comments in team-based learning contexts. Our study aimed to evaluate ChatGPT's ability to accurately identify topics in student comments based on an existing framework consisting of positive and negative comments. Our results suggest that ChatGPT can achieve over 90% accuracy in labeling student comments, providing a potentially valuable tool for analyzing feedback in team projects. This study contributes to the growing body of research on the use of AI models in educational contexts and highlights the potential of ChatGPT for facilitating analysis of student comments.
[ { "created": "Tue, 9 May 2023 19:55:50 GMT", "version": "v1" } ]
2023-05-23
[ [ "Katz", "Andrew", "" ], [ "Wei", "Siqing", "" ], [ "Nanda", "Gaurav", "" ], [ "Brinton", "Christopher", "" ], [ "Ohland", "Matthew", "" ] ]
Teamwork is a critical component of many academic and professional settings. In those contexts, feedback between team members is an important element to facilitate successful and sustainable teamwork. However, in the classroom, as the number of teams and team members and the frequency of evaluation increase, the volume of comments can become overwhelming for an instructor to read and track, making it difficult to identify patterns and areas for student improvement. To address this challenge, we explored the use of generative AI models, specifically ChatGPT, to analyze student comments in team-based learning contexts. Our study aimed to evaluate ChatGPT's ability to accurately identify topics in student comments based on an existing framework consisting of positive and negative comments. Our results suggest that ChatGPT can achieve over 90% accuracy in labeling student comments, providing a potentially valuable tool for analyzing feedback in team projects. This study contributes to the growing body of research on the use of AI models in educational contexts and highlights the potential of ChatGPT for facilitating analysis of student comments.
2305.14588
Daniel Simig
Sebastian Cadavid-Sanchez, Khalil Kacem, Rafael Aparecido Martins Frade, Johannes Boehm, Thomas Chaney, Danial Lashkari, Daniel Simig
Evaluating end-to-end entity linking on domain-specific knowledge bases: Learning about ancient technologies from museum collections
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
To study social, economic, and historical questions, researchers in the social sciences and humanities have started to use increasingly large unstructured textual datasets. While recent advances in NLP provide many tools to efficiently process such data, most existing approaches rely on generic solutions whose performance and suitability for domain-specific tasks is not well understood. This work presents an attempt to bridge this domain gap by exploring the use of modern Entity Linking approaches for the enrichment of museum collection data. We collect a dataset comprising more than 1,700 texts annotated with 7,510 mention-entity pairs, evaluate some off-the-shelf solutions in detail using this dataset, and finally fine-tune a recent end-to-end EL model on this data. We show that our fine-tuned model significantly outperforms other approaches currently available in this domain and present a proof-of-concept use case of this model. We release our dataset and our best model.
[ { "created": "Tue, 23 May 2023 23:53:58 GMT", "version": "v1" } ]
2023-05-25
[ [ "Cadavid-Sanchez", "Sebastian", "" ], [ "Kacem", "Khalil", "" ], [ "Frade", "Rafael Aparecido Martins", "" ], [ "Boehm", "Johannes", "" ], [ "Chaney", "Thomas", "" ], [ "Lashkari", "Danial", "" ], [ "Simig", "Daniel", "" ] ]
To study social, economic, and historical questions, researchers in the social sciences and humanities have started to use increasingly large unstructured textual datasets. While recent advances in NLP provide many tools to efficiently process such data, most existing approaches rely on generic solutions whose performance and suitability for domain-specific tasks is not well understood. This work presents an attempt to bridge this domain gap by exploring the use of modern Entity Linking approaches for the enrichment of museum collection data. We collect a dataset comprising more than 1,700 texts annotated with 7,510 mention-entity pairs, evaluate some off-the-shelf solutions in detail using this dataset, and finally fine-tune a recent end-to-end EL model on this data. We show that our fine-tuned model significantly outperforms other approaches currently available in this domain and present a proof-of-concept use case of this model. We release our dataset and our best model.
2212.02323
Alexander Razborov
Alexander Razborov
Improved Convergence Guarantees for Shallow Neural Networks
55 pages, 5 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We continue a long line of research aimed at proving convergence of depth 2 neural networks, trained via gradient descent, to a global minimum. Like in many previous works, our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances and network initialization, adversarial labels. It is more general in the sense that we allow both layers to be trained simultaneously and at {\em different} rates. Our results improve on the state of the art [Oymak Soltanolkotabi 20] (training the first layer only) and [Nguyen 21, Section 3.2] (training both layers with Le Cun's initialization). We also report several simple experiments with synthetic data. They strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the ``NTK regime''.
[ { "created": "Mon, 5 Dec 2022 14:47:52 GMT", "version": "v1" } ]
2022-12-06
[ [ "Razborov", "Alexander", "" ] ]
We continue a long line of research aimed at proving convergence of depth 2 neural networks, trained via gradient descent, to a global minimum. Like in many previous works, our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances and network initialization, adversarial labels. It is more general in the sense that we allow both layers to be trained simultaneously and at {\em different} rates. Our results improve on the state of the art [Oymak Soltanolkotabi 20] (training the first layer only) and [Nguyen 21, Section 3.2] (training both layers with Le Cun's initialization). We also report several simple experiments with synthetic data. They strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the ``NTK regime''.
2107.06400
Sergio Rojas-Galeano
Sergio Rojas-Galeano
Using BERT Encoding to Tackle the Mad-lib Attack in SMS Spam Detection
null
null
null
null
cs.CL cs.IR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
One of the stratagems used to deceive spam filters is to substitute vocables with synonyms or similar words that render the message unrecognisable by the detection algorithms. In this paper we investigate whether the recent development of language models sensitive to the semantics and context of words, such as Google's BERT, may be useful to overcome this adversarial attack (called "Mad-lib" as per the word substitution game). Using a dataset of 5572 SMS spam messages, we first established a baseline of detection performance using widely known document representation models (BoW and TFIDF) and the novel BERT model, coupled with a variety of classification algorithms (Decision Tree, kNN, SVM, Logistic Regression, Naive Bayes, Multilayer Perceptron). Then, we built a thesaurus of the vocabulary contained in these messages, and set up a Mad-lib attack experiment in which we modified each message of a held-out subset of data (not used in the baseline experiment) with different rates of substitution of original words with synonyms from the thesaurus. Lastly, we evaluated the detection performance of the three representation models (BoW, TFIDF and BERT) coupled with the best classifier from the baseline experiment (SVM). We found that the classic models achieved a 94% Balanced Accuracy (BA) in the original dataset, whereas the BERT model obtained 96%. On the other hand, the Mad-lib attack experiment showed that BERT encodings manage to maintain a similar BA performance of 96% with an average substitution rate of 1.82 words per message, and 95% with 3.34 words substituted per message. In contrast, the BA performance of the BoW and TFIDF encoders dropped to chance. These results hint at the potential advantage of BERT models to combat these types of ingenious attacks, compensating to some extent for the inappropriate use of semantic relationships in language.
[ { "created": "Tue, 13 Jul 2021 21:17:57 GMT", "version": "v1" } ]
2021-07-16
[ [ "Rojas-Galeano", "Sergio", "" ] ]
One of the stratagems used to deceive spam filters is to substitute vocables with synonyms or similar words that render the message unrecognisable by the detection algorithms. In this paper we investigate whether the recent development of language models sensitive to the semantics and context of words, such as Google's BERT, may be useful to overcome this adversarial attack (called "Mad-lib" as per the word substitution game). Using a dataset of 5572 SMS spam messages, we first established a baseline of detection performance using widely known document representation models (BoW and TFIDF) and the novel BERT model, coupled with a variety of classification algorithms (Decision Tree, kNN, SVM, Logistic Regression, Naive Bayes, Multilayer Perceptron). Then, we built a thesaurus of the vocabulary contained in these messages, and set up a Mad-lib attack experiment in which we modified each message of a held-out subset of data (not used in the baseline experiment) with different rates of substitution of original words with synonyms from the thesaurus. Lastly, we evaluated the detection performance of the three representation models (BoW, TFIDF and BERT) coupled with the best classifier from the baseline experiment (SVM). We found that the classic models achieved a 94% Balanced Accuracy (BA) in the original dataset, whereas the BERT model obtained 96%. On the other hand, the Mad-lib attack experiment showed that BERT encodings manage to maintain a similar BA performance of 96% with an average substitution rate of 1.82 words per message, and 95% with 3.34 words substituted per message. In contrast, the BA performance of the BoW and TFIDF encoders dropped to chance. These results hint at the potential advantage of BERT models to combat these types of ingenious attacks, compensating to some extent for the inappropriate use of semantic relationships in language.
2012.04780
Sarthak Dash
Sarthak Dash, Gaetano Rossiello, Nandana Mihindukulasooriya, Sugato Bagchi, Alfio Gliozzo
Open Knowledge Graphs Canonicalization using Variational Autoencoders
Accepted to EMNLP 2021
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Noun phrases and Relation phrases in open knowledge graphs are not canonicalized, leading to an explosion of redundant and ambiguous subject-relation-object triples. Existing approaches to solve this problem take a two-step approach. First, they generate embedding representations for both noun and relation phrases, then a clustering algorithm is used to group them using the embeddings as features. In this work, we propose Canonicalizing Using Variational Autoencoders (CUVA), a joint model to learn both embeddings and cluster assignments in an end-to-end approach, which leads to a better vector representation for the noun and relation phrases. Our evaluation over multiple benchmarks shows that CUVA outperforms the existing state-of-the-art approaches. Moreover, we introduce CanonicNell, a novel dataset to evaluate entity canonicalization systems.
[ { "created": "Tue, 8 Dec 2020 22:58:30 GMT", "version": "v1" }, { "created": "Mon, 27 Sep 2021 20:36:00 GMT", "version": "v2" } ]
2021-09-29
[ [ "Dash", "Sarthak", "" ], [ "Rossiello", "Gaetano", "" ], [ "Mihindukulasooriya", "Nandana", "" ], [ "Bagchi", "Sugato", "" ], [ "Gliozzo", "Alfio", "" ] ]
Noun phrases and Relation phrases in open knowledge graphs are not canonicalized, leading to an explosion of redundant and ambiguous subject-relation-object triples. Existing approaches to solve this problem take a two-step approach. First, they generate embedding representations for both noun and relation phrases, then a clustering algorithm is used to group them using the embeddings as features. In this work, we propose Canonicalizing Using Variational Autoencoders (CUVA), a joint model to learn both embeddings and cluster assignments in an end-to-end approach, which leads to a better vector representation for the noun and relation phrases. Our evaluation over multiple benchmarks shows that CUVA outperforms the existing state-of-the-art approaches. Moreover, we introduce CanonicNell, a novel dataset to evaluate entity canonicalization systems.
1609.04135
Junmo Sung
Junmo Sung and Brian L. Evans
Real-time testbed for diversity in powerline and wireless smart grid communications
6 pages, 5 figures, submitted to ICC 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-way communication is a key feature in a smart grid. It is enabled by either powerline communication or wireless communication technologies. Utilizing both technologies can potentially enhance communication reliability, and many diversity combining schemes have been proposed. In this paper, we propose a flexible real-time testbed to evaluate diversity combining schemes over physical channels. The testbed provides essential parts of physical layers on which both powerline and wireless communications operate. The contributions of this paper are 1) design and implementation of a real-time testbed for diversity of simultaneous powerline and wireless communications, 2) release of the setup information and complete source code for the testbed, and 3) performance evaluation of maximal ratio combining (MRC) on the testbed. As initial results, we show that performance of MRC from measurements obtained on the testbed over physical channels is very close to that in simulations in various test cases under a controlled laboratory environment.
[ { "created": "Wed, 14 Sep 2016 04:39:50 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2017 22:29:51 GMT", "version": "v2" }, { "created": "Sun, 29 Oct 2017 18:38:02 GMT", "version": "v3" } ]
2017-10-31
[ [ "Sung", "Junmo", "" ], [ "Evans", "Brian L.", "" ] ]
Two-way communication is a key feature in a smart grid. It is enabled by either powerline communication or wireless communication technologies. Utilizing both technologies can potentially enhance communication reliability, and many diversity combining schemes have been proposed. In this paper, we propose a flexible real-time testbed to evaluate diversity combining schemes over physical channels. The testbed provides essential parts of physical layers on which both powerline and wireless communications operate. The contributions of this paper are 1) design and implementation of a real-time testbed for diversity of simultaneous powerline and wireless communications, 2) release of the setup information and complete source code for the testbed, and 3) performance evaluation of maximal ratio combining (MRC) on the testbed. As initial results, we show that performance of MRC from measurements obtained on the testbed over physical channels is very close to that in simulations in various test cases under a controlled laboratory environment.
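As a hedged illustration of the combining scheme evaluated on the testbed (this is not the testbed's actual implementation), the sketch below applies maximal ratio combining to one symbol stream observed over two hypothetical branches — a "powerline" one and a "wireless" one — weighting each branch by the inverse of its assumed noise variance, with unit channel gains assumed:

```python
import random

def mrc_combine(received, noise_vars):
    """Maximal ratio combining: weight each diversity branch by 1/noise
    variance (unit channel gains assumed), then form the normalized sum."""
    weights = [1.0 / v for v in noise_vars]
    total = sum(weights)
    return [sum(w * branch[i] for w, branch in zip(weights, received)) / total
            for i in range(len(received[0]))]

# One BPSK symbol stream seen on two branches with hypothetical noise powers.
random.seed(0)
symbols = [random.choice([-1.0, 1.0]) for _ in range(1000)]
noise_vars = [0.5, 0.2]
received = [[s + random.gauss(0, v ** 0.5) for s in symbols]
            for v in noise_vars]

combined = mrc_combine(received, noise_vars)
errors = sum(1 for s, c in zip(symbols, combined) if (c > 0) != (s > 0))
per_branch = [sum(1 for s, r in zip(symbols, br) if (r > 0) != (s > 0))
              for br in received]
print(errors, per_branch)  # combined errors vs. per-branch errors
```

With independent noise, the combined post-detection SNR is the sum of the branch SNRs, so the combined stream should show fewer bit errors than either branch alone — which is the effect the testbed measures over physical channels.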
2105.06131
Mingyu Xiao
Junqiang Peng and Mingyu Xiao
Further Improvements for SAT in Terms of Formula Length
An initial version of this paper with a weaker result was presented at SAT 2021
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we prove that the general CNF satisfiability problem can be solved in $O^*(1.0638^L)$ time, where $L$ is the length of the input CNF formula (i.e., the total number of literals in the formula), which improves the previous result of $O^*(1.0652^L)$ obtained in 2009. Our algorithm is analyzed using the measure-and-conquer method. Our improvements are mainly attributed to the following two points: we carefully design branching rules to deal with degree-5 and degree-4 variables to avoid previous bottlenecks, and we show that some worst cases cannot always happen, which allows an amortized technique to yield further improvements. In our analyses, we provide some general frameworks for analysis and several lower bounds on the decrease of the measure to simplify the arguments. These techniques may be used to analyze more algorithms based on the measure-and-conquer method.
[ { "created": "Thu, 13 May 2021 08:15:56 GMT", "version": "v1" }, { "created": "Wed, 17 Aug 2022 15:02:02 GMT", "version": "v2" } ]
2022-08-18
[ [ "Peng", "Junqiang", "" ], [ "Xiao", "Mingyu", "" ] ]
In this paper, we prove that the general CNF satisfiability problem can be solved in $O^*(1.0638^L)$ time, where $L$ is the length of the input CNF formula (i.e., the total number of literals in the formula), which improves the previous result of $O^*(1.0652^L)$ obtained in 2009. Our algorithm is analyzed using the measure-and-conquer method. Our improvements are mainly attributed to the following two points: we carefully design branching rules to deal with degree-5 and degree-4 variables to avoid previous bottlenecks, and we show that some worst cases cannot always happen, which allows an amortized technique to yield further improvements. In our analyses, we provide some general frameworks for analysis and several lower bounds on the decrease of the measure to simplify the arguments. These techniques may be used to analyze more algorithms based on the measure-and-conquer method.
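The running-time bounds quoted in this abstract come from bounding the branching number of each branching rule: a rule that decreases the measure by $t_1, \dots, t_k$ in its $k$ branches yields a search tree of size $O^*(\tau^L)$, where $\tau > 1$ is the unique root of $1 = \sum_i \tau^{-t_i}$. A minimal sketch of computing this root by bisection (the branching vectors below are illustrative, not those analyzed in the paper):

```python
def branching_number(vector, tol=1e-12):
    """Return the root tau > 1 of 1 = sum(tau ** -t for t in vector).
    O*(tau^L) then bounds the search-tree size for measure L."""
    f = lambda tau: sum(tau ** -t for t in vector)
    lo, hi = 1.0, 2.0
    while f(hi) > 1.0:  # widen the bracket until the root lies in [lo, hi]
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 1.0:  # f is decreasing in tau, so the root is above mid
            lo = mid
        else:
            hi = mid
    return hi

print(round(branching_number((1, 1)), 4))  # classic (1, 1) branch -> 2.0
print(round(branching_number((5, 5)), 4))  # a (5, 5) branch -> 2**(1/5)
```

In a measure-and-conquer analysis, the overall bound is governed by the worst branching number over all rules; the paper's $1.0638^L$ corresponds to keeping every rule's $\tau$ at or below 1.0638.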
cs/0611165
Valmir Barbosa
Ricardo C. Correa, Valmir C. Barbosa
Partially ordered distributed computations on asynchronous point-to-point networks
null
Parallel Computing 35 (2009), 12-28
10.1016/j.parco.2008.09.011
null
cs.DC
null
Asynchronous executions of a distributed algorithm differ from each other due to the nondeterminism in the order in which the messages exchanged are handled. In many situations of interest, the asynchronous executions induced by restricting nondeterminism are more efficient, in an application-specific sense, than the others. In this work, we define partially ordered executions of a distributed algorithm as the executions satisfying some restricted orders of their actions in two different frameworks, those of the so-called event- and pulse-driven computations. The aim of these restrictions is to characterize asynchronous executions that are likely to be more efficient for some important classes of applications. Also, an asynchronous algorithm that ensures the occurrence of partially ordered executions is given for each case. Two of the applications that we believe may benefit from the restricted nondeterminism are backtrack search, in the event-driven case, and iterative algorithms for systems of linear equations, in the pulse-driven case.
[ { "created": "Thu, 30 Nov 2006 13:01:36 GMT", "version": "v1" } ]
2016-11-11
[ [ "Correa", "Ricardo C.", "" ], [ "Barbosa", "Valmir C.", "" ] ]
Asynchronous executions of a distributed algorithm differ from each other due to the nondeterminism in the order in which the messages exchanged are handled. In many situations of interest, the asynchronous executions induced by restricting nondeterminism are more efficient, in an application-specific sense, than the others. In this work, we define partially ordered executions of a distributed algorithm as the executions satisfying some restricted orders of their actions in two different frameworks, those of the so-called event- and pulse-driven computations. The aim of these restrictions is to characterize asynchronous executions that are likely to be more efficient for some important classes of applications. Also, an asynchronous algorithm that ensures the occurrence of partially ordered executions is given for each case. Two of the applications that we believe may benefit from the restricted nondeterminism are backtrack search, in the event-driven case, and iterative algorithms for systems of linear equations, in the pulse-driven case.
1802.06852
Zeeshan Bhatti
Zeeshan Bhatti, Ahsan Abro, Abdul Rehman Gillal, Mostafa Karbasi
Be-Educated: Multimedia Learning through 3D Animation
10 pages, 32 figures
INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND EMERGING TECHNOLOGIES,(IJCET)- VOL1(1) DECEMBER 2017- 13-22
null
null
cs.GR cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multimedia learning tools and techniques are gaining importance on a large scale in the education sector. With the help of multimedia learning, various complex phenomena and theories can be explained and taught easily and conveniently. This project aims to teach and spread the importance of education and of respecting the tools of education: pen, paper, pencil, and rubber. To achieve this cognitive learning, a 3D animated movie has been developed using principles of multimedia learning, with 3D cartoon characters resembling actual educational objects; the buildings have also been modelled to resemble real books and diaries. For modelling and animating these characters, polygon mesh tools are used in 3D Studio Max. Additionally, the final composition of video and audio is performed in Adobe Premiere. This 3D animated video aims to highlight the importance of education and stationery. The moral of the movie is: do not waste your stationery; use your pen and paper for the purpose they are made for. To be a good citizen you have to Be-Educated, and for that you need to value your pen. The final rendered and composited 3D animated video reflects this moral and portrays the intended message with very vibrant visuals.
[ { "created": "Mon, 19 Feb 2018 21:08:50 GMT", "version": "v1" } ]
2018-02-21
[ [ "Bhatti", "Zeeshan", "" ], [ "Abro", "Ahsan", "" ], [ "Gillal", "Abdul Rehman", "" ], [ "Karbasi", "Mostafa", "" ] ]
Multimedia learning tools and techniques are gaining importance on a large scale in the education sector. With the help of multimedia learning, various complex phenomena and theories can be explained and taught easily and conveniently. This project aims to teach and spread the importance of education and of respecting the tools of education: pen, paper, pencil, and rubber. To achieve this cognitive learning, a 3D animated movie has been developed using principles of multimedia learning, with 3D cartoon characters resembling actual educational objects; the buildings have also been modelled to resemble real books and diaries. For modelling and animating these characters, polygon mesh tools are used in 3D Studio Max. Additionally, the final composition of video and audio is performed in Adobe Premiere. This 3D animated video aims to highlight the importance of education and stationery. The moral of the movie is: do not waste your stationery; use your pen and paper for the purpose they are made for. To be a good citizen you have to Be-Educated, and for that you need to value your pen. The final rendered and composited 3D animated video reflects this moral and portrays the intended message with very vibrant visuals.
2005.01317
Jicong Fan
Jicong Fan, Chengrun Yang, Madeleine Udell
Robust Non-Linear Matrix Factorization for Dictionary Learning, Denoising, and Clustering
null
IEEE Transactions on Signal Processing 69, 1755-1770 (2021)
10.1109/TSP.2021.3062988
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low dimensional nonlinear structure abounds in datasets across computer vision and machine learning. Kernelized matrix factorization techniques have recently been proposed to learn these nonlinear structures for denoising, classification, dictionary learning, and missing data imputation, by observing that the image of the matrix in a sufficiently large feature space is low-rank. However, these nonlinear methods fail in the presence of sparse noise or outliers. In this work, we propose a new robust nonlinear factorization method called Robust Non-Linear Matrix Factorization (RNLMF). RNLMF constructs a dictionary for the data space by factoring a kernelized feature space; a noisy matrix can then be decomposed as the sum of a sparse noise matrix and a clean data matrix that lies in a low dimensional nonlinear manifold. RNLMF is robust to sparse noise and outliers and scales to matrices with thousands of rows and columns. Empirically, RNLMF achieves noticeable improvements over baseline methods in denoising and clustering.
[ { "created": "Mon, 4 May 2020 08:32:21 GMT", "version": "v1" }, { "created": "Wed, 2 Dec 2020 08:51:54 GMT", "version": "v2" } ]
2021-06-01
[ [ "Fan", "Jicong", "" ], [ "Yang", "Chengrun", "" ], [ "Udell", "Madeleine", "" ] ]
Low dimensional nonlinear structure abounds in datasets across computer vision and machine learning. Kernelized matrix factorization techniques have recently been proposed to learn these nonlinear structures for denoising, classification, dictionary learning, and missing data imputation, by observing that the image of the matrix in a sufficiently large feature space is low-rank. However, these nonlinear methods fail in the presence of sparse noise or outliers. In this work, we propose a new robust nonlinear factorization method called Robust Non-Linear Matrix Factorization (RNLMF). RNLMF constructs a dictionary for the data space by factoring a kernelized feature space; a noisy matrix can then be decomposed as the sum of a sparse noise matrix and a clean data matrix that lies in a low dimensional nonlinear manifold. RNLMF is robust to sparse noise and outliers and scales to matrices with thousands of rows and columns. Empirically, RNLMF achieves noticeable improvements over baseline methods in denoising and clustering.
1105.5459
T. Hogg
T. Hogg
Solving Highly Constrained Search Problems with Quantum Computers
null
Journal Of Artificial Intelligence Research, Volume 10, pages 39-66, 1999
10.1613/jair.574
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.
[ { "created": "Fri, 27 May 2011 01:52:46 GMT", "version": "v1" } ]
2011-05-30
[ [ "Hogg", "T.", "" ] ]
A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms.
2401.09628
Leello Dadi
Leello Dadi, Ioannis Panageas, Stratis Skoulakis, Luca Viano, and Volkan Cevher
Polynomial Convergence of Bandit No-Regret Dynamics in Congestion Games
null
null
null
null
cs.GT
http://creativecommons.org/licenses/by/4.0/
We introduce an online learning algorithm in the bandit feedback model that, once adopted by all agents of a congestion game, results in game-dynamics that converge to an $\epsilon$-approximate Nash Equilibrium in a polynomial number of rounds with respect to $1/\epsilon$, the number of players and the number of available resources. The proposed algorithm also guarantees sublinear regret to any agent adopting it. As a result, our work answers an open question from arXiv:2206.01880 and extends the recent results of arXiv:2306.15543 to the bandit feedback model. We additionally establish that our online learning algorithm can be implemented in polynomial time for the important special case of Network Congestion Games on Directed Acyclic Graphs (DAG) by constructing an exact $1$-barycentric spanner for DAGs.
[ { "created": "Wed, 17 Jan 2024 22:37:31 GMT", "version": "v1" } ]
2024-01-19
[ [ "Dadi", "Leello", "" ], [ "Panageas", "Ioannis", "" ], [ "Skoulakis", "Stratis", "" ], [ "Viano", "Luca", "" ], [ "Cevher", "Volkan", "" ] ]
We introduce an online learning algorithm in the bandit feedback model that, once adopted by all agents of a congestion game, results in game-dynamics that converge to an $\epsilon$-approximate Nash Equilibrium in a polynomial number of rounds with respect to $1/\epsilon$, the number of players and the number of available resources. The proposed algorithm also guarantees sublinear regret to any agent adopting it. As a result, our work answers an open question from arXiv:2206.01880 and extends the recent results of arXiv:2306.15543 to the bandit feedback model. We additionally establish that our online learning algorithm can be implemented in polynomial time for the important special case of Network Congestion Games on Directed Acyclic Graphs (DAG) by constructing an exact $1$-barycentric spanner for DAGs.
1609.01614
Mao Zheng
Mao Zheng, Qian Xu and Hao Fan
Modeling The Adaption Rule in Context-aware Systems
11 pages, 4 tables, 7 figures
International Journal of Ad hoc, Sensor & Ubiquitous Computing, Vol.7, No.3/4, August 2016, ISSN : 0976 - 1764 (Online); 0976 - 2205 (Print),
10.5121/ijasuc.2016.7401
null
cs.HC cs.ET
http://creativecommons.org/licenses/by-nc-sa/4.0/
Context awareness is increasingly gaining applicability in interactive, ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors for reacting to context modifications. This paper is concerned with context modeling and the development methodology for context-aware systems. We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems. We illustrate this idea with an arithmetic game application.
[ { "created": "Tue, 6 Sep 2016 15:42:56 GMT", "version": "v1" } ]
2016-09-07
[ [ "Zheng", "Mao", "" ], [ "Xu", "Qian", "" ], [ "Fan", "Hao", "" ] ]
Context awareness is increasingly gaining applicability in interactive, ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors for reacting to context modifications. This paper is concerned with context modeling and the development methodology for context-aware systems. We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems. We illustrate this idea with an arithmetic game application.
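As a hedged sketch of the adaption-tree idea (the tree, context attributes, and actions below are hypothetical, not the paper's model), an adaption rule can be evaluated by walking a decision tree over context attributes until an action leaf is reached:

```python
# Hypothetical adaption tree for an arithmetic-game app: each internal node
# tests one context attribute; each leaf names the adapted behaviour.
ADAPTION_TREE = {
    "attribute": "network",
    "branches": {
        "offline": {"action": "cache questions locally"},
        "online": {
            "attribute": "battery",
            "branches": {
                "low": {"action": "disable animations"},
                "normal": {"action": "full experience"},
            },
        },
    },
}

def adapt(tree, context):
    """Walk the adaption tree, following the branch matching each context
    attribute's value, until a leaf action is reached."""
    while "action" not in tree:
        value = context[tree["attribute"]]
        tree = tree["branches"][value]
    return tree["action"]

print(adapt(ADAPTION_TREE, {"network": "online", "battery": "low"}))
```

Encoding the rules as a tree keeps each context attribute's test in one place, so adding a new context value only touches one branch.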
2002.06086
Birgitta Dresp-Langley
Birgitta Dresp-Langley
Artificial Intelligence, connected products, virtual reality: potential impacts on consumer safety in terms of their physical and psychological ability or well-being
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the progressive digitalisation of a majority of services to communities and individuals, humankind is facing new challenges. While energy sources are rapidly dwindling and rigorous choices have to be made to ensure the sustainability of our environment, there is increasing concern in science and society about the safety of connected products and technology for the individual user. This essay provides a first basis for further inquiry into the risks, in terms of potentially negative short- and long-term effects, of connected technologies and massive digitalisation on the psychological and/or physical abilities and well-being of users or consumers.
[ { "created": "Fri, 14 Feb 2020 15:43:20 GMT", "version": "v1" } ]
2020-02-17
[ [ "Dresp-Langley", "Birgitta", "" ] ]
With the progressive digitalisation of a majority of services to communities and individuals, humankind is facing new challenges. While energy sources are rapidly dwindling and rigorous choices have to be made to ensure the sustainability of our environment, there is increasing concern in science and society about the safety of connected products and technology for the individual user. This essay provides a first basis for further inquiry into the risks, in terms of potentially negative short- and long-term effects, of connected technologies and massive digitalisation on the psychological and/or physical abilities and well-being of users or consumers.
1912.08494
Sameen Maruf
Sameen Maruf, Fahimeh Saleh and Gholamreza Haffari
A Survey on Document-level Neural Machine Translation: Methods and Evaluation
Accepted for publication by ACM Computing Surveys
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine translation (MT) is an important task in natural language processing (NLP) as it automates the translation process and reduces the reliance on human translators. With the resurgence of neural networks, the translation quality surpasses that of the translations obtained using statistical techniques for most language-pairs. Up until a few years ago, almost all of the neural translation models translated sentences independently, without incorporating the wider document-context and inter-dependencies among the sentences. The aim of this survey paper is to highlight the major works that have been undertaken in the space of document-level machine translation after the neural revolution, so that researchers can recognise the current state and future directions of this field. We provide an organisation of the literature based on novelties in modelling and architectures as well as training and decoding strategies. In addition, we cover evaluation strategies that have been introduced to account for the improvements in document MT, including automatic metrics and discourse-targeted test sets. We conclude by presenting possible avenues for future exploration in this research field.
[ { "created": "Wed, 18 Dec 2019 10:07:20 GMT", "version": "v1" }, { "created": "Sun, 11 Oct 2020 23:10:22 GMT", "version": "v2" }, { "created": "Wed, 13 Jan 2021 00:31:53 GMT", "version": "v3" } ]
2021-01-14
[ [ "Maruf", "Sameen", "" ], [ "Saleh", "Fahimeh", "" ], [ "Haffari", "Gholamreza", "" ] ]
Machine translation (MT) is an important task in natural language processing (NLP) as it automates the translation process and reduces the reliance on human translators. With the resurgence of neural networks, the translation quality surpasses that of the translations obtained using statistical techniques for most language-pairs. Up until a few years ago, almost all of the neural translation models translated sentences independently, without incorporating the wider document-context and inter-dependencies among the sentences. The aim of this survey paper is to highlight the major works that have been undertaken in the space of document-level machine translation after the neural revolution, so that researchers can recognise the current state and future directions of this field. We provide an organisation of the literature based on novelties in modelling and architectures as well as training and decoding strategies. In addition, we cover evaluation strategies that have been introduced to account for the improvements in document MT, including automatic metrics and discourse-targeted test sets. We conclude by presenting possible avenues for future exploration in this research field.
2010.00104
Pritam Sarkar
Pritam Sarkar, Ali Etemad
CardioGAN: Attentive Generative Adversarial Network with Dual Discriminators for Synthesis of ECG from PPG
Accepted in AAAI 2021
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Electrocardiogram (ECG) is the electrical measurement of cardiac activity, whereas Photoplethysmogram (PPG) is the optical measurement of volumetric changes in blood circulation. While both signals are used for heart rate monitoring, from a medical perspective, ECG is more useful as it carries additional cardiac information. Despite many attempts toward incorporating ECG sensing in smartwatches or similar wearable devices for continuous and reliable cardiac monitoring, PPG sensors are the main feasible sensing solution available. In order to tackle this problem, we propose CardioGAN, an adversarial model which takes PPG as input and generates ECG as output. The proposed network utilizes an attention-based generator to learn local salient features, as well as dual discriminators to preserve the integrity of generated data in both time and frequency domains. Our experiments show that the ECG generated by CardioGAN provides more reliable heart rate measurements compared to the original input PPG, reducing the error from 9.74 beats per minute (measured from the PPG) to 2.89 (measured from the generated ECG).
[ { "created": "Wed, 30 Sep 2020 20:49:30 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 05:51:03 GMT", "version": "v2" } ]
2020-12-16
[ [ "Sarkar", "Pritam", "" ], [ "Etemad", "Ali", "" ] ]
Electrocardiogram (ECG) is the electrical measurement of cardiac activity, whereas Photoplethysmogram (PPG) is the optical measurement of volumetric changes in blood circulation. While both signals are used for heart rate monitoring, from a medical perspective, ECG is more useful as it carries additional cardiac information. Despite many attempts toward incorporating ECG sensing in smartwatches or similar wearable devices for continuous and reliable cardiac monitoring, PPG sensors are the main feasible sensing solution available. In order to tackle this problem, we propose CardioGAN, an adversarial model which takes PPG as input and generates ECG as output. The proposed network utilizes an attention-based generator to learn local salient features, as well as dual discriminators to preserve the integrity of generated data in both time and frequency domains. Our experiments show that the ECG generated by CardioGAN provides more reliable heart rate measurements compared to the original input PPG, reducing the error from 9.74 beats per minute (measured from the PPG) to 2.89 (measured from the generated ECG).
1307.4468
EPTCS
Mark Reynolds (The University of Western Australia)
A Faster Tableau for CTL*
In Proceedings GandALF 2013, arXiv:1307.4162
EPTCS 119, 2013, pp. 50-63
10.4204/EPTCS.119.7
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There have been several recent suggestions for tableau systems for deciding satisfiability in the practically important branching-time temporal logic known as CTL*. In this paper we present a streamlined and more traditional tableau approach built upon the author's earlier theoretical work. Soundness and completeness results are proved. A prototype implementation demonstrates the significantly improved performance of the new approach on a range of test formulas. We also see that it compares favourably to state-of-the-art game- and automata-based decision procedures.
[ { "created": "Wed, 17 Jul 2013 01:41:36 GMT", "version": "v1" } ]
2013-07-18
[ [ "Reynolds", "Mark", "", "The University of Western Australia" ] ]
There have been several recent suggestions for tableau systems for deciding satisfiability in the practically important branching-time temporal logic known as CTL*. In this paper we present a streamlined and more traditional tableau approach built upon the author's earlier theoretical work. Soundness and completeness results are proved. A prototype implementation demonstrates the significantly improved performance of the new approach on a range of test formulas. We also see that it compares favourably to state-of-the-art game- and automata-based decision procedures.
1210.2123
Flavio Calmon
Flavio du Pin Calmon, Nadia Fawaz
Privacy Against Statistical Inference
Allerton 2012, 8 pages
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
[ { "created": "Mon, 8 Oct 2012 01:07:48 GMT", "version": "v1" } ]
2012-10-09
[ [ "Calmon", "Flavio du Pin", "" ], [ "Fawaz", "Nadia", "" ] ]
We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
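A minimal sketch of the "average information leakage" idea, measured here as the mutual information $I(S;Y)$ between the private data $S$ and the released output $Y$, computed from a small joint distribution (the release mechanism below is hypothetical):

```python
from math import log2

def mutual_information(joint):
    """I(S;Y) in bits from a joint pmf given as {(s, y): probability}."""
    ps, py = {}, {}
    for (s, y), p in joint.items():  # accumulate the two marginals
        ps[s] = ps.get(s, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (ps[s] * py[y]))
               for (s, y), p in joint.items() if p > 0)

# Hypothetical mechanism: Y reveals S through a noisy binary channel.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(round(mutual_information(joint), 4))  # -> 0.2781 bits leaked
```

The privacy-utility design problem in the paper then amounts to choosing the mapping from $S$ to $Y$ (the joint distribution above) to minimize this leakage subject to utility constraints, which the authors cast as a convex program.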
2011.13485
Zhiyang He
Zhiyang He, Jason Li, Magnus Wahlstr\"om
Near-linear-time, Optimal Vertex Cut Sparsifiers in Directed Acyclic Graphs
null
null
null
null
cs.DS math.CO
http://creativecommons.org/licenses/by/4.0/
Let $G$ be a graph and $S, T \subseteq V(G)$ be (possibly overlapping) sets of terminals, $|S|=|T|=k$. We are interested in computing a vertex sparsifier for terminal cuts in $G$, i.e., a graph $H$ on a smallest possible number of vertices, where $S \cup T \subseteq V(H)$ and such that for every $A \subseteq S$ and $B \subseteq T$ the size of a minimum $(A,B)$-vertex cut is the same in $G$ as in $H$. We assume that our graphs are unweighted and that terminals may be part of the min-cut. In previous work, Kratsch and Wahlstr\"om (FOCS 2012/JACM 2020) used connections to matroid theory to show that a vertex sparsifier $H$ with $O(k^3)$ vertices can be computed in randomized polynomial time, even for arbitrary digraphs $G$. However, since then, no improvements on the size $O(k^3)$ have been shown. In this paper, we draw inspiration from the renowned Bollob\'as's Two-Families Theorem in extremal combinatorics and introduce the use of total orderings into Kratsch and Wahlstr\"om's methods. This new perspective allows us to construct a sparsifier $H$ of $\Theta(k^2)$ vertices for the case that $G$ is a DAG. We also show how to compute $H$ in time near-linear in the size of $G$, improving on the previous $O(n^{\omega+1})$. Furthermore, $H$ recovers the closest min-cut in $G$ for every partition $(A,B)$, which was not previously known. Finally, we show that a sparsifier of size $\Omega(k^2)$ is required, both for DAGs and for undirected edge cuts.
[ { "created": "Thu, 26 Nov 2020 22:39:34 GMT", "version": "v1" }, { "created": "Sat, 3 Jul 2021 05:19:57 GMT", "version": "v2" } ]
2021-07-06
[ [ "He", "Zhiyang", "" ], [ "Li", "Jason", "" ], [ "Wahlström", "Magnus", "" ] ]
Let $G$ be a graph and $S, T \subseteq V(G)$ be (possibly overlapping) sets of terminals, $|S|=|T|=k$. We are interested in computing a vertex sparsifier for terminal cuts in $G$, i.e., a graph $H$ on a smallest possible number of vertices, where $S \cup T \subseteq V(H)$ and such that for every $A \subseteq S$ and $B \subseteq T$ the size of a minimum $(A,B)$-vertex cut is the same in $G$ as in $H$. We assume that our graphs are unweighted and that terminals may be part of the min-cut. In previous work, Kratsch and Wahlstr\"om (FOCS 2012/JACM 2020) used connections to matroid theory to show that a vertex sparsifier $H$ with $O(k^3)$ vertices can be computed in randomized polynomial time, even for arbitrary digraphs $G$. However, since then, no improvements on the size $O(k^3)$ have been shown. In this paper, we draw inspiration from the renowned Bollob\'as's Two-Families Theorem in extremal combinatorics and introduce the use of total orderings into Kratsch and Wahlstr\"om's methods. This new perspective allows us to construct a sparsifier $H$ of $\Theta(k^2)$ vertices for the case that $G$ is a DAG. We also show how to compute $H$ in time near-linear in the size of $G$, improving on the previous $O(n^{\omega+1})$. Furthermore, $H$ recovers the closest min-cut in $G$ for every partition $(A,B)$, which was not previously known. Finally, we show that a sparsifier of size $\Omega(k^2)$ is required, both for DAGs and for undirected edge cuts.
1009.5398
Ali Reza Manashty
Ali Reza Manashty, Amir Rajabzadeh and Zahra Forootan Jahromi
A Scenario-Based Mobile Application for Robot-Assisted Smart Digital Homes
8 pages, 8 figures, IEEE Publication format, Keywords- smart homes; mobile applications; remote home controls; automated digital homes; robot assisted at home; general packet radio service (GPRS); short message system (SMS); scenario based smart home
International Journal of Computer Science and Information Security (IJCSIS), Vol. 8, No. 5, August 2010, ISSN 1947-5500, Pages 89-96
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart homes are becoming more popular, as every day a new home appliance can be digitally controlled. Smart Digital Homes use a server so that all available devices can be controlled in one place, from a computer or a web page. In this paper we designed and implemented a mobile application on the Windows Mobile platform that can connect to the controlling server of a Smart Home and grants access to the Smart Home devices and robots wherever possible. UML diagrams are presented to illustrate the application design process. Robots are also considered as devices that are able to interact with other objects and devices. Scenarios are defined as sets of sequential actions that help manage different tasks in one place. The mobile application can connect to the server using GPRS mobile internet and the Short Message System (SMS). An interactive home map is also designed for easier status checking and interaction with the devices from a mobile phone.
[ { "created": "Mon, 27 Sep 2010 20:58:05 GMT", "version": "v1" } ]
2010-09-29
[ [ "Manashty", "Ali Reza", "" ], [ "Rajabzadeh", "Amir", "" ], [ "Jahromi", "Zahra Forootan", "" ] ]
Smart homes are becoming more popular, as every day a new home appliance can be digitally controlled. Smart Digital Homes use a server so that all available devices can be controlled in one place, from a computer or a web page. In this paper we designed and implemented a mobile application on the Windows Mobile platform that can connect to the controlling server of a Smart Home and grants access to the Smart Home devices and robots wherever possible. UML diagrams are presented to illustrate the application design process. Robots are also considered as devices that are able to interact with other objects and devices. Scenarios are defined as sets of sequential actions that help manage different tasks in one place. The mobile application can connect to the server using GPRS mobile internet and the Short Message System (SMS). An interactive home map is also designed for easier status checking and interaction with the devices from a mobile phone.
2012.06961
Jiashuo Jiang
Jiashuo Jiang, Xiaocheng Li, Jiawei Zhang
Online Stochastic Optimization with Wasserstein Based Non-stationarity
null
null
null
null
cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
We consider a general online stochastic optimization problem with multiple budget constraints over a horizon of finite time periods. In each time period, a reward function and multiple cost functions are revealed, and the decision maker needs to specify an action from a convex and compact action set to collect the reward and consume the budget. Each cost function corresponds to the consumption of one budget. In each period, the reward and cost functions are drawn from an unknown distribution, which is non-stationary across time. The objective of the decision maker is to maximize the cumulative reward subject to the budget constraints. This formulation captures a wide range of applications including online linear programming and network revenue management, among others. In this paper, we consider two settings: (i) a data-driven setting where the true distribution is unknown but a prior estimate (possibly inaccurate) is available; (ii) an uninformative setting where the true distribution is completely unknown. We propose a unified Wasserstein-distance based measure to quantify the inaccuracy of the prior estimate in setting (i) and the non-stationarity of the system in setting (ii). We show that the proposed measure leads to a necessary and sufficient condition for the attainability of a sublinear regret in both settings. For setting (i), we propose a new algorithm, which takes a primal-dual perspective and integrates the prior information of the underlying distributions into an online gradient descent procedure in the dual space. The algorithm also naturally extends to the uninformative setting (ii). Under both settings, we show the corresponding algorithm achieves a regret of optimal order. In numerical experiments, we demonstrate how the proposed algorithms can be naturally integrated with the re-solving technique to further boost the empirical performance.
[ { "created": "Sun, 13 Dec 2020 04:47:37 GMT", "version": "v1" }, { "created": "Wed, 23 Dec 2020 06:24:13 GMT", "version": "v2" }, { "created": "Mon, 25 Jul 2022 00:17:45 GMT", "version": "v3" } ]
2022-07-26
[ [ "Jiang", "Jiashuo", "" ], [ "Li", "Xiaocheng", "" ], [ "Zhang", "Jiawei", "" ] ]
We consider a general online stochastic optimization problem with multiple budget constraints over a horizon of finite time periods. In each time period, a reward function and multiple cost functions are revealed, and the decision maker needs to specify an action from a convex and compact action set to collect the reward and consume the budget. Each cost function corresponds to the consumption of one budget. In each period, the reward and cost functions are drawn from an unknown distribution, which is non-stationary across time. The objective of the decision maker is to maximize the cumulative reward subject to the budget constraints. This formulation captures a wide range of applications including online linear programming and network revenue management, among others. In this paper, we consider two settings: (i) a data-driven setting where the true distribution is unknown but a prior estimate (possibly inaccurate) is available; (ii) an uninformative setting where the true distribution is completely unknown. We propose a unified Wasserstein-distance based measure to quantify the inaccuracy of the prior estimate in setting (i) and the non-stationarity of the system in setting (ii). We show that the proposed measure leads to a necessary and sufficient condition for the attainability of a sublinear regret in both settings. For setting (i), we propose a new algorithm, which takes a primal-dual perspective and integrates the prior information of the underlying distributions into an online gradient descent procedure in the dual space. The algorithm also naturally extends to the uninformative setting (ii). Under both settings, we show the corresponding algorithm achieves a regret of optimal order. In numerical experiments, we demonstrate how the proposed algorithms can be naturally integrated with the re-solving technique to further boost the empirical performance.
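The primal-dual mechanism described above — maintain a dual price on the budget and run gradient descent in the dual space — can be illustrated on a one-budget toy instance. This is a generic sketch of the algorithm family the paper builds on, not its Wasserstein-based method; the function name, step size, and update rule are assumptions for illustration.

```python
def dual_descent_allocation(rewards, costs, budget, eta=1.0):
    """Toy primal-dual online allocation: keep a dual price mu on the
    budget, accept a request when its reward beats the priced cost, and
    update mu by a projected online gradient step toward the target
    per-period spend rate. Illustrative only -- not the paper's algorithm."""
    T = len(rewards)
    rho = budget / T                    # target per-period consumption
    mu, remaining, accepted = 0.0, budget, []
    for t, (r, c) in enumerate(zip(rewards, costs)):
        x = 1 if (r - mu * c > 0 and c <= remaining) else 0
        if x:
            remaining -= c
            accepted.append(t)
        mu = max(0.0, mu + eta * (c * x - rho))   # projected dual step
    return accepted
```

With rewards [3.0, 0.1, 3.0, 0.1], unit costs, and budget 2, the price rises after each acceptance, so the algorithm spends its budget on the two high-reward periods.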
2403.17532
Yilin Wang
Yilin Wang, Minghao Hu, Zhen Huang, Dongsheng Li, Dong Yang, Xicheng Lu
KC-GenRe: A Knowledge-constrained Generative Re-ranking Method Based on Large Language Models for Knowledge Graph Completion
This paper has been accepted for publication in the proceedings of LREC-COLING 2024
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of knowledge graph completion (KGC) is to predict missing facts among entities. Previous methods for KGC re-ranking are mostly built on non-generative language models to obtain the probability of each candidate. Recently, generative large language models (LLMs) have shown outstanding performance on several tasks such as information extraction and dialog systems. Using them for KGC re-ranking makes it possible to leverage their extensive pre-trained knowledge and powerful generative capabilities. However, they may encounter new problems when accomplishing the task, namely mismatch, misordering and omission. To this end, we introduce KC-GenRe, a knowledge-constrained generative re-ranking method based on LLMs for KGC. To overcome the mismatch issue, we formulate the KGC re-ranking task as a candidate identifier sorting generation problem implemented by generative LLMs. To tackle the misordering issue, we develop a knowledge-guided interactive training method that enhances the identification and ranking of candidates. To address the omission issue, we design a knowledge-augmented constrained inference method that enables contextual prompting and controlled generation, so as to obtain valid rankings. Experimental results show that KC-GenRe achieves state-of-the-art performance on four datasets, with gains of up to 6.7% and 7.7% in the MRR and Hits@1 metrics compared to previous methods, and 9.0% and 11.1% compared to that without re-ranking. Extensive analysis demonstrates the effectiveness of the components in KC-GenRe.
[ { "created": "Tue, 26 Mar 2024 09:36:59 GMT", "version": "v1" } ]
2024-03-27
[ [ "Wang", "Yilin", "" ], [ "Hu", "Minghao", "" ], [ "Huang", "Zhen", "" ], [ "Li", "Dongsheng", "" ], [ "Yang", "Dong", "" ], [ "Lu", "Xicheng", "" ] ]
The goal of knowledge graph completion (KGC) is to predict missing facts among entities. Previous methods for KGC re-ranking are mostly built on non-generative language models to obtain the probability of each candidate. Recently, generative large language models (LLMs) have shown outstanding performance on several tasks such as information extraction and dialog systems. Using them for KGC re-ranking makes it possible to leverage their extensive pre-trained knowledge and powerful generative capabilities. However, they may encounter new problems when accomplishing the task, namely mismatch, misordering and omission. To this end, we introduce KC-GenRe, a knowledge-constrained generative re-ranking method based on LLMs for KGC. To overcome the mismatch issue, we formulate the KGC re-ranking task as a candidate identifier sorting generation problem implemented by generative LLMs. To tackle the misordering issue, we develop a knowledge-guided interactive training method that enhances the identification and ranking of candidates. To address the omission issue, we design a knowledge-augmented constrained inference method that enables contextual prompting and controlled generation, so as to obtain valid rankings. Experimental results show that KC-GenRe achieves state-of-the-art performance on four datasets, with gains of up to 6.7% and 7.7% in the MRR and Hits@1 metrics compared to previous methods, and 9.0% and 11.1% compared to that without re-ranking. Extensive analysis demonstrates the effectiveness of the components in KC-GenRe.
2101.11711
Nikolaos Dervilis Dr
Kartik Chandrasekhar, Nevena Stevanovic, Elizabeth J. Cross, Nikolaos Dervilis, Keith Worden
Damage detection in operational wind turbine blades using a new approach based on machine learning
null
This is an author produced version of a paper subsequently published in Renewable Energy, Elsevier, 2021. Uploaded in accordance with the publisher's self-archiving policy
10.1016/j.renene.2020.12.119
null
cs.LG physics.data-an
http://creativecommons.org/licenses/by/4.0/
The application of reliable structural health monitoring (SHM) technologies to operational wind turbine blades is a challenging task, due to the uncertain nature of the environments they operate in. In this paper, a novel SHM methodology that uses Gaussian Processes (GPs) is proposed. The methodology takes advantage of the fact that the blades on a turbine are nominally identical in structural properties and encounter the same environmental and operational variables (EOVs). The properties of interest are the first edgewise frequencies of the blades. The GPs are used to predict the edge frequencies of one blade given those of another, after these relationships between the pairs of blades have been learned while the blades are in a healthy state. In using this approach, the proposed SHM methodology is able to identify when the blades start behaving differently from one another over time. To validate the concept, the proposed SHM system is applied to real onshore wind turbine blade data, where some form of damage was known to have taken place. X-bar control chart analysis of the residual errors between the GP predictions and actual frequencies shows that the system successfully identified the onset of damage as early as six months before it was identified and remedied.
[ { "created": "Mon, 25 Jan 2021 21:56:33 GMT", "version": "v1" } ]
2021-01-29
[ [ "Chandrasekhar", "Kartik", "" ], [ "Stevanovic", "Nevena", "" ], [ "Cross", "Elizabeth J.", "" ], [ "Dervilis", "Nikolaos", "" ], [ "Worden", "Keith", "" ] ]
The application of reliable structural health monitoring (SHM) technologies to operational wind turbine blades is a challenging task, due to the uncertain nature of the environments they operate in. In this paper, a novel SHM methodology that uses Gaussian Processes (GPs) is proposed. The methodology takes advantage of the fact that the blades on a turbine are nominally identical in structural properties and encounter the same environmental and operational variables (EOVs). The properties of interest are the first edgewise frequencies of the blades. The GPs are used to predict the edge frequencies of one blade given those of another, after these relationships between the pairs of blades have been learned while the blades are in a healthy state. In using this approach, the proposed SHM methodology is able to identify when the blades start behaving differently from one another over time. To validate the concept, the proposed SHM system is applied to real onshore wind turbine blade data, where some form of damage was known to have taken place. X-bar control chart analysis of the residual errors between the GP predictions and actual frequencies shows that the system successfully identified the onset of damage as early as six months before it was identified and remedied.
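The monitoring logic described above — fit control limits on residuals from the healthy state, then flag later residuals that escape them — can be sketched as a simple Shewhart-style chart. This is an illustrative toy assuming individual residuals and 3-sigma limits; the function name and parameters are assumptions, not the paper's exact chart setup.

```python
import statistics

def xbar_alarms(training, monitoring, n_sigma=3.0):
    """Shewhart-style control chart on residuals: control limits are fit
    on a healthy training window, then the indices of later residuals
    outside the limits are returned as alarms. Illustrative sketch only."""
    mu = statistics.fmean(training)            # center line
    sigma = statistics.stdev(training)         # sample standard deviation
    ucl, lcl = mu + n_sigma * sigma, mu - n_sigma * sigma
    return [i for i, r in enumerate(monitoring) if r > ucl or r < lcl]
```

With small healthy-state residuals for training, later residuals of 0.5 and -0.6 fall well outside the fitted limits and are flagged.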
2309.09095
Rustam Zayanov
Rustam Zayanov, Francisco S. Melo, Manuel Lopes
Interactively Teaching an Inverse Reinforcement Learner with Limited Feedback
7 pages, 3 figures
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of teaching via demonstrations in sequential decision-making tasks. In particular, we focus on the situation when the teacher has no access to the learner's model and policy, and the feedback from the learner is limited to trajectories that start from states selected by the teacher. The necessity to select the starting states and infer the learner's policy creates an opportunity for using the methods of inverse reinforcement learning and active learning by the teacher. In this work, we formalize the teaching process with limited feedback and propose an algorithm that solves this teaching problem. The algorithm uses a modified version of the active value-at-risk method to select the starting states, a modified maximum causal entropy algorithm to infer the policy, and the difficulty score ratio method to choose the teaching demonstrations. We test the algorithm in a synthetic car driving environment and conclude that the proposed algorithm is an effective solution when the learner's feedback is limited.
[ { "created": "Sat, 16 Sep 2023 21:12:04 GMT", "version": "v1" } ]
2023-09-19
[ [ "Zayanov", "Rustam", "" ], [ "Melo", "Francisco S.", "" ], [ "Lopes", "Manuel", "" ] ]
We study the problem of teaching via demonstrations in sequential decision-making tasks. In particular, we focus on the situation when the teacher has no access to the learner's model and policy, and the feedback from the learner is limited to trajectories that start from states selected by the teacher. The necessity to select the starting states and infer the learner's policy creates an opportunity for using the methods of inverse reinforcement learning and active learning by the teacher. In this work, we formalize the teaching process with limited feedback and propose an algorithm that solves this teaching problem. The algorithm uses a modified version of the active value-at-risk method to select the starting states, a modified maximum causal entropy algorithm to infer the policy, and the difficulty score ratio method to choose the teaching demonstrations. We test the algorithm in a synthetic car driving environment and conclude that the proposed algorithm is an effective solution when the learner's feedback is limited.
2205.01789
Nishanth Dikkala
Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath
Do More Negative Samples Necessarily Hurt in Contrastive Learning?
16 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more "negative samples" in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a "collision-coverage" trade-off. But is such a phenomenon inherent in contrastive learning? We show in a simple theoretical setting, where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework, for noise contrastive estimation. We also provide empirical support for our theoretical results on CIFAR-10 and CIFAR-100 datasets.
[ { "created": "Tue, 3 May 2022 21:29:59 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2022 20:47:11 GMT", "version": "v2" } ]
2022-06-24
[ [ "Awasthi", "Pranjal", "" ], [ "Dikkala", "Nishanth", "" ], [ "Kamath", "Pritish", "" ] ]
Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more "negative samples" in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a "collision-coverage" trade-off. But is such a phenomenon inherent in contrastive learning? We show in a simple theoretical setting, where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework, for noise contrastive estimation. We also provide empirical support for our theoretical results on CIFAR-10 and CIFAR-100 datasets.
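For concreteness, the kind of contrastive loss with k negative samples discussed above can be written down directly. The snippet is a generic InfoNCE-style loss on raw dot products, assumed for illustration; it is not the paper's specific latent-class framework, and the names are made up.

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=1.0):
    """InfoNCE-style contrastive loss for one anchor with k negative
    samples: cross-entropy of picking the positive among all candidates
    under a softmax over (scaled) dot-product similarities."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    pos = math.exp(dot(anchor, positive) / temperature)
    negs = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))
```

Note that for a fixed anchor and positive, each additional negative can only enlarge the denominator, so the per-example loss is monotone in k — which is distinct from how downstream classification accuracy behaves, the question the paper studies.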
1405.7903
Carsten Gottschlich
Carsten Gottschlich and Dominic Schuhmacher
The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems
null
PLOS ONE 9(10): e110214, Oct. 2014
10.1371/journal.pone.0110214
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. In particular, the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition, we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
[ { "created": "Fri, 30 May 2014 16:07:55 GMT", "version": "v1" } ]
2014-10-15
[ [ "Gottschlich", "Carsten", "" ], [ "Schuhmacher", "Dominic", "" ] ]
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. In particular, the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition, we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
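As a concrete special case of the distance discussed above, the Earth Mover's Distance between two one-dimensional histograms of equal total mass has a classic closed form: the L1 distance between their cumulative sums. The sketch below shows only this special case; the paper's Shortlist Method targets general transportation problems and is not reproduced here.

```python
from itertools import accumulate

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass on the same grid (unit bin spacing): the classic closed form
    as the sum of absolute CDF differences. Special case only."""
    assert len(p) == len(q) and abs(sum(p) - sum(q)) < 1e-9
    return sum(abs(cp - cq) for cp, cq in zip(accumulate(p), accumulate(q)))
```

Moving one unit of mass across two bins costs exactly 2, matching the intuition of "earth moved times distance".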
1811.12554
Saeed Seddighin
MohammadHossein Bateni, MohammadTaghi Hajiaghayi, Saeed Seddighin, Cliff Stein
Fast Algorithms for Knapsack via Convolution and Prediction
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The \Problem{knapsack} problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size $t$ with the maximum value from a collection of $n$ items with given sizes and values. Recent evidence suggests that a classic $O(nt)$ dynamic-programming solution for the \Problem{knapsack} problem might be the fastest in the worst case. In fact, solving the \Problem{knapsack} problem was shown to be computationally equivalent to the \Problem{$(\min, +)$ convolution} problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous \Problem{$(+, \cdot)$ convolution} (generally known as \Problem{polynomial multiplication}), that has an $O(n\log n)$-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the \Problem{knapsack} problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by $\smax$, the running time of our algorithm is $\tilde O((n+t)\smax)$. If the item values are integers bounded by $\vmax$, our algorithm runs in time $\tilde O(n+t\vmax)$. Best previously known running times were $O(nt)$, $O(n^2\smax)$ and $O(n\smax\vmax)$ (Pisinger, J. of Alg., 1999).
[ { "created": "Fri, 30 Nov 2018 00:33:51 GMT", "version": "v1" } ]
2018-12-03
[ [ "Bateni", "MohammadHossein", "" ], [ "Hajiaghayi", "MohammadTaghi", "" ], [ "Seddighin", "Saeed", "" ], [ "Stein", "Cliff", "" ] ]
The \Problem{knapsack} problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size $t$ with the maximum value from a collection of $n$ items with given sizes and values. Recent evidence suggests that a classic $O(nt)$ dynamic-programming solution for the \Problem{knapsack} problem might be the fastest in the worst case. In fact, solving the \Problem{knapsack} problem was shown to be computationally equivalent to the \Problem{$(\min, +)$ convolution} problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous \Problem{$(+, \cdot)$ convolution} (generally known as \Problem{polynomial multiplication}), that has an $O(n\log n)$-time solution via Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the \Problem{knapsack} problem, if either the values or sizes of items are small integers. More specifically, if item sizes are integers bounded by $\smax$, the running time of our algorithm is $\tilde O((n+t)\smax)$. If the item values are integers bounded by $\vmax$, our algorithm runs in time $\tilde O(n+t\vmax)$. Best previously known running times were $O(nt)$, $O(n^2\smax)$ and $O(n\smax\vmax)$ (Pisinger, J. of Alg., 1999).
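The classic $O(nt)$ dynamic program that the abstract treats as the presumed-optimal baseline is short enough to state in full. This is the textbook algorithm only, shown for illustration; the paper's faster convolution-based algorithms for small item sizes or values are not reproduced here.

```python
def knapsack(sizes, values, t):
    """Classic O(n*t) dynamic program for 0/1 knapsack: dp[c] holds the
    best value achievable with capacity c after processing the items
    seen so far."""
    dp = [0] * (t + 1)
    for s, v in zip(sizes, values):
        for c in range(t, s - 1, -1):  # reverse scan: each item used once
            dp[c] = max(dp[c], dp[c - s] + v)
    return dp[t]
```

For example, with sizes [3, 4, 5], values [4, 5, 6], and capacity 8, the best packing takes the items of size 3 and 5 for total value 10.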
1810.09803
Bertrand Corn\'elusse
Bertrand Corn\'elusse, Iacopo Savelli, Simone Paoletti, Antonio Giannitrapani and Antonio Vicino
A Community Microgrid Architecture with an Internal Local Market
16 pages, 15 figures
null
10.1016/j.apenergy.2019.03.109
null
cs.SY econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work fits in the context of community microgrids, where members of a community can exchange energy and services among themselves, without going through the usual channels of the public electricity grid. We introduce and analyze a framework to operate a community microgrid, and to share the resulting revenues and costs among its members. A market-oriented pricing of energy exchanges within the community is obtained by implementing an internal local market based on the marginal pricing scheme. The market aims at maximizing the social welfare of the community, thanks to the more efficient allocation of resources, the reduction of the peak power to be paid, and the increased amount of reserve, achieved at an aggregate level. A community microgrid operator, acting as a benevolent planner, redistributes revenues and costs among the members, in such a way that the solution achieved by each member within the community is not worse than the solution it would achieve by acting individually. In this way, each member is incentivized to participate in the community on a voluntary basis. The overall framework is formulated in the form of a bilevel model, where the lower level problem clears the market, while the upper level problem plays the role of the community microgrid operator. Numerical results obtained on a real test case implemented in Belgium show around 54% cost savings on a yearly scale for the community, as compared to the case when its members act individually.
[ { "created": "Tue, 23 Oct 2018 12:05:51 GMT", "version": "v1" }, { "created": "Thu, 10 Jan 2019 09:39:31 GMT", "version": "v2" }, { "created": "Wed, 20 Feb 2019 17:43:05 GMT", "version": "v3" } ]
2019-04-23
[ [ "Cornélusse", "Bertrand", "" ], [ "Savelli", "Iacopo", "" ], [ "Paoletti", "Simone", "" ], [ "Giannitrapani", "Antonio", "" ], [ "Vicino", "Antonio", "" ] ]
This work fits in the context of community microgrids, where members of a community can exchange energy and services among themselves, without going through the usual channels of the public electricity grid. We introduce and analyze a framework to operate a community microgrid, and to share the resulting revenues and costs among its members. A market-oriented pricing of energy exchanges within the community is obtained by implementing an internal local market based on the marginal pricing scheme. The market aims at maximizing the social welfare of the community, thanks to the more efficient allocation of resources, the reduction of the peak power to be paid, and the increased amount of reserve, achieved at an aggregate level. A community microgrid operator, acting as a benevolent planner, redistributes revenues and costs among the members, in such a way that the solution achieved by each member within the community is not worse than the solution it would achieve by acting individually. In this way, each member is incentivized to participate in the community on a voluntary basis. The overall framework is formulated in the form of a bilevel model, where the lower level problem clears the market, while the upper level problem plays the role of the community microgrid operator. Numerical results obtained on a real test case implemented in Belgium show around 54% cost savings on a yearly scale for the community, as compared to the case when its members act individually.
1012.5751
Amir Shachar
Amir Shachar
Introduction to Semi-discrete Calculus
null
null
null
null
cs.DM math.CA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Infinitesimal Calculus explores mainly two measurements: the instantaneous rate of change and the accumulation of quantities. This work shows that scientists, engineers, mathematicians, and teachers increasingly apply another change-measurement tool: functions' local trends. While the trend seems to be a special case of the rate (via the derivative sign), this work proposes a separate and favorable mathematical framework for it, called Semi-discrete Calculus.
[ { "created": "Tue, 28 Dec 2010 12:55:07 GMT", "version": "v1" }, { "created": "Wed, 9 Mar 2011 16:58:06 GMT", "version": "v2" }, { "created": "Mon, 25 Jul 2011 22:33:55 GMT", "version": "v3" }, { "created": "Thu, 10 Apr 2014 18:37:23 GMT", "version": "v4" }, { "created": "Sun, 20 Apr 2014 23:16:08 GMT", "version": "v5" }, { "created": "Mon, 2 Jun 2014 11:19:42 GMT", "version": "v6" }, { "created": "Tue, 19 Apr 2022 19:27:13 GMT", "version": "v7" }, { "created": "Wed, 1 Jun 2022 15:31:37 GMT", "version": "v8" } ]
2022-06-02
[ [ "Shachar", "Amir", "" ] ]
The Infinitesimal Calculus explores mainly two measurements: the instantaneous rate of change and the accumulation of quantities. This work shows that scientists, engineers, mathematicians, and teachers increasingly apply another change-measurement tool: functions' local trends. While the trend seems to be a special case of the rate (via the derivative sign), this work proposes a separate and favorable mathematical framework for it, called Semi-discrete Calculus.
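The "local trend" measurement described above can be illustrated in its simplest discrete form: the sign of each forward difference of a sampled function. This is a minimal sketch of the idea only, not the formal operators of the Semi-discrete Calculus itself; the function name is made up.

```python
def local_trend(samples):
    """Discrete local-trend sequence of a sampled function: +1 where it
    rises, -1 where it falls, 0 where it is flat between consecutive
    samples. A toy illustration of trend as its own measurement."""
    def sign(x):
        return (x > 0) - (x < 0)
    return [sign(b - a) for a, b in zip(samples, samples[1:])]
```

The trend deliberately discards the magnitude of change that the derivative would report, keeping only its direction.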
2011.09970
Xingang Wang Professor
Yali Guo, Han Zhang, Liang Wang, Huawei Fan, and Xingang Wang
Transfer learning of chaotic systems
20 pages, 8 figures
Chaos 31, 011104 (2021)
10.1063/5.0033870
null
cs.NE nlin.CD
http://creativecommons.org/licenses/by-nc-nd/4.0/
Can a neural network trained on the time series of system A be used to predict the evolution of system B? This problem, known as transfer learning in a broad sense, is of great importance in machine learning and data mining, yet it has not been addressed for chaotic systems. Here we investigate transfer learning of chaotic systems from the perspective of synchronization-based state inference, in which a reservoir computer trained by chaotic system A is used to infer the unmeasured variables of chaotic system B, while A differs from B in either parameters or dynamics. It is found that if systems A and B differ in parameters, the reservoir computer can be well synchronized to system B. However, if systems A and B differ in dynamics, the reservoir computer fails to synchronize with system B in general. Knowledge transfer along a chain of coupled reservoir computers is also studied, and it is found that, although the reservoir computers are trained by different systems, the unmeasured variables of the driving system can be successfully inferred by the remote reservoir computer. Finally, through an experiment on a chaotic pendulum, we show that the knowledge learned from the modeling system can be used to predict the evolution of the experimental system.
[ { "created": "Sun, 15 Nov 2020 04:09:35 GMT", "version": "v1" } ]
2021-02-23
[ [ "Guo", "Yali", "" ], [ "Zhang", "Han", "" ], [ "Wang", "Liang", "" ], [ "Fan", "Huawei", "" ], [ "Wang", "Xingang", "" ] ]
Can a neural network trained by the time series of system A be used to predict the evolution of system B? This problem, known as transfer learning in a broad sense, is of great importance in machine learning and data mining, yet has not been addressed for chaotic systems. Here we investigate transfer learning of chaotic systems from the perspective of synchronization-based state inference, in which a reservoir computer trained by chaotic system A is used to infer the unmeasured variables of chaotic system B, while A differs from B in either parameters or dynamics. It is found that if systems A and B differ in parameters, the reservoir computer can be well synchronized to system B. However, if systems A and B differ in dynamics, the reservoir computer fails to synchronize with system B in general. Knowledge transfer along a chain of coupled reservoir computers is also studied, and it is found that, although the reservoir computers are trained by different systems, the unmeasured variables of the driving system can be successfully inferred by the remote reservoir computer. Finally, through an experiment with a chaotic pendulum, we show that the knowledge learned from the modeling system can be used to predict the evolution of the experimental system.
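The inference setup (train a reservoir computer on a measured variable, read out an unmeasured one) can be sketched with a minimal echo state network in NumPy. All hyperparameters and the toy cos-to-sin inference task below are illustrative assumptions, not the paper's configuration or chaotic systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def esn_infer(u_train, y_train, u_test, n_res=200, rho=0.9, leak=0.5):
    """Train a reservoir (echo state network) to map a measured signal u
    to an unmeasured signal y, then infer y for new input."""
    Win = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius

    def run(u):
        r = np.zeros(n_res)
        states = []
        for ut in u:
            r = (1 - leak) * r + leak * np.tanh(Win @ np.array([ut]) + W @ r)
            states.append(r.copy())
        return np.array(states)

    R = run(u_train)
    # ridge-regression readout from reservoir states to the target
    Wout = np.linalg.solve(R.T @ R + 1e-6 * np.eye(n_res), R.T @ y_train)
    return run(u_test) @ Wout

# toy demo: infer y = sin(t) from the measured x = cos(t)
t = np.linspace(0, 20, 2000)
u, y = np.cos(t), np.sin(t)
pred = esn_infer(u[:1500], y[:1500], u[1500:])
err = np.sqrt(np.mean((pred - y[1500:]) ** 2))
```

Because the reservoir has memory, it can disambiguate the rising and falling branches of the cosine, which a memoryless pointwise map could not.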
1806.04761
Ciprian-Octavian Truica
Ciprian-Octavian Truic\u{a} and Florin R\u{a}dulescu and Alexandru Boicea and Ion Bucur
Performance evaluation for CRUD operations in asynchronously replicated document oriented database
null
null
10.1109/CSCS.2015.32
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
NoSQL databases are becoming increasingly popular as more developers seek new ways of storing information. The popularity of these databases has risen due to the flexibility and scalability needed in domains like Big Data and Cloud Computing. This paper examines asynchronous replication, one of the key features of a scalable and flexible system. Three of the most popular document-oriented databases, MongoDB, CouchDB, and Couchbase, are examined. For testing, the execution time of CRUD operations is measured for a single database instance and for a distributed environment with two nodes, and the results are compared with test outcomes obtained for three relational database management systems: Microsoft SQL Server, MySQL, and PostgreSQL.
[ { "created": "Tue, 12 Jun 2018 20:38:56 GMT", "version": "v1" } ]
2018-06-14
[ [ "Truică", "Ciprian-Octavian", "" ], [ "Rădulescu", "Florin", "" ], [ "Boicea", "Alexandru", "" ], [ "Bucur", "Ion", "" ] ]
NoSQL databases are becoming increasingly popular as more developers seek new ways of storing information. The popularity of these databases has risen due to the flexibility and scalability needed in domains like Big Data and Cloud Computing. This paper examines asynchronous replication, one of the key features of a scalable and flexible system. Three of the most popular document-oriented databases, MongoDB, CouchDB, and Couchbase, are examined. For testing, the execution time of CRUD operations is measured for a single database instance and for a distributed environment with two nodes, and the results are compared with test outcomes obtained for three relational database management systems: Microsoft SQL Server, MySQL, and PostgreSQL.
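The measurement methodology, timing each CRUD operation over a batch of records, can be sketched with a small harness. An in-memory SQLite database stands in here because the paper's actual stores (MongoDB, CouchDB, Couchbase, and the relational systems) require running servers; the table layout and record count are illustrative.

```python
import sqlite3
import time

def time_crud(n=1000):
    """Time Create/Read/Update/Delete on an in-memory SQLite table --
    a stand-in for the paper's document and relational stores, using
    the same measurement idea (wall-clock time per operation class)."""
    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
    timings = {}

    t0 = time.perf_counter()
    cur.executemany("INSERT INTO docs VALUES (?, ?)",
                    [(i, f"doc-{i}") for i in range(n)])
    timings["create"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    rows = cur.execute("SELECT * FROM docs").fetchall()
    timings["read"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    cur.execute("UPDATE docs SET body = body || '!'")
    timings["update"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    cur.execute("DELETE FROM docs")
    timings["delete"] = time.perf_counter() - t0

    con.close()
    return timings, len(rows)

timings, n_read = time_crud()
```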
2403.13423
Xun Gong
Xun Gong, Yu Wu, Jinyu Li, Shujie Liu, Rui Zhao, Xie Chen, Yanmin Qian
Advanced Long-Content Speech Recognition With Factorized Neural Transducer
Accepted by TASLP 2024
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 32, pp. 1803-1815, 2024
10.1109/TASLP.2024.3350893
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose two novel approaches, which integrate long-content information into the factorized neural transducer (FNT) based architecture in both non-streaming (referred to as LongFNT ) and streaming (referred to as SLongFNT ) scenarios. We first investigate whether long-content transcriptions can improve the vanilla conformer transducer (C-T) models. Our experiments indicate that the vanilla C-T models do not exhibit improved performance when utilizing long-content transcriptions, possibly due to the predictor network of C-T models not functioning as a pure language model. Instead, FNT shows its potential in utilizing long-content information, where we propose the LongFNT model and explore the impact of long-content information in both text (LongFNT-Text) and speech (LongFNT-Speech). The proposed LongFNT-Text and LongFNT-Speech models further complement each other to achieve better performance, with transcription history proving more valuable to the model. The effectiveness of our LongFNT approach is evaluated on LibriSpeech and GigaSpeech corpora, and obtains relative 19% and 12% word error rate reduction, respectively. Furthermore, we extend the LongFNT model to the streaming scenario, which is named SLongFNT , consisting of SLongFNT-Text and SLongFNT-Speech approaches to utilize long-content text and speech information. Experiments show that the proposed SLongFNT model achieves relative 26% and 17% WER reduction on LibriSpeech and GigaSpeech respectively while keeping a good latency, compared to the FNT baseline. Overall, our proposed LongFNT and SLongFNT highlight the significance of considering long-content speech and transcription knowledge for improving both non-streaming and streaming speech recognition systems.
[ { "created": "Wed, 20 Mar 2024 09:09:49 GMT", "version": "v1" } ]
2024-03-21
[ [ "Gong", "Xun", "" ], [ "Wu", "Yu", "" ], [ "Li", "Jinyu", "" ], [ "Liu", "Shujie", "" ], [ "Zhao", "Rui", "" ], [ "Chen", "Xie", "" ], [ "Qian", "Yanmin", "" ] ]
In this paper, we propose two novel approaches, which integrate long-content information into the factorized neural transducer (FNT) based architecture in both non-streaming (referred to as LongFNT ) and streaming (referred to as SLongFNT ) scenarios. We first investigate whether long-content transcriptions can improve the vanilla conformer transducer (C-T) models. Our experiments indicate that the vanilla C-T models do not exhibit improved performance when utilizing long-content transcriptions, possibly due to the predictor network of C-T models not functioning as a pure language model. Instead, FNT shows its potential in utilizing long-content information, where we propose the LongFNT model and explore the impact of long-content information in both text (LongFNT-Text) and speech (LongFNT-Speech). The proposed LongFNT-Text and LongFNT-Speech models further complement each other to achieve better performance, with transcription history proving more valuable to the model. The effectiveness of our LongFNT approach is evaluated on LibriSpeech and GigaSpeech corpora, and obtains relative 19% and 12% word error rate reduction, respectively. Furthermore, we extend the LongFNT model to the streaming scenario, which is named SLongFNT , consisting of SLongFNT-Text and SLongFNT-Speech approaches to utilize long-content text and speech information. Experiments show that the proposed SLongFNT model achieves relative 26% and 17% WER reduction on LibriSpeech and GigaSpeech respectively while keeping a good latency, compared to the FNT baseline. Overall, our proposed LongFNT and SLongFNT highlight the significance of considering long-content speech and transcription knowledge for improving both non-streaming and streaming speech recognition systems.
1408.2037
Issei Sato
Issei Sato, Kenichi Kurihara, Shu Tanaka, Hiroshi Nakagawa, Seiji Miyashita
Quantum Annealing for Variational Bayes Inference
Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI2009)
null
null
UAI-P-2009-PG-479-486
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents studies on a deterministic annealing algorithm based on quantum annealing for variational Bayes (QAVB) inference, which can be seen as an extension of the simulated annealing for variational Bayes (SAVB) inference. QAVB is as easy as SAVB to implement. Experiments revealed QAVB finds a better local optimum than SAVB in terms of the variational free energy in latent Dirichlet allocation (LDA).
[ { "created": "Sat, 9 Aug 2014 05:33:21 GMT", "version": "v1" } ]
2014-08-12
[ [ "Sato", "Issei", "" ], [ "Kurihara", "Kenichi", "" ], [ "Tanaka", "Shu", "" ], [ "Nakagawa", "Hiroshi", "" ], [ "Miyashita", "Seiji", "" ] ]
This paper presents studies on a deterministic annealing algorithm based on quantum annealing for variational Bayes (QAVB) inference, which can be seen as an extension of the simulated annealing for variational Bayes (SAVB) inference. QAVB is as easy as SAVB to implement. Experiments revealed QAVB finds a better local optimum than SAVB in terms of the variational free energy in latent Dirichlet allocation (LDA).
2305.04971
Peng Lu
Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais
LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization
Accepted at ACL2023 (Findings)
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regularization techniques are crucial to improving the generalization performance and training efficiency of deep neural networks. Many deep learning algorithms rely on weight decay, dropout, and batch/layer normalization to converge faster and generalize better. Label Smoothing (LS) is another simple, versatile and efficient regularization technique that can be applied to various supervised classification tasks. Conventional LS, however, assumes that each non-target class is equally likely, regardless of the training instance. In this work, we present a general framework for training with label regularization, which includes conventional LS but can also model instance-specific variants. Based on this formulation, we propose an efficient way of learning LAbel regularization by devising a Bi-level Optimization (LABO) problem. We derive a deterministic and interpretable solution of the inner loop as the optimal label smoothing, without the need to store the parameters or the output of a trained model. Finally, we conduct extensive experiments and demonstrate that our LABO consistently yields improvements over conventional label regularization in various fields, including seven machine translation and three image classification tasks across various
[ { "created": "Mon, 8 May 2023 18:04:18 GMT", "version": "v1" } ]
2023-05-10
[ [ "Lu", "Peng", "" ], [ "Rashid", "Ahmad", "" ], [ "Kobyzev", "Ivan", "" ], [ "Rezagholizadeh", "Mehdi", "" ], [ "Langlais", "Philippe", "" ] ]
Regularization techniques are crucial to improving the generalization performance and training efficiency of deep neural networks. Many deep learning algorithms rely on weight decay, dropout, and batch/layer normalization to converge faster and generalize better. Label Smoothing (LS) is another simple, versatile and efficient regularization technique that can be applied to various supervised classification tasks. Conventional LS, however, assumes that each non-target class is equally likely, regardless of the training instance. In this work, we present a general framework for training with label regularization, which includes conventional LS but can also model instance-specific variants. Based on this formulation, we propose an efficient way of learning LAbel regularization by devising a Bi-level Optimization (LABO) problem. We derive a deterministic and interpretable solution of the inner loop as the optimal label smoothing, without the need to store the parameters or the output of a trained model. Finally, we conduct extensive experiments and demonstrate that our LABO consistently yields improvements over conventional label regularization in various fields, including seven machine translation and three image classification tasks across various
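Conventional LS as described, where the target class gets probability 1-eps and the remaining mass is spread uniformly over non-target classes, can be written out directly. This sketch is the standard formulation the paper generalizes, not its instance-specific LABO variant.

```python
import numpy as np

def label_smoothing_targets(labels, n_classes, eps=0.1):
    """Conventional LS: the target class gets 1-eps; the remaining mass
    eps is spread uniformly over the n_classes-1 non-target classes."""
    t = np.full((len(labels), n_classes), eps / (n_classes - 1))
    t[np.arange(len(labels)), labels] = 1.0 - eps
    return t

def cross_entropy(logits, targets):
    """Soft-target cross-entropy with a numerically stable log-softmax."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(targets * logp).sum(axis=1).mean()

targets = label_smoothing_targets(np.array([2, 0]), n_classes=4, eps=0.1)
loss = cross_entropy(np.zeros((2, 4)), targets)  # uniform logits
```

With uniform logits the loss is exactly log(4) regardless of the targets, since each row of the smoothed targets sums to one.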
2006.08737
Raj Kumar Maity
Avishek Ghosh, Raj Kumar Maity and Arya Mazumdar
Distributed Newton Can Communicate Less and Resist Byzantine Workers
null
null
null
null
cs.LG cs.DC math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a distributed second order optimization algorithm that is communication-efficient as well as robust against Byzantine failures of the worker machines. We propose COMRADE (COMunication-efficient and Robust Approximate Distributed nEwton), an iterative second order algorithm, where the worker machines communicate only once per iteration with the center machine. This is in sharp contrast with the state-of-the-art distributed second order algorithms like GIANT [34] and DINGO [7], where the worker machines send (functions of) local gradient and Hessian sequentially, thus ending up communicating twice with the center machine per iteration. Moreover, we show that the worker machines can further compress the local information before sending it to the center. In addition, we employ a simple norm based thresholding rule to filter out the Byzantine worker machines. We establish the linear-quadratic rate of convergence of COMRADE and show that the communication savings and Byzantine resilience result in only a small statistical error rate for arbitrary convex loss functions. To the best of our knowledge, this is the first work that addresses the issue of Byzantine resilience in second order distributed optimization. Furthermore, we validate our theoretical results with extensive experiments on synthetic and benchmark LIBSVM [5] data-sets and demonstrate convergence guarantees.
[ { "created": "Mon, 15 Jun 2020 20:16:15 GMT", "version": "v1" } ]
2021-03-19
[ [ "Ghosh", "Avishek", "" ], [ "Maity", "Raj Kumar", "" ], [ "Mazumdar", "Arya", "" ] ]
We develop a distributed second order optimization algorithm that is communication-efficient as well as robust against Byzantine failures of the worker machines. We propose COMRADE (COMunication-efficient and Robust Approximate Distributed nEwton), an iterative second order algorithm, where the worker machines communicate only once per iteration with the center machine. This is in sharp contrast with the state-of-the-art distributed second order algorithms like GIANT [34] and DINGO [7], where the worker machines send (functions of) local gradient and Hessian sequentially, thus ending up communicating twice with the center machine per iteration. Moreover, we show that the worker machines can further compress the local information before sending it to the center. In addition, we employ a simple norm based thresholding rule to filter out the Byzantine worker machines. We establish the linear-quadratic rate of convergence of COMRADE and show that the communication savings and Byzantine resilience result in only a small statistical error rate for arbitrary convex loss functions. To the best of our knowledge, this is the first work that addresses the issue of Byzantine resilience in second order distributed optimization. Furthermore, we validate our theoretical results with extensive experiments on synthetic and benchmark LIBSVM [5] data-sets and demonstrate convergence guarantees.
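The norm-based thresholding rule can be sketched as follows: drop the worker updates with the largest norms before averaging. The `keep_frac` hyperparameter and the plain-mean aggregation are assumptions for illustration; the paper's exact rule may differ.

```python
import numpy as np

def norm_filter(updates, keep_frac=0.8):
    """Norm-based thresholding: discard the updates with the largest
    norms (suspected Byzantine), average the rest. `updates` is a
    (n_workers, dim) array of per-worker update vectors."""
    norms = np.linalg.norm(updates, axis=1)
    k = int(np.ceil(keep_frac * len(updates)))
    keep = np.argsort(norms)[:k]          # indices of the k smallest norms
    return updates[keep].mean(axis=0), keep

honest = np.ones((8, 3))        # honest workers send ~unit updates
byz = 100.0 * np.ones((2, 3))   # Byzantine workers send huge updates
agg, kept = norm_filter(np.vstack([honest, byz]))
```

Here the two outlier updates are filtered out and the aggregate equals the honest mean.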
2108.10417
Yiyang Li
GuoLiang Li and Yiyang Li
Recurrent multiple shared layers in Depth for Neural Machine Translation
8 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:2107.14590
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Learning deeper models is usually a simple and effective approach to improving model performance, but deeper models have more parameters and are more difficult to train. Simply stacking more layers seems a natural way to obtain a deeper model, but previous works have claimed that it does not benefit the model. We propose to train a deeper model with a recurrent mechanism, which loops the encoder and decoder blocks of the Transformer in the depth direction. To address the increase in model parameters, we share parameters across different recursive steps. We conduct experiments on the WMT16 English-to-German and WMT14 English-to-French translation tasks; our model outperforms the shallow Transformer-Base/Big baselines by 0.35 and 1.45 BLEU points, respectively, while using only 27.23% of the Transformer-Big model's parameters. Compared to the deep Transformer (20-layer encoder, 6-layer decoder), our model has similar performance and inference speed, but only 54.72% of its parameters.
[ { "created": "Mon, 23 Aug 2021 21:21:45 GMT", "version": "v1" }, { "created": "Thu, 26 Aug 2021 13:32:50 GMT", "version": "v2" } ]
2021-08-27
[ [ "Li", "GuoLiang", "" ], [ "Li", "Yiyang", "" ] ]
Learning deeper models is usually a simple and effective approach to improving model performance, but deeper models have more parameters and are more difficult to train. Simply stacking more layers seems a natural way to obtain a deeper model, but previous works have claimed that it does not benefit the model. We propose to train a deeper model with a recurrent mechanism, which loops the encoder and decoder blocks of the Transformer in the depth direction. To address the increase in model parameters, we share parameters across different recursive steps. We conduct experiments on the WMT16 English-to-German and WMT14 English-to-French translation tasks; our model outperforms the shallow Transformer-Base/Big baselines by 0.35 and 1.45 BLEU points, respectively, while using only 27.23% of the Transformer-Big model's parameters. Compared to the deep Transformer (20-layer encoder, 6-layer decoder), our model has similar performance and inference speed, but only 54.72% of its parameters.
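The parameter-sharing-across-depth idea can be sketched with a toy residual block applied recursively: one set of weights, applied `depth` times, gives the effective depth of a deep stack at the parameter cost of a single layer. The block below is a stand-in feed-forward layer, not a full Transformer encoder/decoder block.

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_recurrent_stack(x, layer_params, depth):
    """Apply ONE block's parameters `depth` times in the depth
    direction -- the recurrent parameter-sharing idea: a depth-6
    stack costs one layer's worth of weights."""
    W, b = layer_params
    for _ in range(depth):
        x = x + np.tanh(x @ W + b)   # residual connection keeps depth trainable
    return x

d = 8
params = (rng.normal(0, 0.1, (d, d)), np.zeros(d))  # one shared layer
y = shared_recurrent_stack(np.ones((2, d)), params, depth=6)
```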
2407.06366
Minghan Wei
Cheng Peng, Minghan Wei, Volkan Isler
Stochastic Traveling Salesperson Problem with Neighborhoods for Object Detection
2023 IEEE International Conference on Robotics and Automation
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We introduce a new route-finding problem that considers perception and travel costs simultaneously. Specifically, we consider the problem of finding the shortest tour such that all objects of interest can be detected successfully. To represent a viable detection region for each object, we propose an entropy-based viewing score that generates a diameter-bounded region as a viewing neighborhood. We formulate the detection-based trajectory planning problem as a stochastic traveling salesperson problem with neighborhoods and propose a center-visit method that obtains an approximation ratio of $O(D_{\max}/D_{\min})$ for disjoint regions. For non-disjoint regions, our method provides a novel finite detour in 3D, which utilizes the region's minimum curvature property. Finally, we show that our method generates efficient trajectories compared to a baseline method in a photo-realistic simulation environment.
[ { "created": "Mon, 8 Jul 2024 20:12:45 GMT", "version": "v1" } ]
2024-07-10
[ [ "Peng", "Cheng", "" ], [ "Wei", "Minghan", "" ], [ "Isler", "Volkan", "" ] ]
We introduce a new route-finding problem that considers perception and travel costs simultaneously. Specifically, we consider the problem of finding the shortest tour such that all objects of interest can be detected successfully. To represent a viable detection region for each object, we propose an entropy-based viewing score that generates a diameter-bounded region as a viewing neighborhood. We formulate the detection-based trajectory planning problem as a stochastic traveling salesperson problem with neighborhoods and propose a center-visit method that obtains an approximation ratio of $O(D_{\max}/D_{\min})$ for disjoint regions. For non-disjoint regions, our method provides a novel finite detour in 3D, which utilizes the region's minimum curvature property. Finally, we show that our method generates efficient trajectories compared to a baseline method in a photo-realistic simulation environment.
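The center-visit idea can be illustrated with a toy nearest-neighbor tour through the neighborhoods' centers: visiting each region's center guarantees every object's viewing region is entered. This heuristic is a stand-in; the paper's algorithm and its approximation guarantee are more involved.

```python
import math

def center_visit_tour(centers, start=0):
    """Nearest-neighbor tour through the region centers (2D points).
    A toy version of the center-visit idea, not the paper's method."""
    unvisited = set(range(len(centers)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        cur = centers[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, centers[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# four collinear region centers; the greedy tour visits them in order of proximity
tour = center_visit_tour([(0, 0), (5, 0), (1, 0), (6, 0)])
```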
1710.00954
Anirban Ghosh
Adrian Dumitrescu, Anirban Ghosh, Csaba D. T\'oth
Online Unit Covering in Euclidean Space
14 pages, 5 figures, A preliminary version in: Proceedings of the 27th Annual Fall Workshop on Computational Geometry, Stony Brook University, USA, 2017. arXiv admin note: text overlap with arXiv:1708.02662
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the online Unit Covering problem in higher dimensions: Given a set of $n$ points in $\mathbb{R}^d$, that arrive one by one, cover the points by balls of unit radius, so as to minimize the number of balls used. In this paper, we work in $\mathbb{R}^d$ using Euclidean distance. The current best competitive ratio of an online algorithm, $O(2^d d \log{d})$, is due to Charikar et al. (2004); their algorithm is deterministic. (I) We give an online deterministic algorithm with competitive ratio $O(1.321^d)$, thereby sharply improving on the earlier record by a large exponential factor. In particular, the competitive ratios are $5$ for the plane and $12$ for $3$-space (the previous ratios were $7$ and $21$, respectively). For $d=3$, the ratio of our online algorithm matches the ratio of the current best offline algorithm for the same problem due to Biniaz et al. (2017), which is remarkable (and rather unusual). (II) We show that the competitive ratio of every deterministic online algorithm (with an adaptive deterministic adversary) for Unit Covering in $\mathbb{R}^d$ under the $L_{2}$ norm is at least $d+1$ for every $d \geq 1$. This greatly improves upon the previous best lower bound, $\Omega(\log{d} / \log{\log{\log{d}}})$, due to Charikar et al. (2004). (III) We obtain lower bounds of $4$ and $5$ for the competitive ratio of any deterministic algorithm for online Unit Covering in $\mathbb{R}^2$ and respectively $\mathbb{R}^3$; the previous best lower bounds were both $3$. (IV) When the input points are taken from the square or hexagonal lattices in $\mathbb{R}^2$, we give deterministic online algorithms for Unit Covering with an optimal competitive ratio of $3$.
[ { "created": "Tue, 3 Oct 2017 01:40:43 GMT", "version": "v1" }, { "created": "Sun, 11 Feb 2018 16:51:24 GMT", "version": "v2" }, { "created": "Fri, 24 Aug 2018 23:55:41 GMT", "version": "v3" } ]
2018-08-29
[ [ "Dumitrescu", "Adrian", "" ], [ "Ghosh", "Anirban", "" ], [ "Tóth", "Csaba D.", "" ] ]
We revisit the online Unit Covering problem in higher dimensions: Given a set of $n$ points in $\mathbb{R}^d$, that arrive one by one, cover the points by balls of unit radius, so as to minimize the number of balls used. In this paper, we work in $\mathbb{R}^d$ using Euclidean distance. The current best competitive ratio of an online algorithm, $O(2^d d \log{d})$, is due to Charikar et al. (2004); their algorithm is deterministic. (I) We give an online deterministic algorithm with competitive ratio $O(1.321^d)$, thereby sharply improving on the earlier record by a large exponential factor. In particular, the competitive ratios are $5$ for the plane and $12$ for $3$-space (the previous ratios were $7$ and $21$, respectively). For $d=3$, the ratio of our online algorithm matches the ratio of the current best offline algorithm for the same problem due to Biniaz et al. (2017), which is remarkable (and rather unusual). (II) We show that the competitive ratio of every deterministic online algorithm (with an adaptive deterministic adversary) for Unit Covering in $\mathbb{R}^d$ under the $L_{2}$ norm is at least $d+1$ for every $d \geq 1$. This greatly improves upon the previous best lower bound, $\Omega(\log{d} / \log{\log{\log{d}}})$, due to Charikar et al. (2004). (III) We obtain lower bounds of $4$ and $5$ for the competitive ratio of any deterministic algorithm for online Unit Covering in $\mathbb{R}^2$ and respectively $\mathbb{R}^3$; the previous best lower bounds were both $3$. (IV) When the input points are taken from the square or hexagonal lattices in $\mathbb{R}^2$, we give deterministic online algorithms for Unit Covering with an optimal competitive ratio of $3$.
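A natural baseline for the online problem is the greedy strategy: when a point arrives that no existing ball covers, open a new unit ball centered at that point. This is the obvious comparison point, not the paper's $O(1.321^d)$-competitive algorithm.

```python
import math

def online_unit_cover(points):
    """Greedy online unit covering in the plane: cover each uncovered
    arriving point with a new unit ball centered at the point itself.
    Baseline strategy only; the paper's algorithm places balls more
    carefully to improve the competitive ratio."""
    centers = []
    for p in points:
        if not any(math.dist(p, c) <= 1.0 for c in centers):
            centers.append(p)   # open a new unit ball at p
    return centers

# two clusters of arriving points -> two balls suffice for the greedy rule
centers = online_unit_cover([(0, 0), (0.5, 0), (3, 0), (3.2, 0.1)])
```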
2201.12428
Tyler Cody
Tyler Cody, Erin Lanus, Daniel D. Doyle, Laura Freeman
Systematic Training and Testing for Machine Learning Using Combinatorial Interaction Testing
null
null
null
null
cs.LG cs.SE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper demonstrates the systematic use of combinatorial coverage for selecting and characterizing test and training sets for machine learning models. The presented work adapts combinatorial interaction testing, which has been successfully leveraged in identifying faults in software testing, to characterize data used in machine learning. The MNIST hand-written digits data is used to demonstrate that combinatorial coverage can be used to select test sets that stress machine learning model performance, to select training sets that lead to robust model performance, and to select data for fine-tuning models to new domains. Thus, the results posit combinatorial coverage as a holistic approach to training and testing for machine learning. In contrast to prior work, which has focused on the use of coverage with regard to the internals of neural networks, this paper considers coverage over simple features derived from inputs and outputs. Thus, this paper addresses the case where the supplier of test and training sets for machine learning models does not have intellectual property rights to the models themselves. Finally, the paper addresses prior criticism of combinatorial coverage and provides a rebuttal which advocates the use of coverage metrics in machine learning applications.
[ { "created": "Fri, 28 Jan 2022 21:33:31 GMT", "version": "v1" } ]
2022-02-01
[ [ "Cody", "Tyler", "" ], [ "Lanus", "Erin", "" ], [ "Doyle", "Daniel D.", "" ], [ "Freeman", "Laura", "" ] ]
This paper demonstrates the systematic use of combinatorial coverage for selecting and characterizing test and training sets for machine learning models. The presented work adapts combinatorial interaction testing, which has been successfully leveraged in identifying faults in software testing, to characterize data used in machine learning. The MNIST hand-written digits data is used to demonstrate that combinatorial coverage can be used to select test sets that stress machine learning model performance, to select training sets that lead to robust model performance, and to select data for fine-tuning models to new domains. Thus, the results posit combinatorial coverage as a holistic approach to training and testing for machine learning. In contrast to prior work, which has focused on the use of coverage with regard to the internals of neural networks, this paper considers coverage over simple features derived from inputs and outputs. Thus, this paper addresses the case where the supplier of test and training sets for machine learning models does not have intellectual property rights to the models themselves. Finally, the paper addresses prior criticism of combinatorial coverage and provides a rebuttal which advocates the use of coverage metrics in machine learning applications.
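The coverage measure over simple input/output features can be sketched as 2-way (pairwise) combinatorial coverage: the fraction of all feature-pair value combinations that actually occur in a data set. The feature discretization and names below are illustrative.

```python
from itertools import combinations

def pairwise_coverage(rows, levels):
    """Fraction of all 2-way (feature-pair) value combinations that
    appear in `rows`. `levels[i]` lists the possible values of
    feature i (e.g. discretized bins of a derived feature)."""
    k = len(levels)
    covered = total = 0
    for i, j in combinations(range(k), 2):
        seen = {(r[i], r[j]) for r in rows}
        total += len(levels[i]) * len(levels[j])
        covered += len(seen)
    return covered / total

# three binary features; 8 of the 12 possible pair-combinations occur
rows = [(0, 0, 0), (1, 1, 1), (0, 1, 0)]
cov = pairwise_coverage(rows, levels=[(0, 1)] * 3)
```

A test set with low coverage relative to the training set signals under-exercised interactions, which is the selection signal the paper exploits.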
1509.07244
Shawn Andrews
Shawn Andrews and Ghassan Hamarneh
Multi-Region Probabilistic Dice Similarity Coefficient using the Aitchison Distance and Bipartite Graph Matching
8 pages. 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Validation of image segmentation methods is of critical importance. Probabilistic image segmentation is increasingly popular as it captures uncertainty in the results. Image segmentation methods that support multi-region (as opposed to binary) delineation are more favourable as they capture interactions between the different objects in the image. The Dice similarity coefficient (DSC) has been a popular metric for evaluating the accuracy of automated or semi-automated segmentation methods by comparing their results to the ground truth. In this work, we develop an extension of the DSC to multi-region probabilistic segmentations (with unordered labels). We use bipartite graph matching to establish label correspondences and propose two functions that extend the DSC, one based on absolute probability differences and one based on the Aitchison distance. These provide a robust and accurate measure of multi-region probabilistic segmentation accuracy.
[ { "created": "Thu, 24 Sep 2015 05:56:38 GMT", "version": "v1" }, { "created": "Fri, 2 Oct 2015 06:25:17 GMT", "version": "v2" }, { "created": "Tue, 13 Oct 2015 04:11:25 GMT", "version": "v3" } ]
2015-10-14
[ [ "Andrews", "Shawn", "" ], [ "Hamarneh", "Ghassan", "" ] ]
Validation of image segmentation methods is of critical importance. Probabilistic image segmentation is increasingly popular as it captures uncertainty in the results. Image segmentation methods that support multi-region (as opposed to binary) delineation are more favourable as they capture interactions between the different objects in the image. The Dice similarity coefficient (DSC) has been a popular metric for evaluating the accuracy of automated or semi-automated segmentation methods by comparing their results to the ground truth. In this work, we develop an extension of the DSC to multi-region probabilistic segmentations (with unordered labels). We use bipartite graph matching to establish label correspondences and propose two functions that extend the DSC, one based on absolute probability differences and one based on the Aitchison distance. These provide a robust and accurate measure of multi-region probabilistic segmentation accuracy.
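The unordered-label matching plus probabilistic Dice can be sketched with a brute-force correspondence search. The soft-Dice form using element-wise minima is one common probabilistic extension, not necessarily either of the paper's two proposed functions, and brute force over permutations replaces bipartite graph matching (fine only for a handful of labels).

```python
from itertools import permutations
import numpy as np

def prob_dice(p, q):
    """Soft Dice between two single-region probability maps (flattened):
    element-wise minima as the soft overlap term."""
    return 2 * np.minimum(p, q).sum() / (p.sum() + q.sum())

def multi_region_dice(P, Q):
    """Multi-region probabilistic Dice with unordered labels: try every
    label correspondence and keep the best mean per-region Dice.
    P, Q have shape (n_labels, n_pixels)."""
    L = P.shape[0]
    return max(
        np.mean([prob_dice(P[i], Q[perm[i]]) for i in range(L)])
        for perm in permutations(range(L))
    )

P = np.array([[0.9, 0.1], [0.1, 0.9]])  # two labels, two pixels
Q = np.array([[0.1, 0.9], [0.9, 0.1]])  # same segmentation, labels swapped
score = multi_region_dice(P, Q)
```

The identity correspondence scores poorly here, but the swapped correspondence recovers a perfect match, which is exactly why label matching must precede the overlap measure.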
2103.10873
Daniele Palossi
Daniele Palossi, Nicky Zimmerman, Alessio Burrello, Francesco Conti, Hanna M\"uller, Luca Maria Gambardella, Luca Benini, Alessandro Giusti, J\'er\^ome Guzzi
Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs
15 pages, 15 figures, 4 tables. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence-powered pocket-sized air robots have the potential to revolutionize the Internet-of-Things ecosystem, acting as autonomous, unobtrusive, and ubiquitous smart sensors. With a few cm$^{2}$ form-factor, nano-sized unmanned aerial vehicles (UAVs) are a natural fit for indoor human-drone interaction missions, such as the pose estimation task we address in this work. However, this scenario is challenged by the nano-UAVs' limited payload and computational power, which severely relegates the onboard brain to the sub-100 mW microcontroller-unit class. Our work stands at the intersection of the novel parallel ultra-low-power (PULP) architectural paradigm and our general development methodology for deep neural network (DNN) visual pipelines, covering everything from perception to control. Addressing the DNN model design, from training and dataset augmentation to 8-bit quantization and deployment, we demonstrate how a PULP-based processor, aboard a nano-UAV, is sufficient for the real-time execution (up to 135 frame/s) of our novel DNN, called PULP-Frontnet. We showcase how, by scaling our model's memory and computational requirements, we can significantly improve the onboard inference (top energy efficiency of 0.43 mJ/frame) with no compromise in the quality of result vs. a resource-unconstrained baseline (i.e., a full-precision DNN). Field experiments demonstrate a closed-loop top-notch autonomous navigation capability, with a heavily resource-constrained 27-gram Crazyflie 2.1 nano-quadrotor. Compared against the control performance achieved using an ideal sensing setup, onboard relative pose inference yields excellent drone behavior in terms of median absolute errors, such as positional (onboard: 41 cm, ideal: 26 cm) and angular (onboard: 3.7$^{\circ}$, ideal: 4.1$^{\circ}$).
[ { "created": "Fri, 19 Mar 2021 15:56:58 GMT", "version": "v1" } ]
2021-03-22
[ [ "Palossi", "Daniele", "" ], [ "Zimmerman", "Nicky", "" ], [ "Burrello", "Alessio", "" ], [ "Conti", "Francesco", "" ], [ "Müller", "Hanna", "" ], [ "Gambardella", "Luca Maria", "" ], [ "Benini", "Luca", "" ], [ "Giusti", "Alessandro", "" ], [ "Guzzi", "Jérôme", "" ] ]
Artificial intelligence-powered pocket-sized air robots have the potential to revolutionize the Internet-of-Things ecosystem, acting as autonomous, unobtrusive, and ubiquitous smart sensors. With a few cm$^{2}$ form-factor, nano-sized unmanned aerial vehicles (UAVs) are a natural fit for indoor human-drone interaction missions, such as the pose estimation task we address in this work. However, this scenario is challenged by the nano-UAVs' limited payload and computational power, which severely relegate the onboard brain to the sub-100 mW microcontroller-unit class. Our work stands at the intersection of the novel parallel ultra-low-power (PULP) architectural paradigm and our general development methodology for deep neural network (DNN) visual pipelines, i.e., covering perception to control. Addressing the DNN model design, from training and dataset augmentation to 8-bit quantization and deployment, we demonstrate how a PULP-based processor, aboard a nano-UAV, is sufficient for the real-time execution (up to 135 frame/s) of our novel DNN, called PULP-Frontnet. We showcase how, by scaling our model's memory and computational requirements, we can significantly improve the onboard inference (top energy efficiency of 0.43 mJ/frame) with no compromise in the quality of results vs. a resource-unconstrained baseline (i.e., a full-precision DNN). Field experiments demonstrate a closed-loop top-notch autonomous navigation capability with a heavily resource-constrained 27-gram Crazyflie 2.1 nano-quadrotor. Compared against the control performance achieved using an ideal sensing setup, onboard relative pose inference yields excellent drone behavior in terms of median absolute errors, such as positional (onboard: 41 cm, ideal: 26 cm) and angular (onboard: 3.7$^{\circ}$, ideal: 4.1$^{\circ}$).
1706.02785
Ophir Lojkine
Ophir Lojkine
Optimal parameters for bloom-filtered joins in Spark
The article is in Russian, but an analysis of the data used in it is available in english at the following address: https://github.com/lovasoa/spark-bloomfiltered-join-analysis/blob/master/analysis.ipynb
null
null
null
cs.DC cs.DB
http://creativecommons.org/licenses/by/4.0/
In this paper, we present an algorithm that joins relational database tables efficiently in a distributed environment using Bloom filters of an optimal size. We propose not to use fixed-size Bloom filters as in previous research, but to find an optimal size for the Bloom filters by creating a mathematical model of the join algorithm and then finding the optimal parameters using traditional mathematical optimization. This algorithm with optimal parameters beats both previous approaches using Bloom filters and the default SparkSQL engine, not only on star joins but also on traditional database schemas. The experiments were conducted on a standard TPC-H database stored as Parquet files on a distributed file system.
[ { "created": "Thu, 8 Jun 2017 22:29:41 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2017 12:49:15 GMT", "version": "v2" } ]
2017-06-13
[ [ "Lojkine", "Ophir", "" ] ]
In this paper, we present an algorithm that joins relational database tables efficiently in a distributed environment using Bloom filters of an optimal size. We propose not to use fixed-size Bloom filters as in previous research, but to find an optimal size for the Bloom filters by creating a mathematical model of the join algorithm and then finding the optimal parameters using traditional mathematical optimization. This algorithm with optimal parameters beats both previous approaches using Bloom filters and the default SparkSQL engine, not only on star joins but also on traditional database schemas. The experiments were conducted on a standard TPC-H database stored as Parquet files on a distributed file system.
2011.04461
Nicholas Adrian
Nicholas Adrian and Quang-Cuong Pham
MoboTSP: Solving the Task Sequencing Problem for Mobile Manipulators
Revised version of arXiv:2011.04461. The edit includes reorganizing and rewriting for better clarity without any changes to the methodology
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new approach to tackle the mobile manipulator task sequencing problem. We leverage computational geometry, graph theory, and combinatorial optimization to yield a principled method to segment the task-space targets into clusters, analytically determine a reachable base pose for each cluster, and find task sequences that minimize the number of base movements and the robot execution time. By clustering targets first, and by doing so from first principles, our solution is more general and computationally efficient than existing methods.
[ { "created": "Mon, 9 Nov 2020 14:37:30 GMT", "version": "v1" }, { "created": "Wed, 20 Oct 2021 02:37:06 GMT", "version": "v2" } ]
2021-10-22
[ [ "Adrian", "Nicholas", "" ], [ "Pham", "Quang-Cuong", "" ] ]
We introduce a new approach to tackle the mobile manipulator task sequencing problem. We leverage computational geometry, graph theory, and combinatorial optimization to yield a principled method to segment the task-space targets into clusters, analytically determine a reachable base pose for each cluster, and find task sequences that minimize the number of base movements and the robot execution time. By clustering targets first, and by doing so from first principles, our solution is more general and computationally efficient than existing methods.
2207.13283
Zhuqing Liu
Zhuqing Liu, Xin Zhang, Prashant Khanduri, Songtao Lu, and Jia Liu
INTERACT: Achieving Low Sample and Communication Complexities in Decentralized Bilevel Learning over Networks
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, decentralized bilevel optimization problems have received increasing attention in the networking and machine learning communities thanks to their versatility in modeling decentralized learning problems over peer-to-peer networks (e.g., multi-agent meta-learning, multi-agent reinforcement learning, personalized training, and Byzantine-resilient learning). However, for decentralized bilevel optimization over peer-to-peer networks with limited computation and communication capabilities, how to achieve low sample and communication complexities are two fundamental challenges that remain under-explored so far. In this paper, we make the first attempt to investigate the class of decentralized bilevel optimization problems with nonconvex and strongly-convex structure corresponding to the outer and inner subproblems, respectively. Our main contributions in this paper are two-fold: i) We first propose a deterministic algorithm called INTERACT (inner-gradient-descent-outer-tracked-gradient) that requires the sample complexity of $\mathcal{O}(n \epsilon^{-1})$ and communication complexity of $\mathcal{O}(\epsilon^{-1})$ to solve the bilevel optimization problem, where $n$ and $\epsilon > 0$ are the number of samples at each agent and the desired stationarity gap, respectively. ii) To relax the need for full gradient evaluations in each iteration, we propose a stochastic variance-reduced version of INTERACT (SVR-INTERACT), which improves the sample complexity to $\mathcal{O}(\sqrt{n} \epsilon^{-1})$ while achieving the same communication complexity as the deterministic algorithm. To our knowledge, this work is the first that achieves both low sample and communication complexities for solving decentralized bilevel optimization problems over networks. Our numerical experiments also corroborate our theoretical findings.
[ { "created": "Wed, 27 Jul 2022 04:19:28 GMT", "version": "v1" }, { "created": "Thu, 28 Jul 2022 14:34:30 GMT", "version": "v2" }, { "created": "Wed, 5 Oct 2022 19:38:14 GMT", "version": "v3" } ]
2022-10-07
[ [ "Liu", "Zhuqing", "" ], [ "Zhang", "Xin", "" ], [ "Khanduri", "Prashant", "" ], [ "Lu", "Songtao", "" ], [ "Liu", "Jia", "" ] ]
In recent years, decentralized bilevel optimization problems have received increasing attention in the networking and machine learning communities thanks to their versatility in modeling decentralized learning problems over peer-to-peer networks (e.g., multi-agent meta-learning, multi-agent reinforcement learning, personalized training, and Byzantine-resilient learning). However, for decentralized bilevel optimization over peer-to-peer networks with limited computation and communication capabilities, how to achieve low sample and communication complexities are two fundamental challenges that remain under-explored so far. In this paper, we make the first attempt to investigate the class of decentralized bilevel optimization problems with nonconvex and strongly-convex structure corresponding to the outer and inner subproblems, respectively. Our main contributions in this paper are two-fold: i) We first propose a deterministic algorithm called INTERACT (inner-gradient-descent-outer-tracked-gradient) that requires the sample complexity of $\mathcal{O}(n \epsilon^{-1})$ and communication complexity of $\mathcal{O}(\epsilon^{-1})$ to solve the bilevel optimization problem, where $n$ and $\epsilon > 0$ are the number of samples at each agent and the desired stationarity gap, respectively. ii) To relax the need for full gradient evaluations in each iteration, we propose a stochastic variance-reduced version of INTERACT (SVR-INTERACT), which improves the sample complexity to $\mathcal{O}(\sqrt{n} \epsilon^{-1})$ while achieving the same communication complexity as the deterministic algorithm. To our knowledge, this work is the first that achieves both low sample and communication complexities for solving decentralized bilevel optimization problems over networks. Our numerical experiments also corroborate our theoretical findings.
2408.04910
Muntasir Adnan
Muntasir Adnan, Buddhi Gamage, Zhiwei Xu, Damith Herath, Carlos C. N. Kuhn
Unleashing Artificial Cognition: Integrating Multiple AI Systems
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this study, we present an innovative fusion of language models and query analysis techniques to unlock cognition in artificial intelligence. Our system seamlessly integrates a Chess engine with a language model, enabling it to predict moves and provide strategic explanations. Leveraging a vector database to achieve retrievable answer generation, our OpenSI AI system elucidates its decision-making process, bridging the gap between raw computation and human-like understanding. Our choice of Chess as the demonstration environment underscores the versatility of our approach. Beyond Chess, our system holds promise for diverse applications, from medical diagnostics to financial forecasting.
[ { "created": "Fri, 9 Aug 2024 07:36:30 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2024 04:25:39 GMT", "version": "v2" }, { "created": "Wed, 14 Aug 2024 02:28:19 GMT", "version": "v3" } ]
2024-08-15
[ [ "Adnan", "Muntasir", "" ], [ "Gamage", "Buddhi", "" ], [ "Xu", "Zhiwei", "" ], [ "Herath", "Damith", "" ], [ "Kuhn", "Carlos C. N.", "" ] ]
In this study, we present an innovative fusion of language models and query analysis techniques to unlock cognition in artificial intelligence. Our system seamlessly integrates a Chess engine with a language model, enabling it to predict moves and provide strategic explanations. Leveraging a vector database to achieve retrievable answer generation, our OpenSI AI system elucidates its decision-making process, bridging the gap between raw computation and human-like understanding. Our choice of Chess as the demonstration environment underscores the versatility of our approach. Beyond Chess, our system holds promise for diverse applications, from medical diagnostics to financial forecasting.
2401.09691
Koki Yamane
Koki Yamane, Sho Sakaino, Toshiaki Tsuji
Imitation Learning Inputting Image Feature to Each Layer of Neural Network
6 pages, 4 figures, Accepted at AMC2024
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imitation learning enables robots to learn and replicate human behavior from training data. Recent advances in machine learning enable end-to-end learning approaches that directly process high-dimensional observation data, such as images. However, these approaches face a critical challenge when processing data from multiple modalities, inadvertently ignoring data with a lower correlation to the desired output, especially when using short sampling periods. This paper presents a useful method to address this challenge, which amplifies the influence of data with a relatively low correlation to the output by inputting the data into each neural network layer. The proposed approach effectively incorporates diverse data sources into the learning process. Through experiments using a simple pick-and-place operation with raw images and joint information as input, significant improvements in success rates are demonstrated even when dealing with data from short sampling periods.
[ { "created": "Thu, 18 Jan 2024 02:44:18 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 12:43:36 GMT", "version": "v2" } ]
2024-01-22
[ [ "Yamane", "Koki", "" ], [ "Sakaino", "Sho", "" ], [ "Tsuji", "Toshiaki", "" ] ]
Imitation learning enables robots to learn and replicate human behavior from training data. Recent advances in machine learning enable end-to-end learning approaches that directly process high-dimensional observation data, such as images. However, these approaches face a critical challenge when processing data from multiple modalities, inadvertently ignoring data with a lower correlation to the desired output, especially when using short sampling periods. This paper presents a useful method to address this challenge, which amplifies the influence of data with a relatively low correlation to the output by inputting the data into each neural network layer. The proposed approach effectively incorporates diverse data sources into the learning process. Through experiments using a simple pick-and-place operation with raw images and joint information as input, significant improvements in success rates are demonstrated even when dealing with data from short sampling periods.
2104.12274
Yusha Liu
Yusha Liu, Osvaldo Simeone
HyperRNN: Deep Learning-Aided Downlink CSI Acquisition via Partial Channel Reciprocity for FDD Massive MIMO
To be presented at SPAWC 2021
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In order to unlock the full advantages of massive multiple-input multiple-output (MIMO) in the downlink, channel state information (CSI) is required at the base station (BS) to optimize the beamforming matrices. In frequency division duplex (FDD) systems, full channel reciprocity does not hold, and CSI acquisition generally requires downlink pilot transmission followed by uplink feedback. Prior work proposed the end-to-end design of pilot transmission, feedback, and CSI estimation via deep learning. In this work, we introduce an enhanced end-to-end design that leverages partial uplink-downlink reciprocity and the temporal correlation of the fading processes by jointly utilizing downlink and uplink pilots. The proposed method is based on a novel deep learning architecture -- HyperRNN -- that combines hypernetworks and recurrent neural networks (RNNs) to optimize the transfer of long-term channel features from uplink to downlink. Simulation results demonstrate that the HyperRNN achieves a lower normalized mean square error (NMSE) performance and that it reduces requirements in terms of pilot lengths.
[ { "created": "Sun, 25 Apr 2021 22:03:59 GMT", "version": "v1" }, { "created": "Sun, 2 May 2021 04:28:00 GMT", "version": "v2" }, { "created": "Thu, 8 Jul 2021 10:18:54 GMT", "version": "v3" } ]
2021-07-09
[ [ "Liu", "Yusha", "" ], [ "Simeone", "Osvaldo", "" ] ]
In order to unlock the full advantages of massive multiple-input multiple-output (MIMO) in the downlink, channel state information (CSI) is required at the base station (BS) to optimize the beamforming matrices. In frequency division duplex (FDD) systems, full channel reciprocity does not hold, and CSI acquisition generally requires downlink pilot transmission followed by uplink feedback. Prior work proposed the end-to-end design of pilot transmission, feedback, and CSI estimation via deep learning. In this work, we introduce an enhanced end-to-end design that leverages partial uplink-downlink reciprocity and the temporal correlation of the fading processes by jointly utilizing downlink and uplink pilots. The proposed method is based on a novel deep learning architecture -- HyperRNN -- that combines hypernetworks and recurrent neural networks (RNNs) to optimize the transfer of long-term channel features from uplink to downlink. Simulation results demonstrate that the HyperRNN achieves a lower normalized mean square error (NMSE) performance and that it reduces requirements in terms of pilot lengths.
1808.01976
Jonas Rauber
Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salath\'e, Sharada P. Mohanty, Matthias Bethge
Adversarial Vision Challenge
https://www.crowdai.org/challenges/adversarial-vision-challenge
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal that was accepted in the competition track of the 32nd Conference on Neural Information Processing Systems (NIPS 2018).
[ { "created": "Mon, 6 Aug 2018 16:13:43 GMT", "version": "v1" }, { "created": "Thu, 6 Dec 2018 18:21:49 GMT", "version": "v2" } ]
2018-12-07
[ [ "Brendel", "Wieland", "" ], [ "Rauber", "Jonas", "" ], [ "Kurakin", "Alexey", "" ], [ "Papernot", "Nicolas", "" ], [ "Veliqi", "Behar", "" ], [ "Salathé", "Marcel", "" ], [ "Mohanty", "Sharada P.", "" ], [ "Bethge", "Matthias", "" ] ]
The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal that was accepted in the competition track of the 32nd Conference on Neural Information Processing Systems (NIPS 2018).
1901.00431
Dimitrios Rafailidis Dr
Dimitrios Rafailidis and Yannis Manolopoulos
The Technological Gap Between Virtual Assistants and Recommendation Systems
6 pages, Report
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual assistants, also known as intelligent conversational systems, such as Google's Virtual Assistant and Apple's Siri, interact with human-like responses to users' queries and complete specific tasks. Meanwhile, existing recommendation technologies model users' evolving, diverse, and multi-aspect preferences to generate recommendations in various domains/applications, aiming to improve citizens' daily lives by making suggestions. The repertoire of actions is no longer limited to the one-shot presentation of recommendation lists, which can be insufficient when the goal is to offer decision support for the user by quickly adapting to his/her preferences through conversations. Such an interactive mechanism is currently missing from recommendation systems. This article sheds light on the gap between virtual assistants and recommendation systems in terms of different technological aspects. In particular, we try to answer the most fundamental research question: what are the missing technological factors needed to implement a personalized intelligent conversational agent that produces accurate recommendations while taking into account how users behave under different conditions? The goal is, instead of adapting humans to machines, to actually provide users with better recommendation services so that machines will be adapted to humans in daily life.
[ { "created": "Fri, 21 Dec 2018 00:50:03 GMT", "version": "v1" }, { "created": "Sun, 6 Jan 2019 13:46:09 GMT", "version": "v2" } ]
2019-01-08
[ [ "Rafailidis", "Dimitrios", "" ], [ "Manolopoulos", "Yannis", "" ] ]
Virtual assistants, also known as intelligent conversational systems, such as Google's Virtual Assistant and Apple's Siri, interact with human-like responses to users' queries and complete specific tasks. Meanwhile, existing recommendation technologies model users' evolving, diverse, and multi-aspect preferences to generate recommendations in various domains/applications, aiming to improve citizens' daily lives by making suggestions. The repertoire of actions is no longer limited to the one-shot presentation of recommendation lists, which can be insufficient when the goal is to offer decision support for the user by quickly adapting to his/her preferences through conversations. Such an interactive mechanism is currently missing from recommendation systems. This article sheds light on the gap between virtual assistants and recommendation systems in terms of different technological aspects. In particular, we try to answer the most fundamental research question: what are the missing technological factors needed to implement a personalized intelligent conversational agent that produces accurate recommendations while taking into account how users behave under different conditions? The goal is, instead of adapting humans to machines, to actually provide users with better recommendation services so that machines will be adapted to humans in daily life.
1611.09701
Sanat Biswas
Sanat Biswas, Li Qiao, Andrew Dempster
Computationally Efficient Unscented Kalman Filtering Techniques for Launch Vehicle Navigation using a Space-borne GPS Receiver
null
Proc. ION GNSS+ 2016, Institute of Navigation, Portland, Oregon, USA, September 14, 2016
null
null
cs.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Extended Kalman Filter (EKF) is a well-established technique for position and velocity estimation. However, the performance of the EKF degrades considerably in highly non-linear system applications, as it requires local linearisation in its prediction stage. The Unscented Kalman Filter (UKF) was developed to address the non-linearity in the system by deterministic sampling. The UKF provides better estimation accuracy than the EKF for highly non-linear systems. However, the UKF requires multiple propagations of sampled state vectors in the measurement interval, which results in a higher processing time than for the EKF. This paper proposes an application of two newly developed UKF variants in launch vehicle navigation. These two algorithms, called the Single Propagation Unscented Kalman Filter (SPUKF) and the Extrapolated Single Propagation Unscented Kalman Filter (ESPUKF), reduce the processing time of the original UKF significantly and provide estimation accuracies comparable to the UKF. The estimation performance of the SPUKF and the ESPUKF is demonstrated using the Falcon 9 V1.1 launch vehicle in a CRS-5 mission scenario. The launch vehicle trajectory for the mission is generated using publicly available mission parameters. A SPIRENT GNSS simulator is used to generate the received GPS signal on the trajectory. Pseudo-range observations are used in the EKF, UKF, SPUKF and ESPUKF separately, and the estimation accuracies are compared. The results show that the estimation errors of the SPUKF and the ESPUKF are 15.44% and 10.52% higher than the UKF, respectively. The processing time reduces by 83% for the SPUKF and by 69.14% for the ESPUKF compared to the UKF.
[ { "created": "Fri, 25 Nov 2016 01:41:08 GMT", "version": "v1" } ]
2016-11-30
[ [ "Biswas", "Sanat", "" ], [ "Qiao", "Li", "" ], [ "Dempster", "Andrew", "" ] ]
The Extended Kalman Filter (EKF) is a well-established technique for position and velocity estimation. However, the performance of the EKF degrades considerably in highly non-linear system applications, as it requires local linearisation in its prediction stage. The Unscented Kalman Filter (UKF) was developed to address the non-linearity in the system by deterministic sampling. The UKF provides better estimation accuracy than the EKF for highly non-linear systems. However, the UKF requires multiple propagations of sampled state vectors in the measurement interval, which results in a higher processing time than for the EKF. This paper proposes an application of two newly developed UKF variants in launch vehicle navigation. These two algorithms, called the Single Propagation Unscented Kalman Filter (SPUKF) and the Extrapolated Single Propagation Unscented Kalman Filter (ESPUKF), reduce the processing time of the original UKF significantly and provide estimation accuracies comparable to the UKF. The estimation performance of the SPUKF and the ESPUKF is demonstrated using the Falcon 9 V1.1 launch vehicle in a CRS-5 mission scenario. The launch vehicle trajectory for the mission is generated using publicly available mission parameters. A SPIRENT GNSS simulator is used to generate the received GPS signal on the trajectory. Pseudo-range observations are used in the EKF, UKF, SPUKF and ESPUKF separately, and the estimation accuracies are compared. The results show that the estimation errors of the SPUKF and the ESPUKF are 15.44% and 10.52% higher than the UKF, respectively. The processing time reduces by 83% for the SPUKF and by 69.14% for the ESPUKF compared to the UKF.
2305.17268
Yucheng Li
Yucheng Li, Shun Wang, Chenghua Lin, Guerin Frank
Metaphor Detection via Explicit Basic Meanings Modelling
ACL 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
One noticeable trend in metaphor detection is the embrace of linguistic theories such as the metaphor identification procedure (MIP) for model architecture design. While MIP clearly defines that the metaphoricity of a lexical unit is determined based on the contrast between its \textit{contextual meaning} and its \textit{basic meaning}, existing work does not strictly follow this principle, typically using the \textit{aggregated meaning} to approximate the basic meaning of target words. In this paper, we propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set, and then compares this with the contextual meaning in a target sentence to identify metaphors. Empirical results show that our method outperforms the state-of-the-art method significantly by 1.0\% in F1 score. Moreover, our performance even reaches the theoretical upper bound on the VUA18 benchmark for targets with basic annotations, which demonstrates the importance of modelling basic meanings for metaphor detection.
[ { "created": "Fri, 26 May 2023 21:25:05 GMT", "version": "v1" } ]
2023-05-30
[ [ "Li", "Yucheng", "" ], [ "Wang", "Shun", "" ], [ "Lin", "Chenghua", "" ], [ "Frank", "Guerin", "" ] ]
One noticeable trend in metaphor detection is the embrace of linguistic theories such as the metaphor identification procedure (MIP) for model architecture design. While MIP clearly defines that the metaphoricity of a lexical unit is determined based on the contrast between its \textit{contextual meaning} and its \textit{basic meaning}, existing work does not strictly follow this principle, typically using the \textit{aggregated meaning} to approximate the basic meaning of target words. In this paper, we propose a novel metaphor detection method, which models the basic meaning of the word based on literal annotation from the training set, and then compares this with the contextual meaning in a target sentence to identify metaphors. Empirical results show that our method outperforms the state-of-the-art method significantly by 1.0\% in F1 score. Moreover, our performance even reaches the theoretical upper bound on the VUA18 benchmark for targets with basic annotations, which demonstrates the importance of modelling basic meanings for metaphor detection.
1610.05419
Ali Khalajmehrabadi
Ali Khalajmehrabadi, Nikolaos Gatsis, Daniel Pack and David Akopian
A Joint Indoor WLAN Localization and Outlier Detection Scheme Using LASSO and Elastic-Net Optimization Techniques
null
null
10.1109/TMC.2016.2616465
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce two indoor Wireless Local Area Network (WLAN) positioning methods using augmented sparse recovery algorithms. These schemes render a sparse user's position vector and, in parallel, minimize the distance between the online measurement and the radio map. The overall localization scheme for both methods consists of three steps: 1) coarse localization, obtained from comparing the online measurements with the clustered radio map. A novel graph-based method is proposed to cluster the offline fingerprints. In the online phase, a Region Of Interest (ROI) is selected within which we search for the user's location; 2) Access Point (AP) selection; and 3) fine localization through the novel sparse recovery algorithms. Since the online measurements are subject to inordinate measurement readings, called outliers, the sparse recovery methods are modified in order to jointly estimate the outliers and the user's position vector. The outlier detection procedure identifies the APs whose readings are either not available or erroneous. The proposed localization methods have been tested with Received Signal Strength (RSS) measurements in a typical office environment, and the results show that they can localize the user with significantly high accuracy and resolution, superior to the results from competing WLAN fingerprinting localization methods.
[ { "created": "Tue, 18 Oct 2016 03:37:11 GMT", "version": "v1" } ]
2016-10-20
[ [ "Khalajmehrabadi", "Ali", "" ], [ "Gatsis", "Nikolaos", "" ], [ "Pack", "Daniel", "" ], [ "Akopian", "David", "" ] ]
In this paper, we introduce two indoor Wireless Local Area Network (WLAN) positioning methods using augmented sparse recovery algorithms. These schemes render a sparse user's position vector and, in parallel, minimize the distance between the online measurement and the radio map. The overall localization scheme for both methods consists of three steps: 1) coarse localization, obtained from comparing the online measurements with the clustered radio map. A novel graph-based method is proposed to cluster the offline fingerprints. In the online phase, a Region Of Interest (ROI) is selected within which we search for the user's location; 2) Access Point (AP) selection; and 3) fine localization through the novel sparse recovery algorithms. Since the online measurements are subject to inordinate measurement readings, called outliers, the sparse recovery methods are modified in order to jointly estimate the outliers and the user's position vector. The outlier detection procedure identifies the APs whose readings are either not available or erroneous. The proposed localization methods have been tested with Received Signal Strength (RSS) measurements in a typical office environment, and the results show that they can localize the user with significantly high accuracy and resolution, superior to the results from competing WLAN fingerprinting localization methods.
2103.04443
Oliver Hohlfeld
Daniel Kopp and Christoph Dietzel and Oliver Hohlfeld
DDoS Never Dies? An IXP Perspective on DDoS Amplification Attacks
To appear at PAM 2021
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DDoS attacks remain a major security threat to the continuous operation of Internet edge infrastructures, web services, and cloud platforms. While a large body of research focuses on DDoS detection and protection, to date we have ultimately failed to eradicate DDoS altogether. Yet, the landscape of DDoS attack mechanisms is still evolving, demanding an updated perspective on DDoS attacks in the wild. In this paper, we identify up to 2608 DDoS amplification attacks on a single day by analyzing multiple Tbps of traffic flows at a major IXP with a rich ecosystem of different networks. We observe the prevalence of well-known amplification attack protocols (e.g., NTP, CLDAP), which should no longer exist given the established mitigation strategies. Nevertheless, they account for the largest fraction of DDoS amplification attacks within our observation, and we witness the emergence of DDoS attacks using recently discovered amplification protocols (e.g., OpenVPN, ARMS, Ubiquity Discovery Protocol). By analyzing the impact of DDoS on core Internet infrastructure, we show that DDoS can overload backbone capacity and that filtering approaches in prior work omit 97% of the attack traffic.
[ { "created": "Sun, 7 Mar 2021 20:22:03 GMT", "version": "v1" } ]
2021-03-09
[ [ "Kopp", "Daniel", "" ], [ "Dietzel", "Christoph", "" ], [ "Hohlfeld", "Oliver", "" ] ]
DDoS attacks remain a major security threat to the continuous operation of Internet edge infrastructures, web services, and cloud platforms. While a large body of research focuses on DDoS detection and protection, to date we have ultimately failed to eradicate DDoS altogether. Yet, the landscape of DDoS attack mechanisms is still evolving, demanding an updated perspective on DDoS attacks in the wild. In this paper, we identify up to 2608 DDoS amplification attacks on a single day by analyzing multiple Tbps of traffic flows at a major IXP with a rich ecosystem of different networks. We observe the prevalence of well-known amplification attack protocols (e.g., NTP, CLDAP), which should no longer exist given the established mitigation strategies. Nevertheless, they constitute the largest fraction of DDoS amplification attacks within our observation, and we witness the emergence of DDoS attacks using recently discovered amplification protocols (e.g., OpenVPN, ARMS, Ubiquity Discovery Protocol). By analyzing the impact of DDoS on core Internet infrastructure, we show that DDoS can overload backbone capacity and that filtering approaches in prior work omit 97% of the attack traffic.
2001.03643
Sajjad Arshad
Sajjad Arshad
Understanding and Mitigating the Security Risks of Content Inclusion in Web Browsers
Doctor of Philosophy (PhD) Thesis Khoury College of Computer Sciences, Northeastern University Boston, MA, USA, April 2019
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thanks to the wide range of features offered by web browsers, modern websites include various types of content such as JavaScript and CSS in order to create interactive user interfaces. Browser vendors also provided extensions to enhance web browsers with additional useful capabilities that are not necessarily maintained or supported by default. However, included content can introduce security risks to users of these websites, unbeknownst to both website operators and users. In addition, the browser's interpretation of the resource URLs may be very different from how the web server resolves the URL to determine which resource should be returned to the browser. The URL may not correspond to an actual server-side file system structure at all, or the web server may internally rewrite parts of the URL. This semantic disconnect between web browsers and web servers in interpreting relative paths (path confusion) could be exploited by Relative Path Overwrite (RPO). On the other hand, even though extensions provide useful additional functionality for web browsers, they are also an increasingly popular vector for attacks. Due to the high degree of privilege extensions can hold, extensions have been abused to inject advertisements into web pages that divert revenue from content publishers and potentially expose users to malware. In this thesis, I propose novel research into understanding and mitigating the security risks of content inclusion in web browsers to protect website publishers as well as their users.
[ { "created": "Fri, 10 Jan 2020 19:38:58 GMT", "version": "v1" } ]
2020-01-14
[ [ "Arshad", "Sajjad", "" ] ]
Thanks to the wide range of features offered by web browsers, modern websites include various types of content such as JavaScript and CSS in order to create interactive user interfaces. Browser vendors also provided extensions to enhance web browsers with additional useful capabilities that are not necessarily maintained or supported by default. However, included content can introduce security risks to users of these websites, unbeknownst to both website operators and users. In addition, the browser's interpretation of the resource URLs may be very different from how the web server resolves the URL to determine which resource should be returned to the browser. The URL may not correspond to an actual server-side file system structure at all, or the web server may internally rewrite parts of the URL. This semantic disconnect between web browsers and web servers in interpreting relative paths (path confusion) could be exploited by Relative Path Overwrite (RPO). On the other hand, even though extensions provide useful additional functionality for web browsers, they are also an increasingly popular vector for attacks. Due to the high degree of privilege extensions can hold, extensions have been abused to inject advertisements into web pages that divert revenue from content publishers and potentially expose users to malware. In this thesis, I propose novel research into understanding and mitigating the security risks of content inclusion in web browsers to protect website publishers as well as their users.
1511.08447
Brijesh Dongol
Brijesh Dongol, Robert M. Hierons
Decidability and Complexity for Quiescent Consistency and its Variations
null
null
null
null
cs.LO cs.DC cs.DS cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quiescent consistency is a notion of correctness for a concurrent object that gives meaning to the object's behaviours in quiescent states, i.e., states in which none of the object's operations are being executed. Correctness of an implementation object is defined in terms of a corresponding abstract specification. This gives rise to two important verification questions: membership (checking whether a behaviour of the implementation is allowed by the specification) and correctness (checking whether all behaviours of the implementation are allowed by the specification). In this paper, we show that the membership problem for quiescent consistency is NP-complete and that the correctness problem is decidable, but coNP-hard and in EXPSPACE. For both problems, we consider restricted versions of quiescent consistency by assuming an upper limit on the number of events between two quiescent points. Here, we show that the membership problem is in PTIME, whereas correctness is in PSPACE. Quiescent consistency does not guarantee sequential consistency, i.e., it allows operation calls by the same process to be reordered when mapping to an abstract specification. Therefore, we also consider quiescent sequential consistency, which strengthens quiescent consistency with an additional sequential consistency condition. We show that the unrestricted versions of membership and correctness are NP-complete and undecidable, respectively. When a limit is placed on the number of events between two quiescent points, membership is in PTIME, while correctness is in PSPACE. Finally, we consider a version of quiescent sequential consistency that places an upper limit on the number of processes for every run of the implementation, and show that the membership problem for quiescent sequential consistency with this restriction is in PTIME.
[ { "created": "Thu, 26 Nov 2015 16:46:30 GMT", "version": "v1" } ]
2015-11-30
[ [ "Dongol", "Brijesh", "" ], [ "Hierons", "Robert M.", "" ] ]
Quiescent consistency is a notion of correctness for a concurrent object that gives meaning to the object's behaviours in quiescent states, i.e., states in which none of the object's operations are being executed. Correctness of an implementation object is defined in terms of a corresponding abstract specification. This gives rise to two important verification questions: membership (checking whether a behaviour of the implementation is allowed by the specification) and correctness (checking whether all behaviours of the implementation are allowed by the specification). In this paper, we show that the membership problem for quiescent consistency is NP-complete and that the correctness problem is decidable, but coNP-hard and in EXPSPACE. For both problems, we consider restricted versions of quiescent consistency by assuming an upper limit on the number of events between two quiescent points. Here, we show that the membership problem is in PTIME, whereas correctness is in PSPACE. Quiescent consistency does not guarantee sequential consistency, i.e., it allows operation calls by the same process to be reordered when mapping to an abstract specification. Therefore, we also consider quiescent sequential consistency, which strengthens quiescent consistency with an additional sequential consistency condition. We show that the unrestricted versions of membership and correctness are NP-complete and undecidable, respectively. When a limit is placed on the number of events between two quiescent points, membership is in PTIME, while correctness is in PSPACE. Finally, we consider a version of quiescent sequential consistency that places an upper limit on the number of processes for every run of the implementation, and show that the membership problem for quiescent sequential consistency with this restriction is in PTIME.
2202.01013
Chukwuemeka Ibe
Ibe Chukwuemeka Emmanuel and Ekaterina Mitrofanova
Fairness of Machine Learning Algorithms in Demography
This is an empirical replication study but with other demographic data. The theory and method description is heavily based on the arXiv:2006.10531
null
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
The paper is devoted to the study of the model fairness and process fairness of the Russian demographic dataset by making predictions of divorce of the 1st marriage, religiosity, 1st employment and completion of education. Our goal was to make classifiers more equitable by reducing their reliance on sensitive features while increasing or at least maintaining their accuracy. We took inspiration from "dropout" techniques in neural-based approaches and suggested a model that uses "feature drop-out" to address process fairness. To evaluate a classifier's fairness and decide which sensitive features to eliminate, we used "LIME Explanations". This results in a pool of classifiers due to feature dropout whose ensemble has been shown to be less reliant on sensitive features and to have improved or no effect on accuracy. Our empirical study was performed on four families of classifiers (Logistic Regression, Random Forest, Bagging, and Adaboost) and carried out on a real-life dataset (Russian demographic data derived from the Generations and Gender Survey), and it showed that all of the models became less dependent on sensitive features (such as gender, breakup of the 1st partnership, 1st partnership, etc.) and showed improvements or no impact on accuracy.
[ { "created": "Wed, 2 Feb 2022 13:12:35 GMT", "version": "v1" } ]
2022-02-03
[ [ "Emmanuel", "Ibe Chukwuemeka", "" ], [ "Mitrofanova", "Ekaterina", "" ] ]
The paper is devoted to the study of the model fairness and process fairness of the Russian demographic dataset by making predictions of divorce of the 1st marriage, religiosity, 1st employment and completion of education. Our goal was to make classifiers more equitable by reducing their reliance on sensitive features while increasing or at least maintaining their accuracy. We took inspiration from "dropout" techniques in neural-based approaches and suggested a model that uses "feature drop-out" to address process fairness. To evaluate a classifier's fairness and decide which sensitive features to eliminate, we used "LIME Explanations". This results in a pool of classifiers due to feature dropout whose ensemble has been shown to be less reliant on sensitive features and to have improved or no effect on accuracy. Our empirical study was performed on four families of classifiers (Logistic Regression, Random Forest, Bagging, and Adaboost) and carried out on a real-life dataset (Russian demographic data derived from the Generations and Gender Survey), and it showed that all of the models became less dependent on sensitive features (such as gender, breakup of the 1st partnership, 1st partnership, etc.) and showed improvements or no impact on accuracy.
2405.09880
Yuchen Guo
Yuchen Guo, Hanqun Cao, Lok Ming Lui
Deep Learning-Based Quasi-Conformal Surface Registration for Partial 3D Faces Applied to Facial Recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D face registration is an important process in which a 3D face model is aligned and mapped to a template face. However, the task of 3D face registration becomes particularly challenging when dealing with partial face data, where only limited facial information is available. To address this challenge, this paper presents a novel deep learning-based approach that combines quasi-conformal geometry with deep neural networks for partial face registration. The proposed framework begins with a Landmark Detection Network that utilizes curvature information to detect the presence of facial features and estimate their corresponding coordinates. These facial landmark features serve as essential guidance for the registration process. To establish a dense correspondence between the partial face and the template surface, a registration network based on quasiconformal theories is employed. The registration network establishes a bijective quasiconformal surface mapping aligning corresponding partial faces based on detected landmarks and curvature values. It consists of the Coefficients Prediction Network, which outputs the optimal Beltrami coefficient representing the surface mapping. The Beltrami coefficient quantifies the local geometric distortion of the mapping. By controlling the magnitude of the Beltrami coefficient through a suitable activation function, the bijectivity and geometric distortion of the mapping can be controlled. The Beltrami coefficient is then fed into the Beltrami solver network to reconstruct the corresponding mapping. The surface registration enables the acquisition of corresponding regions and the establishment of point-wise correspondence between different partial faces, facilitating precise shape comparison through the evaluation of point-wise geometric differences at these corresponding regions. Experimental results demonstrate the effectiveness of the proposed method.
[ { "created": "Thu, 16 May 2024 08:03:41 GMT", "version": "v1" } ]
2024-05-17
[ [ "Guo", "Yuchen", "" ], [ "Cao", "Hanqun", "" ], [ "Lui", "Lok Ming", "" ] ]
3D face registration is an important process in which a 3D face model is aligned and mapped to a template face. However, the task of 3D face registration becomes particularly challenging when dealing with partial face data, where only limited facial information is available. To address this challenge, this paper presents a novel deep learning-based approach that combines quasi-conformal geometry with deep neural networks for partial face registration. The proposed framework begins with a Landmark Detection Network that utilizes curvature information to detect the presence of facial features and estimate their corresponding coordinates. These facial landmark features serve as essential guidance for the registration process. To establish a dense correspondence between the partial face and the template surface, a registration network based on quasiconformal theories is employed. The registration network establishes a bijective quasiconformal surface mapping aligning corresponding partial faces based on detected landmarks and curvature values. It consists of the Coefficients Prediction Network, which outputs the optimal Beltrami coefficient representing the surface mapping. The Beltrami coefficient quantifies the local geometric distortion of the mapping. By controlling the magnitude of the Beltrami coefficient through a suitable activation function, the bijectivity and geometric distortion of the mapping can be controlled. The Beltrami coefficient is then fed into the Beltrami solver network to reconstruct the corresponding mapping. The surface registration enables the acquisition of corresponding regions and the establishment of point-wise correspondence between different partial faces, facilitating precise shape comparison through the evaluation of point-wise geometric differences at these corresponding regions. Experimental results demonstrate the effectiveness of the proposed method.
1810.11934
Shantanu Shahane
Shantanu Shahane, Narayana R. Aluru, Surya Pratap Vanka
Uncertainty Quantification in Three Dimensional Natural Convection using Polynomial Chaos Expansion and Deep Neural Networks
null
null
10.1016/j.ijheatmasstransfer.2019.05.014
null
cs.NA physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyzes the effects of input uncertainties on the outputs of a three dimensional natural convection problem in a differentially heated cubical enclosure. Two different cases are considered for parameter uncertainty propagation and global sensitivity analysis. In case A, stochastic variation is introduced in the two non-dimensional parameters (Rayleigh and Prandtl numbers) with an assumption that the boundary temperature is uniform. Being a two dimensional stochastic problem, the polynomial chaos expansion (PCE) method is used as a surrogate model. Case B deals with non-uniform stochasticity in the boundary temperature. Instead of the traditional Gaussian process model with the Karhunen-Lo$\grave{e}$ve expansion, a novel approach is successfully implemented to model uncertainty in the boundary condition. The boundary is divided into multiple domains and the temperature imposed on each domain is assumed to be an independent and identically distributed (i.i.d) random variable. Deep neural networks are trained with the boundary temperatures as inputs and Nusselt number, internal temperature or velocities as outputs. The number of domains which is essentially the stochastic dimension is 4, 8, 16 or 32. Rigorous training and testing process shows that the neural network is able to approximate the outputs to a reasonable accuracy. For a high stochastic dimension such as 32, it is computationally expensive to fit the PCE. This paper demonstrates a novel way of using the deep neural network as a surrogate modeling method for uncertainty quantification with the number of simulations much fewer than that required for fitting the PCE, thus, saving the computational cost.
[ { "created": "Mon, 29 Oct 2018 03:24:11 GMT", "version": "v1" }, { "created": "Tue, 8 Jan 2019 15:41:30 GMT", "version": "v2" } ]
2020-10-06
[ [ "Shahane", "Shantanu", "" ], [ "Aluru", "Narayana R.", "" ], [ "Vanka", "Surya Pratap", "" ] ]
This paper analyzes the effects of input uncertainties on the outputs of a three dimensional natural convection problem in a differentially heated cubical enclosure. Two different cases are considered for parameter uncertainty propagation and global sensitivity analysis. In case A, stochastic variation is introduced in the two non-dimensional parameters (Rayleigh and Prandtl numbers) with an assumption that the boundary temperature is uniform. Being a two dimensional stochastic problem, the polynomial chaos expansion (PCE) method is used as a surrogate model. Case B deals with non-uniform stochasticity in the boundary temperature. Instead of the traditional Gaussian process model with the Karhunen-Lo$\grave{e}$ve expansion, a novel approach is successfully implemented to model uncertainty in the boundary condition. The boundary is divided into multiple domains and the temperature imposed on each domain is assumed to be an independent and identically distributed (i.i.d) random variable. Deep neural networks are trained with the boundary temperatures as inputs and Nusselt number, internal temperature or velocities as outputs. The number of domains which is essentially the stochastic dimension is 4, 8, 16 or 32. Rigorous training and testing process shows that the neural network is able to approximate the outputs to a reasonable accuracy. For a high stochastic dimension such as 32, it is computationally expensive to fit the PCE. This paper demonstrates a novel way of using the deep neural network as a surrogate modeling method for uncertainty quantification with the number of simulations much fewer than that required for fitting the PCE, thus, saving the computational cost.
1209.6476
Reena Philips Mrs
K. S. Rashmi, V. Suma, M. Vaidehi
Factors Influencing Job Rejections in Cloud Environment
6 Pages, 5 Figures, 8 Tables
null
null
null
cs.DC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IT organizations invest heavy capital in large-scale infrastructure and advanced operating platforms. Advances in technology have resulted in the emergence of cloud computing, a promising technology for achieving the aforementioned objective. At peak hours, the number of jobs arriving at the cloud system is normally high, demanding efficient execution and dispatch. An observation carried out in this paper, by capturing a job arrival pattern from a monitoring system, explains that most of the jobs get rejected because of the lack of efficient technology. Job rejections can be controlled by certain factors such as job scheduling and load balancing. Therefore, in this paper the efficiency of the Round Robin (RR) scheduling strategy used for job scheduling and the Shortest Job First Scheduling (SJFS) technique used for load balancing in reducing job rejections is analyzed. Further, a proposal for an effective load balancing approach to avoid deadlocks is discussed.
[ { "created": "Fri, 28 Sep 2012 10:39:04 GMT", "version": "v1" } ]
2012-10-01
[ [ "Rashmi", "K. S.", "" ], [ "Suma", "V.", "" ], [ "Vaidehi", "M.", "" ] ]
IT organizations invest heavy capital in large-scale infrastructure and advanced operating platforms. Advances in technology have resulted in the emergence of cloud computing, a promising technology for achieving the aforementioned objective. At peak hours, the number of jobs arriving at the cloud system is normally high, demanding efficient execution and dispatch. An observation carried out in this paper, by capturing a job arrival pattern from a monitoring system, explains that most of the jobs get rejected because of the lack of efficient technology. Job rejections can be controlled by certain factors such as job scheduling and load balancing. Therefore, in this paper the efficiency of the Round Robin (RR) scheduling strategy used for job scheduling and the Shortest Job First Scheduling (SJFS) technique used for load balancing in reducing job rejections is analyzed. Further, a proposal for an effective load balancing approach to avoid deadlocks is discussed.
1911.10499
Milan Lopuha\"a-Zwakenberg
Milan Lopuha\"a-Zwakenberg, Zitao Li, Boris \v{S}kori\'c and Ninghui Li
Improving Frequency Estimation under Local Differential Privacy
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local Differential Privacy protocols are stochastic protocols used in data aggregation when individual users do not trust the data aggregator with their private data. In such protocols there is a fundamental tradeoff between user privacy and aggregator utility. In the setting of frequency estimation, established bounds on this tradeoff are either nonquantitative, or far from what is known to be attainable. In this paper, we use information-theoretical methods to significantly improve established bounds. We also show that the new bounds are attainable for binary inputs. Furthermore, our methods lead to improved frequency estimators, which we experimentally show to outperform state-of-the-art methods.
[ { "created": "Sun, 24 Nov 2019 10:51:43 GMT", "version": "v1" }, { "created": "Tue, 1 Sep 2020 08:02:53 GMT", "version": "v2" } ]
2020-09-04
[ [ "Lopuhaä-Zwakenberg", "Milan", "" ], [ "Li", "Zitao", "" ], [ "Škorić", "Boris", "" ], [ "Li", "Ninghui", "" ] ]
Local Differential Privacy protocols are stochastic protocols used in data aggregation when individual users do not trust the data aggregator with their private data. In such protocols there is a fundamental tradeoff between user privacy and aggregator utility. In the setting of frequency estimation, established bounds on this tradeoff are either nonquantitative, or far from what is known to be attainable. In this paper, we use information-theoretical methods to significantly improve established bounds. We also show that the new bounds are attainable for binary inputs. Furthermore, our methods lead to improved frequency estimators, which we experimentally show to outperform state-of-the-art methods.
2011.07661
Luca Parisi
Luca Parisi, Renfei Ma, Narrendar RaviChandran and Matteo Lanzillotta
hyper-sinh: An Accurate and Reliable Function from Shallow to Deep Learning in TensorFlow and Keras
19 pages, 6 listings/Python code snippets, 4 figures, 5 tables
null
10.1016/j.mlwa.2021.100112
null
cs.CV cs.AI cs.CL cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
This paper presents the 'hyper-sinh', a variation of the m-arcsinh activation function suitable for Deep Learning (DL)-based algorithms for supervised learning, such as Convolutional Neural Networks (CNN). hyper-sinh, developed in the open source Python libraries TensorFlow and Keras, is thus described and validated as an accurate and reliable activation function for both shallow and deep neural networks. Improvements in accuracy and reliability in image and text classification tasks on five (N = 5) benchmark data sets available from Keras are discussed. Experimental results demonstrate the overall competitive classification performance of both shallow and deep neural networks, obtained via this novel function. This function is evaluated with respect to gold standard activation functions, demonstrating its overall competitive accuracy and reliability for both image and text classification.
[ { "created": "Sun, 15 Nov 2020 23:38:59 GMT", "version": "v1" } ]
2021-08-19
[ [ "Parisi", "Luca", "" ], [ "Ma", "Renfei", "" ], [ "RaviChandran", "Narrendar", "" ], [ "Lanzillotta", "Matteo", "" ] ]
This paper presents the 'hyper-sinh', a variation of the m-arcsinh activation function suitable for Deep Learning (DL)-based algorithms for supervised learning, such as Convolutional Neural Networks (CNN). hyper-sinh, developed in the open source Python libraries TensorFlow and Keras, is thus described and validated as an accurate and reliable activation function for both shallow and deep neural networks. Improvements in accuracy and reliability in image and text classification tasks on five (N = 5) benchmark data sets available from Keras are discussed. Experimental results demonstrate the overall competitive classification performance of both shallow and deep neural networks, obtained via this novel function. This function is evaluated with respect to gold standard activation functions, demonstrating its overall competitive accuracy and reliability for both image and text classification.
1911.01318
Michael Segundo Ortiz
Michael Segundo Ortiz
Sequential/Spatial, a Survey of Interactive Information Retrieval Methods for Controlled Experimentation and Evaluation
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This survey presents studies that investigated non-spatial (sequential) and spatial information retrieval systems in parallel during a battery of information-seeking tasks with respect to user navigational behaviors, incidental learning, retrieval performance, cognitive abilities & load, direct manipulation of 2D & 3D interfaces, and satisfaction. I consider how information theory has contributed to the concepts of foraging, sense-making, exploration, and how the applied areas of interactive information retrieval (IIR) and cognitive/behavioral psychology have implemented these concepts into architecture, interface design, experimental design, user study, and evaluation methodology.
[ { "created": "Mon, 4 Nov 2019 16:30:39 GMT", "version": "v1" } ]
2019-11-05
[ [ "Ortz", "Michael Segundo", "" ] ]
This survey presents studies that investigated non-spatial (sequential) and spatial information retrieval systems in parallel during a battery of information-seeking tasks with respect to user navigational behaviors, incidental learning, retrieval performance, cognitive abilities & load, direct manipulation of 2D & 3D interfaces, and satisfaction. I consider how information theory has contributed to the concepts of foraging, sense-making, exploration, and how the applied areas of interactive information retrieval (IIR) and cognitive/behavioral psychology have implemented these concepts into architecture, interface design, experimental design, user study, and evaluation methodology.
2308.00862
Sarah Shoker
Sarah Shoker, Andrew Reddie, Sarah Barrington, Ruby Booth, Miles Brundage, Husanjot Chahal, Michael Depp, Bill Drexel, Ritwik Gupta, Marina Favaro, Jake Hecla, Alan Hickey, Margarita Konaev, Kirthi Kumar, Nathan Lambert, Andrew Lohn, Cullen O'Keefe, Nazneen Rajani, Michael Sellitto, Robert Trager, Leah Walker, Alexa Wehsener, Jessica Young
Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, the proliferation of weapons, and the interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California brought together a multistakeholder group to think through the tools and strategies to mitigate the potential risks introduced by foundation models to international security. Originating in the Cold War, confidence-building measures (CBMs) are actions that reduce hostility, prevent conflict escalation, and improve trust between parties. The flexibility of CBMs makes them a key instrument for navigating the rapid changes in the foundation model landscape. Participants identified the following CBMs that directly apply to foundation models and which are further explained in this conference proceedings: 1. crisis hotlines 2. incident sharing 3. model, transparency, and system cards 4. content provenance and watermarks 5. collaborative red teaming and table-top exercises and 6. dataset and evaluation sharing. Because most foundation model developers are non-government entities, many CBMs will need to involve a wider stakeholder community. These measures can be implemented either by AI labs or by relevant government actors.
[ { "created": "Tue, 1 Aug 2023 22:20:11 GMT", "version": "v1" }, { "created": "Thu, 3 Aug 2023 20:06:39 GMT", "version": "v2" } ]
2023-08-07
[ [ "Shoker", "Sarah", "" ], [ "Reddie", "Andrew", "" ], [ "Barrington", "Sarah", "" ], [ "Booth", "Ruby", "" ], [ "Brundage", "Miles", "" ], [ "Chahal", "Husanjot", "" ], [ "Depp", "Michael", "" ], [ "Drexel", "Bill", "" ], [ "Gupta", "Ritwik", "" ], [ "Favaro", "Marina", "" ], [ "Hecla", "Jake", "" ], [ "Hickey", "Alan", "" ], [ "Konaev", "Margarita", "" ], [ "Kumar", "Kirthi", "" ], [ "Lambert", "Nathan", "" ], [ "Lohn", "Andrew", "" ], [ "O'Keefe", "Cullen", "" ], [ "Rajani", "Nazneen", "" ], [ "Sellitto", "Michael", "" ], [ "Trager", "Robert", "" ], [ "Walker", "Leah", "" ], [ "Wehsener", "Alexa", "" ], [ "Young", "Jessica", "" ] ]
Foundation models could eventually introduce several pathways for undermining state security: accidents, inadvertent escalation, unintentional conflict, the proliferation of weapons, and the interference with human diplomacy are just a few on a long list. The Confidence-Building Measures for Artificial Intelligence workshop hosted by the Geopolitics Team at OpenAI and the Berkeley Risk and Security Lab at the University of California brought together a multistakeholder group to think through the tools and strategies to mitigate the potential risks introduced by foundation models to international security. Originating in the Cold War, confidence-building measures (CBMs) are actions that reduce hostility, prevent conflict escalation, and improve trust between parties. The flexibility of CBMs makes them a key instrument for navigating the rapid changes in the foundation model landscape. Participants identified the following CBMs that directly apply to foundation models and which are further explained in this conference proceedings: 1. crisis hotlines 2. incident sharing 3. model, transparency, and system cards 4. content provenance and watermarks 5. collaborative red teaming and table-top exercises and 6. dataset and evaluation sharing. Because most foundation model developers are non-government entities, many CBMs will need to involve a wider stakeholder community. These measures can be implemented either by AI labs or by relevant government actors.
1902.09711
Jing Yan
Jing Nathan Yan, Oliver Schulte, Jiannan Wang, Reynold Cheng
Detecting Data Errors with Statistical Constraints
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A powerful approach to detecting erroneous data is to check which potentially dirty data records are incompatible with a user's domain knowledge. Previous approaches allow the user to specify domain knowledge in the form of logical constraints (e.g., functional dependency and denial constraints). We extend the constraint-based approach by introducing a novel class of statistical constraints (SCs). An SC treats each column as a random variable, and enforces an independence or dependence relationship between two (or a few) random variables. Statistical constraints are expressive, allowing the user to specify a wide range of domain knowledge, beyond traditional integrity constraints. Furthermore, they work harmoniously with downstream statistical modeling. We develop CODED, an SC-Oriented Data Error Detection system that supports three key tasks: (1) Checking whether an SC is violated or not on a given dataset, (2) Identifying the top-k records that contribute the most to the violation of an SC, and (3) Checking whether a set of input SCs has conflicts or not. We present effective solutions for each task. Experiments on synthetic and real-world data illustrate how SCs apply to error detection, and provide evidence that CODED performs better than state-of-the-art approaches.
[ { "created": "Tue, 26 Feb 2019 02:44:20 GMT", "version": "v1" } ]
2019-02-27
[ [ "Yan", "Jing Nathan", "" ], [ "Schulte", "Oliver", "" ], [ "Wang", "Jiannan", "" ], [ "Cheng", "Reynold", "" ] ]
A powerful approach to detecting erroneous data is to check which potentially dirty data records are incompatible with a user's domain knowledge. Previous approaches allow the user to specify domain knowledge in the form of logical constraints (e.g., functional dependency and denial constraints). We extend the constraint-based approach by introducing a novel class of statistical constraints (SCs). An SC treats each column as a random variable, and enforces an independence or dependence relationship between two (or a few) random variables. Statistical constraints are expressive, allowing the user to specify a wide range of domain knowledge, beyond traditional integrity constraints. Furthermore, they work harmoniously with downstream statistical modeling. We develop CODED, an SC-Oriented Data Error Detection system that supports three key tasks: (1) Checking whether an SC is violated or not on a given dataset, (2) Identifying the top-k records that contribute the most to the violation of an SC, and (3) Checking whether a set of input SCs has conflicts or not. We present effective solutions for each task. Experiments on synthetic and real-world data illustrate how SCs apply to error detection, and provide evidence that CODED performs better than state-of-the-art approaches.
2305.09641
Alexandros Lattas
Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Baris Gecer, Jiankang Deng, Stefanos Zafeiriou
FitMe: Deep Photorealistic 3D Morphable Model Avatars
Accepted at CVPR 2023, project page at https://lattas.github.io/fitme , 17 pages including supplementary material
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce FitMe, a facial reflectance model and a differentiable rendering optimization pipeline, that can be used to acquire high-fidelity renderable human avatars from single or multiple images. The model consists of a multi-modal style-based generator, that captures facial appearance in terms of diffuse and specular reflectance, and a PCA-based shape model. We employ a fast differentiable rendering process that can be used in an optimization pipeline, while also achieving photorealistic facial shading. Our optimization process accurately captures both the facial reflectance and shape in high detail, by exploiting the expressivity of the style-based latent representation and of our shape model. FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images, while it produces impressive scan-like results when given multiple unconstrained facial images pertaining to the same identity. In contrast with recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh and texture-based avatars, that can be used by end-user applications.
[ { "created": "Tue, 16 May 2023 17:42:45 GMT", "version": "v1" } ]
2023-05-17
[ [ "Lattas", "Alexandros", "" ], [ "Moschoglou", "Stylianos", "" ], [ "Ploumpis", "Stylianos", "" ], [ "Gecer", "Baris", "" ], [ "Deng", "Jiankang", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
In this paper, we introduce FitMe, a facial reflectance model and a differentiable rendering optimization pipeline, that can be used to acquire high-fidelity renderable human avatars from single or multiple images. The model consists of a multi-modal style-based generator, that captures facial appearance in terms of diffuse and specular reflectance, and a PCA-based shape model. We employ a fast differentiable rendering process that can be used in an optimization pipeline, while also achieving photorealistic facial shading. Our optimization process accurately captures both the facial reflectance and shape in high detail, by exploiting the expressivity of the style-based latent representation and of our shape model. FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images, while it produces impressive scan-like results when given multiple unconstrained facial images pertaining to the same identity. In contrast with recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh and texture-based avatars, that can be used by end-user applications.
1712.07810
Zhao Li
Zhao Li, Fengjuan Guo, Kang G Shin, Yinghou Liu, and Jia Liu
Interference Steering to Manage Interference
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To enable densely deployed base stations (BSs) or access points (APs) to serve an increasing number of users and provide diverse mobile services, we need to improve spectrum utilization in wireless communication networks. Although spectral efficiency (SE) can be enhanced via smart and dynamic resource allocation, interference has become a major impediment in improving SE. There have been numerous interference management (IM) proposals at the interfering transmitter or the victim transmitter/receiver separately or cooperatively. Moreover, the existing IM schemes rely mainly on the use of channel state information (CSI). However, in some communication scenarios, the option to adjust the interferer is not available, and, in the case of downlink transmission, it is always difficult or even impossible for the victim receiver to acquire necessary information for IM. Based on the above observations, we first propose a novel IM technique, called interference steering (IS). By making use of both the CSI of, and the data carried in, the interfering signal, IS generates a signal to modify the spatial feature of the original interference, so that the steered interference at the victim receiver is orthogonal to its intended signal. We then apply IS to an infrastructure-based enterprise wireless local area network (WLAN) in which the same frequency band is reused by adjacent basic service sets (BSSs) with overlapping areas. With IS, multiple nearby APs could simultaneously transmit data on the same channel to their mobile stations (STAs), thus enhancing spectrum reuse. Our in-depth simulation results show that IS significantly improves network SE over existing IM schemes.
[ { "created": "Thu, 21 Dec 2017 06:32:43 GMT", "version": "v1" } ]
2017-12-22
[ [ "Li", "Zhao", "" ], [ "Guo", "Fengjuan", "" ], [ "Shin", "Kang G", "" ], [ "Liu", "Yinghou", "" ], [ "Liu", "Jia", "" ] ]
To enable densely deployed base stations (BSs) or access points (APs) to serve an increasing number of users and provide diverse mobile services, we need to improve spectrum utilization in wireless communication networks. Although spectral efficiency (SE) can be enhanced via smart and dynamic resource allocation, interference has become a major impediment in improving SE. There have been numerous interference management (IM) proposals at the interfering transmitter or the victim transmitter/receiver separately or cooperatively. Moreover, the existing IM schemes rely mainly on the use of channel state information (CSI). However, in some communication scenarios, the option to adjust the interferer is not available, and, in the case of downlink transmission, it is always difficult or even impossible for the victim receiver to acquire necessary information for IM. Based on the above observations, we first propose a novel IM technique, called interference steering (IS). By making use of both the CSI of, and the data carried in, the interfering signal, IS generates a signal to modify the spatial feature of the original interference, so that the steered interference at the victim receiver is orthogonal to its intended signal. We then apply IS to an infrastructure-based enterprise wireless local area network (WLAN) in which the same frequency band is reused by adjacent basic service sets (BSSs) with overlapping areas. With IS, multiple nearby APs could simultaneously transmit data on the same channel to their mobile stations (STAs), thus enhancing spectrum reuse. Our in-depth simulation results show that IS significantly improves network SE over existing IM schemes.
2211.09019
Minttu Alakuijala
Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce and Cordelia Schmid
Learning Reward Functions for Robotic Manipulation by Observing Humans
null
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Observing a human demonstrator manipulate objects provides a rich, scalable and inexpensive source of data for learning robotic policies. However, transferring skills from human videos to a robotic manipulator poses several challenges, not least a difference in action and observation spaces. In this work, we use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies. Thanks to the diversity of this training data, the learned reward function sufficiently generalizes to image observations from a previously unseen robot embodiment and environment to provide a meaningful prior for directed exploration in reinforcement learning. We propose two methods for scoring states relative to a goal image: through direct temporal regression, and through distances in an embedding space obtained with time-contrastive learning. By conditioning the function on a goal image, we are able to reuse one model across a variety of tasks. Unlike prior work on leveraging human videos to teach robots, our method, Human Offline Learned Distances (HOLD), requires neither a priori data from the robot environment, nor a set of task-specific human demonstrations, nor a predefined notion of correspondence across morphologies, yet it is able to accelerate training of several manipulation tasks on a simulated robot arm compared to using only a sparse reward obtained from task completion.
[ { "created": "Wed, 16 Nov 2022 16:26:48 GMT", "version": "v1" }, { "created": "Tue, 7 Mar 2023 16:29:49 GMT", "version": "v2" } ]
2023-03-08
[ [ "Alakuijala", "Minttu", "" ], [ "Dulac-Arnold", "Gabriel", "" ], [ "Mairal", "Julien", "" ], [ "Ponce", "Jean", "" ], [ "Schmid", "Cordelia", "" ] ]
Observing a human demonstrator manipulate objects provides a rich, scalable and inexpensive source of data for learning robotic policies. However, transferring skills from human videos to a robotic manipulator poses several challenges, not least a difference in action and observation spaces. In this work, we use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies. Thanks to the diversity of this training data, the learned reward function sufficiently generalizes to image observations from a previously unseen robot embodiment and environment to provide a meaningful prior for directed exploration in reinforcement learning. We propose two methods for scoring states relative to a goal image: through direct temporal regression, and through distances in an embedding space obtained with time-contrastive learning. By conditioning the function on a goal image, we are able to reuse one model across a variety of tasks. Unlike prior work on leveraging human videos to teach robots, our method, Human Offline Learned Distances (HOLD), requires neither a priori data from the robot environment, nor a set of task-specific human demonstrations, nor a predefined notion of correspondence across morphologies, yet it is able to accelerate training of several manipulation tasks on a simulated robot arm compared to using only a sparse reward obtained from task completion.
2204.07749
Mamtaj Akter
Mamtaj Akter, Amy Godfrey, Jess Kropczynski, Heather Lipford, and Pamela Wisniewski
From Parental Control to Joint Family Oversight: Can Parents and Teens Manage Mobile Online Safety and Privacy as Equals?
null
null
10.1145/3512904
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Our research aims to highlight and alleviate the complex tensions around online safety, privacy, and smartphone usage in families so that parents and teens can work together to better manage mobile privacy and security-related risks. We developed a mobile application ("app") for Community Oversight of Privacy and Security ("CO-oPS") and had parents and teens assess whether it would be applicable for use with their families. CO-oPS is an Android app that allows a group of users to co-monitor the apps installed on one another's devices and the privacy permissions granted to those apps. We conducted a study with 19 parent-teen (ages 13-17) pairs to understand how they currently managed mobile safety and app privacy within their family and then had them install, use, and evaluate the CO-oPS app. We found that both parents and teens gave little consideration to online safety and privacy before installing new apps or granting privacy permissions. When using CO-oPS, participants liked how the app increased transparency into one another's devices in a way that facilitated communication, but were less inclined to use features for in-app messaging or to hide apps from one another. Key themes related to power imbalances between parents and teens surfaced that made co-management challenging. Parents were more open to collaborative oversight than teens, who felt that it was not their place to monitor their parents, even though both often believed parents lacked the technological expertise to monitor themselves. Our study sheds light on why collaborative practices for managing online safety and privacy within families may be beneficial but also quite difficult to implement in practice. We provide recommendations for overcoming these challenges based on the insights gained from our study.
[ { "created": "Sat, 16 Apr 2022 08:30:08 GMT", "version": "v1" }, { "created": "Tue, 16 Apr 2024 03:18:34 GMT", "version": "v2" } ]
2024-04-17
[ [ "Akter", "Mamtaj", "" ], [ "Godfrey", "Amy", "" ], [ "Kropczynski", "Jess", "" ], [ "Lipford", "Heather", "" ], [ "Wisniewski", "Pamela", "" ] ]
Our research aims to highlight and alleviate the complex tensions around online safety, privacy, and smartphone usage in families so that parents and teens can work together to better manage mobile privacy and security-related risks. We developed a mobile application ("app") for Community Oversight of Privacy and Security ("CO-oPS") and had parents and teens assess whether it would be applicable for use with their families. CO-oPS is an Android app that allows a group of users to co-monitor the apps installed on one another's devices and the privacy permissions granted to those apps. We conducted a study with 19 parent-teen (ages 13-17) pairs to understand how they currently managed mobile safety and app privacy within their family and then had them install, use, and evaluate the CO-oPS app. We found that both parents and teens gave little consideration to online safety and privacy before installing new apps or granting privacy permissions. When using CO-oPS, participants liked how the app increased transparency into one another's devices in a way that facilitated communication, but were less inclined to use features for in-app messaging or to hide apps from one another. Key themes related to power imbalances between parents and teens surfaced that made co-management challenging. Parents were more open to collaborative oversight than teens, who felt that it was not their place to monitor their parents, even though both often believed parents lacked the technological expertise to monitor themselves. Our study sheds light on why collaborative practices for managing online safety and privacy within families may be beneficial but also quite difficult to implement in practice. We provide recommendations for overcoming these challenges based on the insights gained from our study.
1905.11244
Andrew Collins Mr
Andrew Collins, Joeran Beel
Document Embeddings vs. Keyphrases vs. Terms: An Online Evaluation in Digital Library Recommender Systems
null
null
null
null
cs.IR cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many recommendation algorithms are available to digital library recommender system operators. The effectiveness of algorithms is largely unreported by way of online evaluation. We compare a standard term-based recommendation approach to two promising approaches for related-article recommendation in digital libraries: document embeddings, and keyphrases. We evaluate the consistency of their performance across multiple scenarios. Through our recommender-as-a-service Mr. DLib, we delivered 33.5M recommendations to users of Sowiport and Jabref over the course of 19 months, from March 2017 to October 2018. The effectiveness of the algorithms differs significantly between Sowiport and Jabref (Wilcoxon rank-sum test; p < 0.05). There is a ~400% difference in effectiveness between the best and worst algorithm in both scenarios separately. The best performing algorithm in Sowiport (terms) is the worst performing in Jabref. The best performing algorithm in Jabref (keyphrases) is 70% worse in Sowiport than Sowiport's best algorithm (click-through rate; 0.1% terms, 0.03% keyphrases).
[ { "created": "Mon, 27 May 2019 14:05:50 GMT", "version": "v1" } ]
2019-05-28
[ [ "Collins", "Andrew", "" ], [ "Beel", "Joeran", "" ] ]
Many recommendation algorithms are available to digital library recommender system operators. The effectiveness of algorithms is largely unreported by way of online evaluation. We compare a standard term-based recommendation approach to two promising approaches for related-article recommendation in digital libraries: document embeddings, and keyphrases. We evaluate the consistency of their performance across multiple scenarios. Through our recommender-as-a-service Mr. DLib, we delivered 33.5M recommendations to users of Sowiport and Jabref over the course of 19 months, from March 2017 to October 2018. The effectiveness of the algorithms differs significantly between Sowiport and Jabref (Wilcoxon rank-sum test; p < 0.05). There is a ~400% difference in effectiveness between the best and worst algorithm in both scenarios separately. The best performing algorithm in Sowiport (terms) is the worst performing in Jabref. The best performing algorithm in Jabref (keyphrases) is 70% worse in Sowiport than Sowiport's best algorithm (click-through rate; 0.1% terms, 0.03% keyphrases).