Schema (one entry per field; lengths are the observed minimum to maximum):
id: string, 9 to 10 chars
submitter: string, 1 to 64 chars
authors: string, 4 to 20.7k chars
title: string, 4 to 246 chars
comments: string, 1 to 523 chars
journal-ref: string, 4 to 404 chars
doi: string, 11 to 153 chars
report-no: string, 2 to 254 chars
categories: string, 5 to 98 chars
license: string, 9 distinct values
orig_abstract: string, 14 to 3.35k chars
versions: list, 1 to 60 items
update_date: string, 10 chars
authors_parsed: list, 1 to 1.35k items
abstract: string, 11 to 3.34k chars
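The records below follow this schema. As a minimal loading sketch — assuming the records are stored as JSON Lines, one object per row; the file name arxiv_records.jsonl is a placeholder, not part of the dataset — each row can be read with the standard library:

```python
import json

# Field names from the schema above, in column order.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def load_records(path):
    """Yield one dict per JSON line; fields absent from a row become None."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            yield {name: row.get(name) for name in FIELDS}

# Example: print the id and title of every record.
for rec in load_records("arxiv_records.jsonl"):
    print(rec["id"], "-", rec["title"])
```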
id: 2307.12309
submitter: Jiepan Li
authors: Wei He, Jiepan Li, Weinan Cao, Liangpei Zhang, Hongyan Zhang
title: Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Building extraction aims to segment building pixels from remote sensing images and plays an essential role in many applications, such as city planning and urban dynamic monitoring. Over the past few years, deep learning methods with encoder-decoder architectures have achieved remarkable performance due to their powerful feature representation capability. Nevertheless, due to the varying scales and styles of buildings, conventional deep learning models always suffer from uncertain predictions and cannot accurately distinguish the complete footprints of buildings from the complex distribution of ground objects, leading to a large degree of omission and commission. In this paper, we recognize the importance of uncertain predictions and propose a novel and straightforward Uncertainty-Aware Network (UANet) to alleviate this problem. To verify the performance of the proposed UANet, we conduct extensive experiments on three public building datasets: the WHU building dataset, the Massachusetts building dataset, and the Inria aerial image dataset. The results demonstrate that UANet outperforms other state-of-the-art algorithms by a large margin.
versions: [ { "created": "Sun, 23 Jul 2023 12:42:15 GMT", "version": "v1" } ]
update_date: 2023-07-25
authors_parsed: [ [ "He", "Wei", "" ], [ "Li", "Jiepan", "" ], [ "Cao", "Weinan", "" ], [ "Zhang", "Liangpei", "" ], [ "Zhang", "Hongyan", "" ] ]
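The versions and authors_parsed fields are nested lists, shown above as inline JSON. A small sketch of unpacking them, assuming record dicts shaped like the row above (parse_versions and format_authors are illustrative helpers, not part of the dataset):

```python
from datetime import datetime

def parse_versions(versions):
    """Turn each version's RFC 2822 'created' stamp into a datetime."""
    return [(v["version"],
             datetime.strptime(v["created"], "%a, %d %b %Y %H:%M:%S %Z"))
            for v in versions]

def format_authors(authors_parsed):
    """Rebuild 'Given Family' strings from [family, given, suffix, ...] rows."""
    return [" ".join(p for p in (row[1], row[0]) if p) for row in authors_parsed]

# The values from the first record above:
versions = [{"created": "Sun, 23 Jul 2023 12:42:15 GMT", "version": "v1"}]
authors_parsed = [["He", "Wei", ""], ["Li", "Jiepan", ""], ["Cao", "Weinan", ""],
                  ["Zhang", "Liangpei", ""], ["Zhang", "Hongyan", ""]]

print(parse_versions(versions))       # [('v1', datetime.datetime(2023, 7, 23, 12, 42, 15))]
print(format_authors(authors_parsed)) # ['Wei He', 'Jiepan Li', 'Weinan Cao', ...]
```

Some later rows carry a fourth element per author (an affiliation, e.g. "Michigan State University"); the helper above ignores anything past the given and family names.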
id: 1901.06765
submitter: Guoxian Song
authors: Guoxian Song and Jianfei Cai and Tat-Jen Cham and Jianmin Zheng and Juyong Zhang and Henry Fuchs
title: Real-time 3D Face-Eye Performance Capture of a Person Wearing VR Headset
comments: ACM Multimedia Conference 2018
journal-ref: null
doi: 10.1145/3240508.3240570
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Teleconference or telepresence based on virtual reality (VR) head-mounted display (HMD) devices is a promising application, since HMDs can provide an immersive experience for users. However, to facilitate face-to-face communication between HMD users, real-time 3D facial performance capture of a person wearing an HMD is needed, which is very challenging due to the large occlusion caused by the HMD. The few existing solutions are complex in either setup or approach, and lack performance capture of 3D eye-gaze movement. In this paper, we propose a convolutional neural network (CNN) based solution for real-time 3D face-eye performance capture of HMD users without complex modifications to devices. To address the lack of training data, we generate a massive dataset of paired HMD face labels by data synthesis and collect a VR-IR eye dataset from multiple subjects. We then train a dense-fitting network for the facial region and an eye-gaze network to regress 3D eye model parameters. Extensive experimental results demonstrate that our system can efficiently and effectively produce, in real time, a vivid personalized 3D avatar with the correct identity, pose, expression, and eye motion of the HMD user.
versions: [ { "created": "Mon, 21 Jan 2019 01:58:15 GMT", "version": "v1" } ]
update_date: 2019-01-23
authors_parsed: [ [ "Song", "Guoxian", "" ], [ "Cai", "Jianfei", "" ], [ "Cham", "Tat-Jen", "" ], [ "Zheng", "Jianmin", "" ], [ "Zhang", "Juyong", "" ], [ "Fuchs", "Henry", "" ] ]
id: 2407.09704
submitter: Aleksandar Shtedritski
authors: Viktor Mihaylov, Aleksandar Shtedritski
title: What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender. Drawing inspiration from seminal works in psycholinguistics, particularly the study of gender's influence on language perception, we leverage multilingual LLMs to revisit and expand upon the foundational experiments of Boroditsky (2003). Employing LLMs as a novel method for examining psycholinguistic biases related to grammatical gender, we prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender. In particular, we look at adjective co-occurrences across gender and languages, and train a binary classifier to predict grammatical gender given adjectives an LLM uses to describe a noun. Surprisingly, we find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability. We show that while LLMs may describe words differently in different languages, they are biased similarly.
versions: [ { "created": "Fri, 12 Jul 2024 22:10:16 GMT", "version": "v1" } ]
update_date: 2024-07-16
authors_parsed: [ [ "Mihaylov", "Viktor", "" ], [ "Shtedritski", "Aleksandar", "" ] ]
id: 2404.04549
submitter: Philipp Petersen
authors: A. Martina Neuman, Philipp Christian Petersen
title: Efficient Learning Using Spiking Neural Networks Equipped With Affine Encoders and Decoders
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NE cs.LG math.FA stat.ML
license: http://creativecommons.org/licenses/by/4.0/
abstract: We study the learning problem associated with spiking neural networks. Specifically, we consider hypothesis sets of spiking neural networks with affine temporal encoders and decoders and simple spiking neurons having only positive synaptic weights. We demonstrate that the positivity of the weights continues to enable a wide range of expressivity results, including rate-optimal approximation of smooth functions or approximation without the curse of dimensionality. Moreover, positive-weight spiking neural networks are shown to depend continuously on their parameters, which facilitates classical covering number-based generalization statements. Finally, we observe that from a generalization perspective, contrary to feedforward neural networks or previous results for general spiking neural networks, the depth has little to no adverse effect on the generalization capabilities.
versions: [ { "created": "Sat, 6 Apr 2024 08:17:07 GMT", "version": "v1" } ]
update_date: 2024-04-09
authors_parsed: [ [ "Neuman", "A. Martina", "" ], [ "Petersen", "Philipp Christian", "" ] ]
id: 2212.03601
submitter: Eva Ponick
authors: Eva Ponick and Gabriele Wieczorek
title: Artificial Intelligence in Governance, Risk and Compliance: Results of a study on potentials for the application of artificial intelligence (AI) in governance, risk and compliance (GRC)
comments: in German (55 pages) and in English (55 pages), total 110 pages, 32 figures. Künstliche Intelligenz in Governance, Risk und Compliance: Ergebnisse einer Studie zu Anwendungspotentialen von Künstlicher Intelligenz (KI) in Governance, Risk und Compliance (GRC)
journal-ref: null
doi: null
report-no: null
categories: cs.CY
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: The digital transformation leads to fundamental changes in organizational structures. To be able to apply new technologies more than just selectively, processes in companies must be revised and functional units must be viewed holistically, especially with regard to interfaces. Target-oriented management decisions are made, among other things, on the basis of risk management and compliance, in combination with the internal control system, as governance functions. The effectiveness and efficiency of these functions is decisive for following guidelines and regulatory requirements, as well as for evaluating alternative courses of action for companies. GRC (Governance, Risk and Compliance) denotes an integrated governance approach in which the aforementioned governance functions are interlinked rather than separated from each other. Artificial intelligence is an important technology of the digital transformation. It offers a broad range of methods, such as machine learning, artificial neural networks, natural language processing, and deep learning, with many possible applications in business areas from purchasing to production and customer service. Artificial intelligence is also being used in GRC, for example for processing and analyzing unstructured data sets. This study contains the results of a survey conducted in 2021 to identify and analyze potential applications of artificial intelligence in GRC.
versions: [ { "created": "Wed, 7 Dec 2022 12:36:10 GMT", "version": "v1" }, { "created": "Wed, 8 May 2024 16:18:57 GMT", "version": "v2" } ]
update_date: 2024-05-18
authors_parsed: [ [ "Ponick", "Eva", "" ], [ "Wieczorek", "Gabriele", "" ] ]
id: 2401.13996
submitter: Cheng Qian
authors: Cheng Qian, Shihao Liang, Yujia Qin, Yining Ye, Xin Cong, Yankai Lin, Yesai Wu, Zhiyuan Liu, Maosong Sun
title: Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution
comments: 18 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This paper introduces Investigate-Consolidate-Exploit (ICE), a novel strategy for enhancing the adaptability and flexibility of AI agents through inter-task self-evolution. Unlike existing methods focused on intra-task learning, ICE promotes the transfer of knowledge between tasks for genuine self-evolution, similar to human experience learning. The strategy dynamically investigates planning and execution trajectories, consolidates them into simplified workflows and pipelines, and exploits them for improved task execution. Our experiments on the XAgent framework demonstrate ICE's effectiveness, reducing API calls by as much as 80% and significantly decreasing the demand for the model's capability. Specifically, when combined with GPT-3.5, ICE's performance matches that of raw GPT-4 across various agent tasks. We argue that this self-evolution approach represents a paradigm shift in agent design, contributing to a more robust AI community and ecosystem, and moving a step closer to full autonomy.
versions: [ { "created": "Thu, 25 Jan 2024 07:47:49 GMT", "version": "v1" } ]
update_date: 2024-01-26
authors_parsed: [ [ "Qian", "Cheng", "" ], [ "Liang", "Shihao", "" ], [ "Qin", "Yujia", "" ], [ "Ye", "Yining", "" ], [ "Cong", "Xin", "" ], [ "Lin", "Yankai", "" ], [ "Wu", "Yesai", "" ], [ "Liu", "Zhiyuan", "" ], [ "Sun", "Maosong", "" ] ]
id: 2406.13724
submitter: Xuehao Zhai
authors: Xuehao Zhai, Junqi Jiang, Adam Dejl, Antonio Rago, Fangce Guo, Francesca Toni, Aruna Sivakumar
title: Heterogeneous Graph Neural Networks with Post-hoc Explanations for Multi-modal and Explainable Land Use Inference
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: Urban land use inference is a critically important task that aids in city planning and policy-making. Recently, the increased use of sensor and location technologies has facilitated the collection of multi-modal mobility data, offering valuable insights into daily activity patterns. Many studies have adopted advanced data-driven techniques to explore the potential of these multi-modal mobility data in land use inference. However, existing studies often process samples independently, ignoring the spatial correlations among neighbouring objects and heterogeneity among different services. Furthermore, the inherently low interpretability of complex deep learning methods poses a significant barrier in urban planning, where transparency and extrapolability are crucial for making long-term policy decisions. To overcome these challenges, we introduce an explainable framework for inferring land use that synergises heterogeneous graph neural networks (HGNs) with Explainable AI techniques, enhancing both accuracy and explainability. The empirical experiments demonstrate that the proposed HGNs significantly outperform baseline graph neural networks for all six land-use indicators, especially in terms of 'office' and 'sustenance'. As explanations, we consider feature attribution and counterfactual explanations. The analysis of feature attribution explanations shows that the symmetrical nature of the 'residence' and 'work' categories predicted by the framework aligns well with the commuter's 'work' and 'recreation' activities in London. The analysis of the counterfactual explanations reveals that variations in node features and types are primarily responsible for the differences observed between the predicted land use distribution and the ideal mixed state. These analyses demonstrate that the proposed HGNs can suitably support urban stakeholders in their urban planning and policy-making.
versions: [ { "created": "Wed, 19 Jun 2024 17:39:10 GMT", "version": "v1" } ]
update_date: 2024-06-21
authors_parsed: [ [ "Zhai", "Xuehao", "" ], [ "Jiang", "Junqi", "" ], [ "Dejl", "Adam", "" ], [ "Rago", "Antonio", "" ], [ "Guo", "Fangce", "" ], [ "Toni", "Francesca", "" ], [ "Sivakumar", "Aruna", "" ] ]
id: 1811.11242
submitter: Gerrit van den Burg
authors: Gerrit J.J. van den Burg, Alfredo Nazabal, Charles Sutton
title: Wrangling Messy CSV Files by Detecting Row and Type Patterns
comments: null
journal-ref: Data Mining and Knowledge Discovery (July, 2019)
doi: 10.1007/s10618-019-00646-y
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: It is well known that data scientists spend the majority of their time on preparing data for analysis. One of the first steps in this preparation phase is to load the data from the raw storage format. Comma-separated value (CSV) files are a popular format for tabular data due to their simplicity and ostensible ease of use. However, formatting standards for CSV files are not followed consistently, so each file requires manual inspection and potentially repair before the data can be loaded, an enormous waste of human effort for a task that should be one of the simplest parts of data science. The first and most essential step in retrieving data from CSV files is deciding on the dialect of the file, such as the cell delimiter and quote character. Existing dialect detection approaches are few and non-robust. In this paper, we propose a dialect detection method based on a novel measure of data consistency of parsed data files. Our method achieves 97% overall accuracy on a large corpus of real-world CSV files and improves the accuracy on messy CSV files by almost 22% compared to existing approaches, including those in the Python standard library.
versions: [ { "created": "Tue, 27 Nov 2018 20:26:33 GMT", "version": "v1" } ]
update_date: 2019-07-29
authors_parsed: [ [ "Burg", "Gerrit J. J. van den", "" ], [ "Nazabal", "Alfredo", "" ], [ "Sutton", "Charles", "" ] ]
id: 2204.09957
submitter: Xi Li
authors: Peihan Miao, Wei Su, Gaoang Wang, Xuewei Li, Xi Li
title: Self-paced Multi-grained Cross-modal Interaction Modeling for Referring Expression Comprehension
comments: Accepted by TIP
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: As an important and challenging problem in vision-language tasks, referring expression comprehension (REC) generally requires a large amount of multi-grained information of visual and linguistic modalities to realize accurate reasoning. In addition, due to the diversity of visual scenes and the variation of linguistic expressions, some hard examples have much more abundant multi-grained information than others. How to aggregate multi-grained information from different modalities and extract abundant knowledge from hard examples is crucial in the REC task. To address the aforementioned challenges, in this paper, we propose a Self-paced Multi-grained Cross-modal Interaction Modeling framework, which improves the language-to-vision localization ability through innovations in network structure and learning mechanism. Concretely, we design a transformer-based multi-grained cross-modal attention, which effectively utilizes the inherent multi-grained information in visual and linguistic encoders. Furthermore, considering the large variance of samples, we propose a self-paced sample informativeness learning to adaptively enhance the network learning for samples containing abundant multi-grained information. The proposed framework significantly outperforms state-of-the-art methods on widely used datasets, such as RefCOCO, RefCOCO+, RefCOCOg, and ReferItGame, demonstrating the effectiveness of our method.
versions: [ { "created": "Thu, 21 Apr 2022 08:32:47 GMT", "version": "v1" }, { "created": "Sun, 9 Oct 2022 09:30:11 GMT", "version": "v2" }, { "created": "Tue, 12 Mar 2024 08:13:27 GMT", "version": "v3" } ]
update_date: 2024-03-13
authors_parsed: [ [ "Miao", "Peihan", "" ], [ "Su", "Wei", "" ], [ "Wang", "Gaoang", "" ], [ "Li", "Xuewei", "" ], [ "Li", "Xi", "" ] ]
id: 2004.14592
submitter: Chongyang Tao
authors: Jiayi Zhang, Chongyang Tao, Zhenjing Xu, Qiaojing Xie, Wei Chen, Rui Yan
title: EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text Conversation
comments: 10 pages, SIGIR 2019
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.IR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Generating high-quality responses has always been a challenge for human-computer dialogue systems. Existing dialogue systems generally derive from either retrieval-based or generative approaches, both of which have their own pros and cons. Despite the natural idea of an ensemble of the two, existing ensemble methods have focused only on leveraging one approach to enhance the other; we argue, however, that the two can be further mutually enhanced with a proper training strategy. In this paper, we propose ensembleGAN, an adversarial learning framework for enhancing a retrieval-generation ensemble model in an open-domain conversation scenario. It consists of a language-model-like generator, a ranker generator, and a ranker discriminator. Aiming at generating responses that approximate the ground truth and receive high ranking scores from the discriminator, the two generators learn to generate improved, highly relevant responses and competitive unobserved candidates, respectively, while the discriminative ranker is trained to distinguish true responses from adversarial ones, thus featuring the merits of both generator counterparts. Experimental results on a large short-text conversation dataset demonstrate the effectiveness of ensembleGAN through improvements on both human and automatic evaluation metrics.
versions: [ { "created": "Thu, 30 Apr 2020 05:59:12 GMT", "version": "v1" } ]
update_date: 2020-05-01
authors_parsed: [ [ "Zhang", "Jiayi", "" ], [ "Tao", "Chongyang", "" ], [ "Xu", "Zhenjing", "" ], [ "Xie", "Qiaojing", "" ], [ "Chen", "Wei", "" ], [ "Yan", "Rui", "" ] ]
id: 2009.09646
submitter: Aleksandar Shurbevski
authors: Naveed Ahmed Azam, Jianshen Zhu, Yanming Sun, Yu Shi, Aleksandar Shurbevski, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
title: A Novel Method for Inference of Acyclic Chemical Compounds with Bounded Branch-height Based on Artificial Neural Networks and Integer Programming
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.CE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Analysis of chemical graphs is a major research topic in computational molecular biology due to its potential applications to drug design. One approach is inverse quantitative structure activity/property relationship (inverse QSAR/QSPR) analysis, which is to infer chemical structures from given chemical activities/properties. Recently, a framework has been proposed for inverse QSAR/QSPR using artificial neural networks (ANN) and mixed integer linear programming (MILP). This method consists of a prediction phase and an inverse prediction phase. In the first phase, a feature vector $f(G)$ of a chemical graph $G$ is introduced and a prediction function $\psi$ on a chemical property $\pi$ is constructed with an ANN. In the second phase, given a target value $y^*$ of property $\pi$, a feature vector $x^*$ is inferred by solving an MILP formulated from the trained ANN so that $\psi(x^*)$ is close to $y^*$, and then a set of chemical structures $G^*$ such that $f(G^*) = x^*$ is enumerated by a graph search algorithm. The framework has been applied to the case of chemical compounds with cycle index up to 2. Computational results on instances with $n$ non-hydrogen atoms show that a feature vector $x^*$ can be inferred for up to around $n=40$, whereas graphs $G^*$ can be enumerated for up to $n=15$. When applied to the case of chemical acyclic graphs, the maximum computable diameter of $G^*$ was around 8. We introduce a new characterization of graph structure, "branch-height," based on which an MILP formulation and a graph search algorithm are designed for chemical acyclic graphs. The results of computational experiments using properties such as octanol/water partition coefficient, boiling point and heat of combustion suggest that the proposed method can infer chemical acyclic graphs $G^*$ with $n=50$ and diameter 30.
versions: [ { "created": "Mon, 21 Sep 2020 07:11:59 GMT", "version": "v1" } ]
update_date: 2020-09-22
authors_parsed: [ [ "Azam", "Naveed Ahmed", "" ], [ "Zhu", "Jianshen", "" ], [ "Sun", "Yanming", "" ], [ "Shi", "Yu", "" ], [ "Shurbevski", "Aleksandar", "" ], [ "Zhao", "Liang", "" ], [ "Nagamochi", "Hiroshi", "" ], [ "Akutsu", "Tatsuya", "" ] ]
id: 1710.10532
submitter: Daniel Kasenberg
authors: Daniel Kasenberg, Matthias Scheutz
title: Interpretable Apprenticeship Learning with Temporal Logic Specifications
comments: Accepted to the 56th IEEE Conference on Decision and Control (CDC 2017)
journal-ref: null
doi: null
report-no: null
categories: cs.SY cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Recent work has addressed using formulas in linear temporal logic (LTL) as specifications for agents planning in Markov Decision Processes (MDPs). We consider the inverse problem: inferring an LTL specification from demonstrated behavior trajectories in MDPs. We formulate this as a multiobjective optimization problem, and describe state-based ("what actually happened") and action-based ("what the agent expected to happen") objective functions based on a notion of "violation cost". We demonstrate the efficacy of the approach by employing genetic programming to solve this problem in two simple domains.
versions: [ { "created": "Sat, 28 Oct 2017 22:01:55 GMT", "version": "v1" } ]
update_date: 2017-11-02
authors_parsed: [ [ "Kasenberg", "Daniel", "" ], [ "Scheutz", "Matthias", "" ] ]
id: 2112.04745
submitter: Fei Ma
authors: Fei Ma, Renbo Zhu, Ping Wang
title: OPTT: Optimal Piecewise Transformation Technique for Analyzing Numerical Data under Local Differential Privacy
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Privacy-preserving data analysis (PPDA) has received increasing attention due to a great variety of applications. Local differential privacy (LDP), as an emerging standard that is suitable for PPDA, has been widely deployed in various real-world scenarios to analyze massive data while protecting against many forms of privacy breach. In this study, we are mainly concerned with the piecewise transformation technique (PTT) for analyzing numerical data under local differential privacy. We provide a principled framework for PTT in the context of LDP, based on which PTT is studied systematically. As a result, we show that (1) many members of the PTT family are asymptotically optimal when used to obtain an unbiased estimator for the mean of numerical data, and (2) for a given privacy budget, there is a PTT that reaches the theoretical lower bound on the variance. Next, by studying two classes of PTTs in detail, we prove that (1) no PTT improves on the widely used technique, i.e., Duchi's scheme, in terms of the consistency noisy variance, but (2) a great number of PTTs are consistently better than the latter with regard to the worst-case noisy variance, which has not been reported so far. When restricted to the high-privacy regime, many PTTs turn out to be better than the well-known Laplace mechanism. Lastly, we prove that for a family of PTTs, the corresponding theoretical lower bound on the noisy variance scales as $O(\epsilon^{-2})$ in the high-privacy regime.
versions: [ { "created": "Thu, 9 Dec 2021 08:06:25 GMT", "version": "v1" } ]
update_date: 2021-12-10
authors_parsed: [ [ "Ma", "Fei", "" ], [ "Zhu", "Renbo", "" ], [ "Wang", "Ping", "" ] ]
id: 2207.05118
submitter: Laura Dilley
authors: L. Dilley, W. Welna, F. Foster (Michigan State University)
title: QAnon Propaganda on Twitter as Information Warfare: Influencers, Networks, and Narratives
comments: 60 pages, 14 figures
journal-ref: null
doi: null
report-no: null
categories: cs.SI
license: http://creativecommons.org/licenses/by/4.0/
abstract: QAnon refers to a set of far-right, conspiratorial ideologies that have risen in popularity in the U.S. since their initial promotion in 2017 on the 4chan internet message board. A central narrative element of QAnon is that a powerful group of elite, liberal members of the Democratic Party engage in morally reprehensible practices, but that former U.S. President Donald J. Trump was prosecuting them. Five studies investigated the influence and network connectivity of accounts promoting QAnon on Twitter from August 2020 through January 2021. Selection of Twitter accounts emphasized on-line influencers and "persons of interest" known or suspected of participation in QAnon propaganda promotion activities. Large-scale coordination among accounts promoting QAnon was observed, providing rigorous, quantitative evidence of "astroturfing" in QAnon propaganda promotion on Twitter, as opposed to strictly "grassroots" activities of citizens acting independently. Further, evidence was obtained that networks of extreme far-right adherents engaged in organized QAnon propaganda promotion, as revealed by network overlap among accounts promoting far-right extremist (e.g., anti-Semitic) content and insurrectionist themes; New Age, occult, and "esoteric" themes; and internet puzzle games like Cicada 3301 and other "alternate reality games." Based on well-grounded theories and findings from the social sciences, it is argued that QAnon propaganda on Twitter in the months circa the 2020 U.S. Presidential election likely reflected joint participation of multiple actors, including nation-states like Russia, in innovative misuse of social media toward undermining democratic processes by promoting "magical" thinking, ostracism of Democrats and liberals, and salience of White extinction narratives common among otherwise ideologically diverse groups on the extreme far-right.
versions: [ { "created": "Mon, 11 Jul 2022 18:23:30 GMT", "version": "v1" } ]
update_date: 2022-07-13
authors_parsed: [ [ "Dilley", "L.", "", "Michigan State University" ], [ "Welna", "W.", "", "Michigan State University" ], [ "Foster", "F.", "", "Michigan State University" ] ]
id: 2110.00704
submitter: Josiah Wong
authors: Josiah Wong, Viktor Makoviychuk, Anima Anandkumar, Yuke Zhu
title: OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.RO cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Learning performant robot manipulation policies can be challenging due to high-dimensional continuous actions and complex physics-based dynamics. This can be alleviated through intelligent choice of action space. Operational Space Control (OSC) has been used as an effective task-space controller for manipulation. Nonetheless, its strength depends on the underlying modeling fidelity, and it is prone to failure when there are modeling errors. In this work, we propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors by inferring relevant dynamics parameters from online trajectories. OSCAR decomposes dynamics learning into task-agnostic and task-specific phases, decoupling the dynamics dependencies of the robot from the extrinsics of its environment. This structure enables robust zero-shot performance under out-of-distribution conditions and rapid adaptation to significant domain shifts through additional finetuning. We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines. For more results and information, please visit https://cremebrule.github.io/oscar-web/.
versions: [ { "created": "Sat, 2 Oct 2021 01:21:38 GMT", "version": "v1" } ]
update_date: 2021-10-05
authors_parsed: [ [ "Wong", "Josiah", "" ], [ "Makoviychuk", "Viktor", "" ], [ "Anandkumar", "Anima", "" ], [ "Zhu", "Yuke", "" ] ]
id: 2305.13724
submitter: Yuki Saito
authors: Yuki Saito, Shinnosuke Takamichi, Eiji Iimori, Kentaro Tachibana, Hiroshi Saruwatari
title: ChatGPT-EDSS: Empathetic Dialogue Speech Synthesis Trained from ChatGPT-derived Context Word Embeddings
comments: 5 pages, accepted for INTERSPEECH 2023
journal-ref: null
doi: null
report-no: null
categories: cs.SD cs.CL cs.LG eess.AS
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: We propose ChatGPT-EDSS, an empathetic dialogue speech synthesis (EDSS) method using ChatGPT for extracting dialogue context. ChatGPT is a chatbot that can deeply understand the content and purpose of an input prompt and appropriately respond to the user's request. We focus on ChatGPT's reading comprehension and introduce it to EDSS, a task of synthesizing speech that can empathize with the interlocutor's emotion. Our method first gives chat history to ChatGPT and asks it to generate three words representing the intention, emotion, and speaking style for each line in the chat. Then, it trains an EDSS model using the embeddings of ChatGPT-derived context words as the conditioning features. The experimental results demonstrate that our method performs comparably to ones using emotion labels or neural network-derived context embeddings learned from chat histories. The collected ChatGPT-derived context information is available at https://sarulab-speech.github.io/demo_ChatGPT_EDSS/.
versions: [ { "created": "Tue, 23 May 2023 06:19:37 GMT", "version": "v1" } ]
update_date: 2023-05-24
authors_parsed: [ [ "Saito", "Yuki", "" ], [ "Takamichi", "Shinnosuke", "" ], [ "Iimori", "Eiji", "" ], [ "Tachibana", "Kentaro", "" ], [ "Saruwatari", "Hiroshi", "" ] ]
id: 2312.12042
submitter: Zhiming Hu
authors: Zhiming Hu and Jiahui Xu and Syn Schmitt and Andreas Bulling
title: Pose2Gaze: Eye-body Coordination during Daily Activities for Gaze Prediction from Full-body Poses
comments: Accepted at TVCG 2024, code available at https://zhiminghu.net/hu24_pose2gaze.html
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Human eye gaze plays a significant role in many virtual and augmented reality (VR/AR) applications, such as gaze-contingent rendering, gaze-based interaction, or eye-based activity recognition. However, prior works on gaze analysis and prediction have only explored eye-head coordination and were limited to human-object interactions. We first report a comprehensive analysis of eye-body coordination in various human-object and human-human interaction activities based on four public datasets collected in real-world (MoGaze), VR (ADT), and AR (GIMO and EgoBody) environments. We show that in human-object interactions, e.g. pick and place, eye gaze exhibits strong correlations with full-body motion, while in human-human interactions, e.g. chat and teach, a person's gaze direction is correlated with the body orientation towards the interaction partner. Informed by these analyses we then present Pose2Gaze, a novel eye-body coordination model that uses a convolutional neural network and a spatio-temporal graph convolutional neural network to extract features from head direction and full-body poses, respectively, and then uses a convolutional neural network to predict eye gaze. We compare our method with state-of-the-art methods that predict eye gaze only from head movements and show that Pose2Gaze outperforms these baselines with an average improvement of 24.0% on MoGaze, 10.1% on ADT, 21.3% on GIMO, and 28.6% on EgoBody in mean angular error, respectively. We also show that our method significantly outperforms prior methods in the sample downstream task of eye-based activity recognition. These results underline the significant information content available in eye-body coordination during daily activities and open up a new direction for gaze prediction.
versions: [ { "created": "Tue, 19 Dec 2023 10:55:46 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2024 09:41:53 GMT", "version": "v2" }, { "created": "Mon, 10 Jun 2024 06:38:16 GMT", "version": "v3" } ]
update_date: 2024-06-11
authors_parsed: [ [ "Hu", "Zhiming", "" ], [ "Xu", "Jiahui", "" ], [ "Schmitt", "Syn", "" ], [ "Bulling", "Andreas", "" ] ]
id: 1407.2705
submitter: Badr Benmammar
authors: Badr Benmammar (LTT), Asma Amraoui (LTT)
title: Réseaux de radio cognitive : Allocation des ressources radio et accès dynamique au spectre (Cognitive radio networks: radio resource allocation and dynamic spectrum access)
comments: in French
journal-ref: null
doi: null
report-no: null
categories: cs.MA cs.NI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In the first chapter of this report, we provide an overview of mobile and wireless networks, with special focus on IEEE 802.22, a standard dedicated to cognitive radio (CR). Chapter 2 goes into detail about CR, and Chapter 3 is devoted to the concept of agents, in particular multi-agent systems (MAS). Finally, Chapter 4 provides a state of the art on the use of artificial intelligence techniques, particularly MAS, for radio resource allocation and dynamic spectrum access in the field of CR.
versions: [ { "created": "Thu, 10 Jul 2014 06:20:01 GMT", "version": "v1" } ]
update_date: 2014-07-11
authors_parsed: [ [ "Benmammar", "Badr", "", "LTT" ], [ "Amraoui", "Asma", "", "LTT" ] ]
id: 1904.02528
submitter: Armelle Brun
authors: Armelle Brun (KIWI), Geoffray Bonnin (KIWI), Sylvain Castagnos (KIWI), Azim Roussanaly (KIWI), Anne Boyer (KIWI)
title: Learning Analytics Made in France: The METAL Project
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: This paper presents the METAL project, an ongoing French open Learning Analytics (LA) project for secondary schools that aims at improving the quality of the learning process. The originality of METAL is that it relies on research through exploratory activities and addresses all aspects of a Learning Analytics implementation. This large-scale project covers many concerns, divided into 4 main actions. (1) data management: multi-source data identification, collection and storage, selection and promotion of standards, and design and development of an open-source Learning Record Store (LRS); (2) data visualization: learner and teacher dashboards, with a design that relies on co-conception with final users, including trust and usability concerns; (3) data exploitation: study of the link between gaze and memory of learners, and design of explainable multi-source data-mining algorithms, including ethics and privacy concerns. An additional key originality lies in the dissemination of LA at the level of an institution, or at a broader level such as a territory, in contrast to many projects that focus on a specific school or school curriculum. Each of these aspects is a hot topic in the literature. Taking all of them into account in a holistic view of education is a further added value of the project.
versions: [ { "created": "Thu, 4 Apr 2019 13:07:42 GMT", "version": "v1" } ]
update_date: 2019-04-05
authors_parsed: [ [ "Brun", "Armelle", "", "KIWI" ], [ "Bonnin", "Geoffray", "", "KIWI" ], [ "Castagnos", "Sylvain", "", "KIWI" ], [ "Roussanaly", "Azim", "", "KIWI" ], [ "Boyer", "Anne", "", "KIWI" ] ]
This paper presents the METAL project, an ongoing French open Learning Analytics (LA) project for secondary schools that aims at improving the quality of the learning process. The originality of METAL is that it relies on research through exploratory activities and addresses all the aspects of a Learning Analytics implementation. This large-scale project covers many concerns, divided into 4 main actions. (1) data management: multi-source data identification, collection and storage, selection and promotion of standards, and design and development of an open-source Learning Record Store (LRS); (2) data visualization: learner and teacher dashboards, designed in co-conception with end users, including trust and usability concerns; (3) data exploitation: study of the link between learners' gaze and memory, and design of explainable multi-source data-mining algorithms, including ethics and privacy concerns. An additional key element of originality lies in the global dissemination of LA at the level of an institution, or at a broader level such as a territory, in contrast to the many projects that focus on a specific school or school curriculum. Each of these aspects is a hot topic in the literature. Taking all of them into account in a holistic view of education is a further added value of the project.
1202.3775
Kun Zhang
Kun Zhang, Jonas Peters, Dominik Janzing, Bernhard Schoelkopf
Kernel-based Conditional Independence Test and Application in Causal Discovery
null
null
null
UAI-P-2011-PG-804-813
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conditional independence testing is an important problem, especially in Bayesian network learning and causal discovery. Due to the curse of dimensionality, testing for conditional independence of continuous variables is particularly challenging. We propose a Kernel-based Conditional Independence test (KCI-test), by constructing an appropriate test statistic and deriving its asymptotic distribution under the null hypothesis of conditional independence. The proposed method is computationally efficient and easy to implement. Experimental results show that it outperforms other methods, especially when the conditioning set is large or the sample size is not very large, in which case other methods encounter difficulties.
[ { "created": "Tue, 14 Feb 2012 16:41:17 GMT", "version": "v1" } ]
2012-02-20
[ [ "Zhang", "Kun", "" ], [ "Peters", "Jonas", "" ], [ "Janzing", "Dominik", "" ], [ "Schoelkopf", "Bernhard", "" ] ]
Conditional independence testing is an important problem, especially in Bayesian network learning and causal discovery. Due to the curse of dimensionality, testing for conditional independence of continuous variables is particularly challenging. We propose a Kernel-based Conditional Independence test (KCI-test), by constructing an appropriate test statistic and deriving its asymptotic distribution under the null hypothesis of conditional independence. The proposed method is computationally efficient and easy to implement. Experimental results show that it outperforms other methods, especially when the conditioning set is large or the sample size is not very large, in which case other methods encounter difficulties.
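A minimal sketch of the flavor of such a kernel-based statistic is given below, assuming Gaussian kernels with a fixed bandwidth and kernel-ridge residualization on the conditioning set; it is not the paper's exact construction, and the asymptotic null distribution used for the actual test is omitted.

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kci_statistic(x, y, z, eps=1e-3):
    """Trace statistic on kernel matrices residualized w.r.t. the conditioning set."""
    n = x.shape[0]
    Kx = center(gaussian_gram(np.hstack([x, z])))   # augment X with Z
    Ky = center(gaussian_gram(y))
    Kz = center(gaussian_gram(z))
    Rz = eps * np.linalg.inv(Kz + eps * np.eye(n))  # kernel-ridge residual maker
    return float(np.trace((Rz @ Kx @ Rz) @ (Rz @ Ky @ Rz)) / n)

rng = np.random.default_rng(0)
z = rng.standard_normal((200, 1))
x = z + 0.1 * rng.standard_normal((200, 1))
y = z + 0.1 * rng.standard_normal((200, 1))
print(kci_statistic(x, y, z))       # X and Y are independent given Z: small
print(kci_statistic(x, x + y, z))   # dependent given Z: typically larger
```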
1407.7294
Lili Dworkin
Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth
Online Learning and Profit Maximization from Revealed Preferences
null
null
null
null
cs.DS cs.GT cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of learning from revealed preferences in an online setting. In our framework, each period a consumer buys an optimal bundle of goods from a merchant according to her (linear) utility function and current prices, subject to a budget constraint. The merchant observes only the purchased goods, and seeks to adapt prices to optimize his profits. We give an efficient algorithm for the merchant's problem that consists of a learning phase in which the consumer's utility function is (perhaps partially) inferred, followed by a price optimization step. We also consider an alternative online learning algorithm for the setting where prices are set exogenously, but the merchant would still like to predict the bundle that will be bought by the consumer for purposes of inventory or supply chain management. In contrast with most prior work on the revealed preferences problem, we demonstrate that by making stronger assumptions on the form of utility functions, efficient algorithms for both learning and profit maximization are possible, even in adaptive, online settings.
[ { "created": "Sun, 27 Jul 2014 23:38:09 GMT", "version": "v1" }, { "created": "Fri, 28 Nov 2014 21:45:08 GMT", "version": "v2" } ]
2014-12-02
[ [ "Amin", "Kareem", "" ], [ "Cummings", "Rachel", "" ], [ "Dworkin", "Lili", "" ], [ "Kearns", "Michael", "" ], [ "Roth", "Aaron", "" ] ]
We consider the problem of learning from revealed preferences in an online setting. In our framework, each period a consumer buys an optimal bundle of goods from a merchant according to her (linear) utility function and current prices, subject to a budget constraint. The merchant observes only the purchased goods, and seeks to adapt prices to optimize his profits. We give an efficient algorithm for the merchant's problem that consists of a learning phase in which the consumer's utility function is (perhaps partially) inferred, followed by a price optimization step. We also consider an alternative online learning algorithm for the setting where prices are set exogenously, but the merchant would still like to predict the bundle that will be bought by the consumer for purposes of inventory or supply chain management. In contrast with most prior work on the revealed preferences problem, we demonstrate that by making stronger assumptions on the form of utility functions, efficient algorithms for both learning and profit maximization are possible, even in adaptive, online settings.
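For intuition, under one common modeling assumption (divisible goods capped at one unit each), the consumer's optimal bundle for a linear utility reduces to a fractional-knapsack greedy. A minimal sketch, with all names illustrative:

```python
def optimal_bundle(u, p, budget):
    """Consumer's optimal bundle under linear utility u, prices p > 0, a budget,
    and divisible goods capped at one unit each (fractional knapsack)."""
    order = sorted(range(len(u)), key=lambda i: u[i] / p[i], reverse=True)
    x, left = [0.0] * len(u), budget
    for i in order:
        buy = min(1.0, left / p[i])      # fill the best bang-per-buck good first
        x[i], left = buy, left - buy * p[i]
        if left <= 1e-12:
            break
    return x

print(optimal_bundle([3.0, 2.0, 1.0], [1.0, 1.0, 1.0], budget=1.5))
# [1.0, 0.5, 0.0]
```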
1709.09377
Maryia Kabanava
Maryia Kabanava, Holger Rauhut
Masked Toeplitz covariance estimation
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of estimating the covariance matrix $\Sigma$ of a $p$-variate distribution based on $n$ observations arises in many data analysis contexts. While for $n>p$, the classical sample covariance matrix $\hat{\Sigma}_n$ is a good estimator for $\Sigma$, it fails in the high-dimensional setting when $n\ll p$. In this scenario one requires prior knowledge about the structure of the covariance matrix in order to construct reasonable estimators. Under the common assumption that $\Sigma$ is sparse, a refined estimator is given by $M\cdot\hat{\Sigma}_n$, where $M$ is a suitable symmetric mask matrix indicating the nonzero entries of $\Sigma$ and $\cdot$ denotes the entrywise product of matrices. In the present work we assume that $\Sigma$ has Toeplitz structure corresponding to stationary signals. This suggests averaging the sample covariance $\hat{\Sigma}_n$ over its diagonals in order to obtain an estimator $\tilde{\Sigma}_n$ of Toeplitz structure. Assuming in addition that $\Sigma$ is sparse suggests studying estimators of the form $M\cdot\tilde{\Sigma}_n$. For Gaussian random vectors and, more generally, random vectors satisfying the convex concentration property, our main result bounds the estimation error in terms of $n$ and $p$ and shows that accurate estimation is indeed possible when $n \ll p$. The new bound significantly generalizes previous results by Cai, Ren and Zhou and provides an alternative proof. Our analysis exploits the connection between the spectral norm of a Toeplitz matrix and the supremum norm of the corresponding spectral density function.
[ { "created": "Wed, 27 Sep 2017 08:00:19 GMT", "version": "v1" } ]
2017-09-28
[ [ "Kabanava", "Maryia", "" ], [ "Rauhut", "Holger", "" ] ]
The problem of estimating the covariance matrix $\Sigma$ of a $p$-variate distribution based on $n$ observations arises in many data analysis contexts. While for $n>p$, the classical sample covariance matrix $\hat{\Sigma}_n$ is a good estimator for $\Sigma$, it fails in the high-dimensional setting when $n\ll p$. In this scenario one requires prior knowledge about the structure of the covariance matrix in order to construct reasonable estimators. Under the common assumption that $\Sigma$ is sparse, a refined estimator is given by $M\cdot\hat{\Sigma}_n$, where $M$ is a suitable symmetric mask matrix indicating the nonzero entries of $\Sigma$ and $\cdot$ denotes the entrywise product of matrices. In the present work we assume that $\Sigma$ has Toeplitz structure corresponding to stationary signals. This suggests averaging the sample covariance $\hat{\Sigma}_n$ over its diagonals in order to obtain an estimator $\tilde{\Sigma}_n$ of Toeplitz structure. Assuming in addition that $\Sigma$ is sparse suggests studying estimators of the form $M\cdot\tilde{\Sigma}_n$. For Gaussian random vectors and, more generally, random vectors satisfying the convex concentration property, our main result bounds the estimation error in terms of $n$ and $p$ and shows that accurate estimation is indeed possible when $n \ll p$. The new bound significantly generalizes previous results by Cai, Ren and Zhou and provides an alternative proof. Our analysis exploits the connection between the spectral norm of a Toeplitz matrix and the supremum norm of the corresponding spectral density function.
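The two-stage estimator described here, diagonal averaging followed by entrywise masking, is easy to prototype. A minimal NumPy sketch, assuming a simple banded mask (the paper allows general symmetric masks):

```python
import numpy as np

def masked_toeplitz_estimator(X, bandwidth):
    """Mask-times-Toeplitz estimator M . \tilde{Sigma}_n from samples X (n, p)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                  # sample covariance
    diag_means = np.array([np.diagonal(S, k).mean() for k in range(p)])
    i, j = np.indices((p, p))
    T = diag_means[np.abs(i - j)]                      # Toeplitz averaging
    M = (np.abs(i - j) <= bandwidth).astype(float)     # banded mask
    return M * T                                       # entrywise product

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 200))                     # n << p regime
Sigma_hat = masked_toeplitz_estimator(X, bandwidth=5)
print(Sigma_hat.shape)  # (200, 200)
```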
2111.06757
J.-M. Chauvet
J.-M. Chauvet
Multiway Storage Modification Machines
15 pages, 6 figures
null
null
null
cs.AI cs.CC
http://creativecommons.org/licenses/by/4.0/
We present a parallel version of Sch\"onhage's Storage Modification Machine, the Multiway Storage Modification Machine (MWSMM). Like the alternative Association Storage Modification Machine of Tromp and van Emde Boas, MWSMMs recognize in polynomial time what Turing Machines recognize in polynomial space. Falling thus into the Second Machine Class, the MWSMM is a parallel machine model conforming to the Parallel Computation Thesis. We illustrate MWSMMs by a simple implementation of Wolfram's String Substitution System.
[ { "created": "Fri, 12 Nov 2021 15:06:48 GMT", "version": "v1" } ]
2021-11-15
[ [ "Chauvet", "J. -M.", "" ] ]
We present a parallel version of Sch\"onhage's Storage Modification Machine, the Multiway Storage Modification Machine (MWSMM). Like the alternative Association Storage Modification Machine of Tromp and van Emde Boas, MWSMMs recognize in polynomial time what Turing Machines recognize in polynomial space. Falling thus into the Second Machine Class, the MWSMM is a parallel machine model conforming to the Parallel Computation Thesis. We illustrate MWSMMs by a simple implementation of Wolfram's String Substitution System.
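For concreteness, a Wolfram-style sequential string substitution system, the kind used as an illustration in the paper, can be iterated in a few lines. This sketch is a generic string rewriter, not the MWSMM encoding itself:

```python
def run_substitution_system(rules, state, steps):
    """Iterate a sequential string substitution system: at each step, the first
    rule whose left-hand side occurs rewrites its leftmost occurrence."""
    history = [state]
    for _ in range(steps):
        for lhs, rhs in rules:
            idx = state.find(lhs)
            if idx >= 0:
                state = state[:idx] + rhs + state[idx + len(lhs):]
                break
        history.append(state)
    return history

# The single rule "BA" -> "AB" sorts the string by adjacent swaps.
print(run_substitution_system([("BA", "AB")], "BABA", 4))
# ['BABA', 'ABBA', 'ABAB', 'AABB', 'AABB']
```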
1603.03947
Cemal Hanilci
Cemal Hanilci, Tomi Kinnunen, Md Sahidullah, Aleksandr Sizov
Spoofing Detection Goes Noisy: An Analysis of Synthetic Speech Detection in the Presence of Additive Noise
23 Pages, 7 figures
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic speaker verification (ASV) technology is increasingly finding its way to end-user applications for secure access to personal data, smart services or physical facilities. Similar to other biometric technologies, speaker verification is vulnerable to spoofing attacks where an attacker masquerades as a particular target speaker via impersonation, replay, text-to-speech (TTS) or voice conversion (VC) techniques to gain illegitimate access to the system. We focus on TTS and VC, which represent the most flexible, high-end spoofing attacks. Most of the prior studies on synthesized or converted speech detection report their findings using high-quality clean recordings. Meanwhile, the performance of spoofing detectors in the presence of additive noise, an important consideration in practical ASV implementations, remains largely unknown. To this end, we analyze the suitability of state-of-the-art synthetic speech detectors under additive noise with a special focus on front-end features. Our comparison includes eight acoustic feature sets, five related to spectral magnitude and three to spectral phase information. Our extensive experiments on the ASVspoof 2015 corpus reveal several important findings. Firstly, all the countermeasures break down even at relatively high signal-to-noise ratios (SNRs) and fail to generalize to noisy conditions. Secondly, speech enhancement is not found helpful. Thirdly, the GMM back-end generally outperforms the more involved i-vector back-end. Fourthly, concerning the compared features, the Mel-frequency cepstral coefficients (MFCCs) and subband spectral centroid magnitude coefficients (SCMCs) perform the best on average, though the winning method depends on SNR and noise type. Finally, a study with two score fusion strategies shows that combining different feature based systems improves recognition accuracy for known and unknown attacks in both clean and noisy conditions.
[ { "created": "Sat, 12 Mar 2016 17:44:48 GMT", "version": "v1" }, { "created": "Mon, 2 May 2016 17:32:23 GMT", "version": "v2" }, { "created": "Wed, 14 Sep 2016 21:07:59 GMT", "version": "v3" } ]
2016-09-16
[ [ "Hanilci", "Cemal", "" ], [ "Kinnunen", "Tomi", "" ], [ "Sahidullah", "Md", "" ], [ "Sizov", "Aleksandr", "" ] ]
Automatic speaker verification (ASV) technology is increasingly finding its way to end-user applications for secure access to personal data, smart services or physical facilities. Similar to other biometric technologies, speaker verification is vulnerable to spoofing attacks where an attacker masquerades as a particular target speaker via impersonation, replay, text-to-speech (TTS) or voice conversion (VC) techniques to gain illegitimate access to the system. We focus on TTS and VC, which represent the most flexible, high-end spoofing attacks. Most of the prior studies on synthesized or converted speech detection report their findings using high-quality clean recordings. Meanwhile, the performance of spoofing detectors in the presence of additive noise, an important consideration in practical ASV implementations, remains largely unknown. To this end, we analyze the suitability of state-of-the-art synthetic speech detectors under additive noise with a special focus on front-end features. Our comparison includes eight acoustic feature sets, five related to spectral magnitude and three to spectral phase information. Our extensive experiments on the ASVspoof 2015 corpus reveal several important findings. Firstly, all the countermeasures break down even at relatively high signal-to-noise ratios (SNRs) and fail to generalize to noisy conditions. Secondly, speech enhancement is not found helpful. Thirdly, the GMM back-end generally outperforms the more involved i-vector back-end. Fourthly, concerning the compared features, the Mel-frequency cepstral coefficients (MFCCs) and subband spectral centroid magnitude coefficients (SCMCs) perform the best on average, though the winning method depends on SNR and noise type. Finally, a study with two score fusion strategies shows that combining different feature based systems improves recognition accuracy for known and unknown attacks in both clean and noisy conditions.
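The noisy test conditions described here are typically created by mixing noise into clean speech at a controlled signal-to-noise ratio. A minimal sketch of that mixing step, using a global SNR definition and illustrative parameter choices:

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Scale and add noise so the mixture has the requested global SNR in dB."""
    noise = np.resize(noise, speech.shape)             # loop/trim to match length
    scale = np.sqrt(np.mean(speech ** 2) /
                    (np.mean(noise ** 2) * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)  # 1 s of 440 Hz
noisy = add_noise_at_snr(clean, rng.standard_normal(16000), snr_db=10.0)
```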
2304.02052
Ethan Gordon
Ethan K. Gordon and Rana Soltani Zarrin
Online augmentation of learned grasp sequence policies for more adaptable and data-efficient in-hand manipulation
7 pages (6+1 bibliography), 4 figures, 1 table, 2 algorithms, to appear in ICRA 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When using a tool, the grasps used for picking it up, reposing, and holding it in a suitable pose for the desired task could be distinct. Therefore, a key challenge for autonomous in-hand tool manipulation is finding a sequence of grasps that facilitates every step of the tool use process while continuously maintaining force closure and stability. Due to the complexity of modeling the contact dynamics, reinforcement learning (RL) techniques can provide a solution in this continuous space subject to highly parameterized physical models. However, these techniques impose a trade-off in adaptability and data efficiency. At test time the tool properties, desired trajectory, and desired application forces could differ substantially from training scenarios. Adapting to this necessitates more data or computationally expensive online policy updates. In this work, we apply the principles of discrete dynamic programming (DP) to augment RL performance with domain knowledge. Specifically, we first design a computationally simple approximation of our environment. We then demonstrate in physical simulation that performing tree searches (i.e., lookaheads) and policy rollouts with this approximation can improve an RL-derived grasp sequence policy with minimal additional online computation. Additionally, we show that pretraining a deep RL network with the DP-derived solution to the discretized problem can speed up policy training.
[ { "created": "Tue, 4 Apr 2023 18:03:49 GMT", "version": "v1" } ]
2023-04-06
[ [ "Gordon", "Ethan K.", "" ], [ "Zarrin", "Rana Soltani", "" ] ]
When using a tool, the grasps used for picking it up, reposing, and holding it in a suitable pose for the desired task could be distinct. Therefore, a key challenge for autonomous in-hand tool manipulation is finding a sequence of grasps that facilitates every step of the tool use process while continuously maintaining force closure and stability. Due to the complexity of modeling the contact dynamics, reinforcement learning (RL) techniques can provide a solution in this continuous space subject to highly parameterized physical models. However, these techniques impose a trade-off in adaptability and data efficiency. At test time the tool properties, desired trajectory, and desired application forces could differ substantially from training scenarios. Adapting to this necessitates more data or computationally expensive online policy updates. In this work, we apply the principles of discrete dynamic programming (DP) to augment RL performance with domain knowledge. Specifically, we first design a computationally simple approximation of our environment. We then demonstrate in physical simulation that performing tree searches (i.e., lookaheads) and policy rollouts with this approximation can improve an RL-derived grasp sequence policy with minimal additional online computation. Additionally, we show that pretraining a deep RL network with the DP-derived solution to the discretized problem can speed up policy training.
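The lookahead idea, scoring each action with the cheap approximate model plus the base policy's value estimate, has a simple generic skeleton. In the sketch below, the model, value function, and toy environment are illustrative stand-ins rather than the paper's grasp dynamics:

```python
import numpy as np

def lookahead_action(state, actions, model, value, gamma=0.99):
    """One-step lookahead: query the cheap approximate model for each action and
    score it by immediate reward plus the discounted base-policy value."""
    best_a, best_q = None, -np.inf
    for a in actions:
        next_state, reward = model(state, a)
        q = reward + gamma * value(next_state)
        if q > best_q:
            best_a, best_q = a, q
    return best_a

# Toy stand-ins: walk toward the origin on a line; reward is -|position|.
model = lambda s, a: (s + a, -abs(s + a))
value = lambda s: -abs(s)
print(lookahead_action(5, [-1, 0, 1], model, value))   # -1
```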
2401.07009
Quan Qi
Fan Lu, Quan Qi, Huaibin Qin
Joint Extraction of Uyghur Medicine Knowledge with Edge Computing
11 pages, 6 figures. Accepted by Tsinghua Science and Technology
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical knowledge extraction methods based on edge computing deploy deep learning models on edge devices to achieve localized entity and relation extraction. This approach avoids transferring substantial sensitive data to cloud data centers, effectively safeguarding the privacy of healthcare services. However, existing relation extraction methods mainly employ a sequential pipeline approach, which classifies relations between determined entities after entity recognition. This mode faces challenges such as error propagation between tasks, insufficient consideration of dependencies between the two subtasks, and the neglect of interrelations between different relations within a sentence. To address these challenges, a joint extraction model with parameter sharing in edge computing is proposed, named CoEx-Bert. This model leverages parameter sharing to jointly extract entities and relations. Specifically, CoEx-Bert employs two sub-models that share hidden-layer parameters and combines their two loss functions for joint backpropagation to optimize the model parameters. Additionally, it effectively resolves the issue of entity overlap when extracting knowledge from unstructured Uyghur medical texts by considering contextual relations. Finally, the model is deployed on edge devices for real-time extraction and inference of Uyghur medical knowledge. Experimental results demonstrate that CoEx-Bert outperforms existing state-of-the-art methods, achieving accuracy, recall, and F1 scores of 90.65\%, 92.45\%, and 91.54\%, respectively, on the Uyghur traditional medical literature dataset. These results represent improvements of 6.45\% in accuracy, 9.45\% in recall, and 7.95\% in F1 score over the baseline.
[ { "created": "Sat, 13 Jan 2024 08:27:24 GMT", "version": "v1" } ]
2024-01-17
[ [ "Lu", "Fan", "" ], [ "Qi", "Quan", "" ], [ "Qin", "Huaibin", "" ] ]
Medical knowledge extraction methods based on edge computing deploy deep learning models on edge devices to achieve localized entity and relation extraction. This approach avoids transferring substantial sensitive data to cloud data centers, effectively safeguarding the privacy of healthcare services. However, existing relation extraction methods mainly employ a sequential pipeline approach, which classifies relations between determined entities after entity recognition. This mode faces challenges such as error propagation between tasks, insufficient consideration of dependencies between the two subtasks, and the neglect of interrelations between different relations within a sentence. To address these challenges, a joint extraction model with parameter sharing in edge computing is proposed, named CoEx-Bert. This model leverages parameter sharing to jointly extract entities and relations. Specifically, CoEx-Bert employs two sub-models that share hidden-layer parameters and combines their two loss functions for joint backpropagation to optimize the model parameters. Additionally, it effectively resolves the issue of entity overlap when extracting knowledge from unstructured Uyghur medical texts by considering contextual relations. Finally, the model is deployed on edge devices for real-time extraction and inference of Uyghur medical knowledge. Experimental results demonstrate that CoEx-Bert outperforms existing state-of-the-art methods, achieving accuracy, recall, and F1 scores of 90.65\%, 92.45\%, and 91.54\%, respectively, on the Uyghur traditional medical literature dataset. These results represent improvements of 6.45\% in accuracy, 9.45\% in recall, and 7.95\% in F1 score over the baseline.
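The parameter-sharing-with-joint-backpropagation recipe can be sketched as a shared encoder feeding two task heads whose losses are summed. The sketch below substitutes a small GRU encoder for the shared BERT layers so it stays self-contained; all sizes are hypothetical:

```python
import torch
import torch.nn as nn

class JointExtractor(nn.Module):
    """Shared encoder with two task heads; both losses backpropagate jointly."""
    def __init__(self, vocab=1000, dim=64, n_ent=5, n_rel=7):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # shared parameters
        self.ent_head = nn.Linear(dim, n_ent)                # per-token entity tags
        self.rel_head = nn.Linear(dim, n_rel)                # per-token relation tags

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        return self.ent_head(h), self.rel_head(h)

model = JointExtractor()
tokens = torch.randint(0, 1000, (2, 16))
ent_gold, rel_gold = torch.randint(0, 5, (2, 16)), torch.randint(0, 7, (2, 16))
ent_logits, rel_logits = model(tokens)
loss = (nn.functional.cross_entropy(ent_logits.flatten(0, 1), ent_gold.flatten())
        + nn.functional.cross_entropy(rel_logits.flatten(0, 1), rel_gold.flatten()))
loss.backward()   # combined loss drives joint backpropagation through the encoder
```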
2310.13931
Hongjiang Lei Dr.
Hongjiang Lei, Jiacheng Jiang, Haosi Yang, Ki-Hong Park, Imran Shafique Ansari, Gaofeng Pan, Mohamed-Slim Alouini
Trajectory and power design for aerial CRNs with colluding eavesdroppers
10 pages, 7 figures. Submitted to an IEEE journal for review
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Unmanned aerial vehicles (UAVs) can provide wireless access services to terrestrial users without geographical limitations and will become an essential part of future communication systems. However, the openness of wireless channels and the mobility of UAVs make the security of UAV-based communication systems particularly challenging. This work investigates the security of aerial cognitive radio networks (CRNs) with multiple colluding eavesdroppers whose locations are uncertain. A cognitive aerial base station transmits messages to cognitive terrestrial users using the spectrum resource of the primary users. All secondary terrestrial users and illegitimate receivers jointly decode the received message. The average secrecy rate of the aerial CRNs is maximized by jointly optimizing the UAV's trajectory and transmission power. An iterative algorithm based on block coordinate descent and successive convex approximation is proposed to solve the non-convex mixed-variable optimization problem. Numerical results verify the effectiveness of our proposed algorithm and show that our scheme improves the secrecy performance of aerial CRNs.
[ { "created": "Sat, 21 Oct 2023 07:48:05 GMT", "version": "v1" } ]
2023-10-24
[ [ "Lei", "Hongjiang", "" ], [ "Jiang", "Jiacheng", "" ], [ "Yang", "Haosi", "" ], [ "Park", "Ki-Hong", "" ], [ "Ansari", "Imran Shafique", "" ], [ "Pan", "Gaofeng", "" ], [ "Alouini", "Mohamed-Slim", "" ] ]
Unmanned aerial vehicles (UAVs) can provide wireless access services to terrestrial users without geographical limitations and will become an essential part of future communication systems. However, the openness of wireless channels and the mobility of UAVs make the security of UAV-based communication systems particularly challenging. This work investigates the security of aerial cognitive radio networks (CRNs) with multiple colluding eavesdroppers whose locations are uncertain. A cognitive aerial base station transmits messages to cognitive terrestrial users using the spectrum resource of the primary users. All secondary terrestrial users and illegitimate receivers jointly decode the received message. The average secrecy rate of the aerial CRNs is maximized by jointly optimizing the UAV's trajectory and transmission power. An iterative algorithm based on block coordinate descent and successive convex approximation is proposed to solve the non-convex mixed-variable optimization problem. Numerical results verify the effectiveness of our proposed algorithm and show that our scheme improves the secrecy performance of aerial CRNs.
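The optimization strategy named here, block coordinate descent over the trajectory and power blocks, has a simple outer-loop skeleton. The sketch below alternates two continuous blocks on a toy concave objective and omits the successive-convex-approximation step used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def block_coordinate_ascent(f, x0, y0, iters=20):
    """Alternately maximize f over each block while holding the other fixed."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        x = minimize(lambda v: -f(v, y), x).x    # e.g. the trajectory block
        y = minimize(lambda v: -f(x, v), y).x    # e.g. the power block
    return x, y

# Toy coupled concave objective; the stationary point is near (0.90, 1.95).
f = lambda x, y: -(x[0] - 1.0) ** 2 - (y[0] - 2.0) ** 2 - 0.1 * x[0] * y[0]
print(block_coordinate_ascent(f, [0.0], [0.0]))
```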
2209.06825
Wenge Xu
Xuanru Meng, Wenge Xu, Hai-Ning Liang
An Exploration of Hands-free Text Selection for Virtual Reality Head-Mounted Displays
IEEE ISMAR'22 conference track; 8 pages. arXiv admin note: text overlap with arXiv:2209.06498. A mistake in a symbol in Section 4.5.1 has been corrected in this version
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Hand-based interaction, such as using a handheld controller or making hand gestures, has been widely adopted as the primary method for interacting with both virtual reality (VR) and augmented reality (AR) head-mounted displays (HMDs). In contrast, hands-free interaction avoids the need for users' hands; although it can afford additional benefits, there has been limited research exploring and evaluating hands-free techniques for these HMDs. As VR HMDs become ubiquitous, people will need to do text editing, which requires selecting text segments. Like hands-free interaction, text selection is underexplored. This research addresses both: text selection via hands-free interaction. Our exploration involves a user study with 24 participants to investigate the performance, user experience, and workload of three hands-free selection mechanisms (Dwell, Blink, Voice) that complement head-based pointing. Results indicate that Blink outperforms Dwell and Voice in completion time. Users' subjective feedback also shows that Blink is the preferred technique for text selection. This work is the first to explore hands-free interaction for text selection in VR HMDs. Our results provide a solid platform for further research in this important area.
[ { "created": "Wed, 14 Sep 2022 09:25:54 GMT", "version": "v1" }, { "created": "Sat, 15 Oct 2022 18:13:46 GMT", "version": "v2" } ]
2022-10-18
[ [ "Meng", "Xuanru", "" ], [ "Xu", "Wenge", "" ], [ "Liang", "Hai-Ning", "" ] ]
Hand-based interaction, such as using a handheld controller or making hand gestures, has been widely adopted as the primary method for interacting with both virtual reality (VR) and augmented reality (AR) head-mounted displays (HMDs). In contrast, hands-free interaction avoids the need for users' hands; although it can afford additional benefits, there has been limited research exploring and evaluating hands-free techniques for these HMDs. As VR HMDs become ubiquitous, people will need to do text editing, which requires selecting text segments. Like hands-free interaction, text selection is underexplored. This research addresses both: text selection via hands-free interaction. Our exploration involves a user study with 24 participants to investigate the performance, user experience, and workload of three hands-free selection mechanisms (Dwell, Blink, Voice) that complement head-based pointing. Results indicate that Blink outperforms Dwell and Voice in completion time. Users' subjective feedback also shows that Blink is the preferred technique for text selection. This work is the first to explore hands-free interaction for text selection in VR HMDs. Our results provide a solid platform for further research in this important area.
2308.07661
Zhe Chen
Zhe Chen
Attention Is Not All You Need Anymore
null
null
null
null
cs.LG cs.CL cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the popular Transformer architecture has achieved great success in many application areas, including natural language processing and computer vision. Many existing works aim to reduce the computational and memory complexity of the self-attention mechanism in the Transformer by trading off performance. However, performance is key for the continuing success of the Transformer. In this paper, a family of drop-in replacements for the self-attention mechanism in the Transformer, called the Extractors, is proposed. Four types of the Extractors, namely the super high-performance Extractor (SHE), the higher-performance Extractor (HE), the worthwhile Extractor (WE), and the minimalist Extractor (ME), are proposed as examples. Experimental results show that replacing the self-attention mechanism with the SHE evidently improves the performance of the Transformer, whereas the simplified versions of the SHE, i.e., the HE, the WE, and the ME, perform close to or better than the self-attention mechanism with less computational and memory complexity. Furthermore, the proposed Extractors can potentially, and in some cases already do, run faster than the self-attention mechanism since their critical paths of computation are much shorter. Additionally, the sequence prediction problem in the context of text generation is formulated using variable-length discrete-time Markov chains, and the Transformer is reviewed based on our understanding.
[ { "created": "Tue, 15 Aug 2023 09:24:38 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 13:32:07 GMT", "version": "v2" } ]
2023-09-20
[ [ "Chen", "Zhe", "" ] ]
In recent years, the popular Transformer architecture has achieved great success in many application areas, including natural language processing and computer vision. Many existing works aim to reduce the computational and memory complexity of the self-attention mechanism in the Transformer by trading off performance. However, performance is key for the continuing success of the Transformer. In this paper, a family of drop-in replacements for the self-attention mechanism in the Transformer, called the Extractors, is proposed. Four types of the Extractors, namely the super high-performance Extractor (SHE), the higher-performance Extractor (HE), the worthwhile Extractor (WE), and the minimalist Extractor (ME), are proposed as examples. Experimental results show that replacing the self-attention mechanism with the SHE evidently improves the performance of the Transformer, whereas the simplified versions of the SHE, i.e., the HE, the WE, and the ME, perform close to or better than the self-attention mechanism with less computational and memory complexity. Furthermore, the proposed Extractors can potentially, and in some cases already do, run faster than the self-attention mechanism since their critical paths of computation are much shorter. Additionally, the sequence prediction problem in the context of text generation is formulated using variable-length discrete-time Markov chains, and the Transformer is reviewed based on our understanding.
2107.13632
Yunian Pan
Yunian Pan and Quanyan Zhu
Efficient Episodic Learning of Nonstationary and Unknown Zero-Sum Games Using Expert Game Ensembles
null
null
null
null
cs.GT cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Game theory provides essential analysis in many applications of strategic interactions. However, the questions of how to construct a game model and what its fidelity is are seldom addressed. In this work, we consider learning in a class of repeated zero-sum games with an unknown, time-varying payoff matrix and noisy feedback, by making use of an ensemble of benchmark game models. These models can be pre-trained and collected dynamically during sequential plays. They serve as prior side information and imperfectly underpin the unknown true game model. We propose OFULinMat, an episodic learning algorithm that integrates the adaptive estimation of game models and the learning of the strategies. The proposed algorithm is shown to achieve a sublinear bound on the saddle-point regret. We show that this algorithm is provably efficient through both theoretical analysis and numerical examples. We use a dynamic honeypot allocation game as a case study to illustrate and corroborate our results. We also discuss the relationship and highlight the difference between our framework and the classical adversarial multi-armed bandit framework.
[ { "created": "Wed, 28 Jul 2021 20:35:12 GMT", "version": "v1" } ]
2021-07-30
[ [ "Pan", "Yunian", "" ], [ "Zhu", "Quanyan", "" ] ]
Game theory provides essential analysis in many applications of strategic interactions. However, the questions of how to construct a game model and what its fidelity is are seldom addressed. In this work, we consider learning in a class of repeated zero-sum games with an unknown, time-varying payoff matrix and noisy feedback, by making use of an ensemble of benchmark game models. These models can be pre-trained and collected dynamically during sequential plays. They serve as prior side information and imperfectly underpin the unknown true game model. We propose OFULinMat, an episodic learning algorithm that integrates the adaptive estimation of game models and the learning of the strategies. The proposed algorithm is shown to achieve a sublinear bound on the saddle-point regret. We show that this algorithm is provably efficient through both theoretical analysis and numerical examples. We use a dynamic honeypot allocation game as a case study to illustrate and corroborate our results. We also discuss the relationship and highlight the difference between our framework and the classical adversarial multi-armed bandit framework.
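For background, once an estimate of the payoff matrix is in hand, the saddle point of a zero-sum matrix game can be computed by linear programming. A minimal sketch of this classical full-information subroutine (not the paper's OFULinMat algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def saddle_point(A):
    """Maximin mixed strategy x and game value v for payoff matrix A (row player)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(m), [-1.0]])        # maximize v == minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])        # v <= (A^T x)_j for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[m]

x, v = saddle_point(np.array([[0.0, 1.0], [1.0, 0.0]]))
print(x, v)   # ~[0.5 0.5] with value 0.5
```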
2104.03795
Marc Satkowski
Marc Satkowski, Wolfgang B\"uschel, Raimund Dachselt
Experiences with User Studies in Augmented Reality
This work has been accepted for the ACM CHI 2021 Workshop "Evaluating User Experiences in Mixed Reality" (https://sputze.github.io/evaluating-mr/)
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The research field of augmented reality (AR) is increasingly popular, as seen, among others, in several recently published surveys. To produce further advancements in AR, it is not only necessary to create new systems or applications, but also to evaluate them. One important aspect of such evaluations is a general understanding of how users experience a given AR application, as reflected in the increased number of papers focusing on this topic published in recent years. With the steadily growing understanding and development of AR in general, it is only a matter of time until AR devices make the leap into the consumer market, where such an in-depth understanding of users is even more essential. Thus, a better understanding of factors that could influence the design and results of user experience studies can help us make them more robust and dependable in the future. In this position paper, we describe three challenges that researchers face while designing and conducting AR user studies. We encountered these challenges in our past and current research, including papers that focus on perceptual studies of visualizations, interaction studies, and studies exploring the use of AR applications and their design spaces.
[ { "created": "Thu, 8 Apr 2021 14:18:51 GMT", "version": "v1" } ]
2021-04-09
[ [ "Satkowski", "Marc", "" ], [ "Büschel", "Wolfgang", "" ], [ "Dachselt", "Raimund", "" ] ]
The research field of augmented reality (AR) is increasingly popular, as seen, among others, in several recently published surveys. To produce further advancements in AR, it is not only necessary to create new systems or applications, but also to evaluate them. One important aspect of such evaluations is a general understanding of how users experience a given AR application, as reflected in the increased number of papers focusing on this topic published in recent years. With the steadily growing understanding and development of AR in general, it is only a matter of time until AR devices make the leap into the consumer market, where such an in-depth understanding of users is even more essential. Thus, a better understanding of factors that could influence the design and results of user experience studies can help us make them more robust and dependable in the future. In this position paper, we describe three challenges that researchers face while designing and conducting AR user studies. We encountered these challenges in our past and current research, including papers that focus on perceptual studies of visualizations, interaction studies, and studies exploring the use of AR applications and their design spaces.
1804.01310
Guillermo Gallego
Ana I. Maqueda, Antonio Loquercio, Guillermo Gallego, Narciso Garcia, Davide Scaramuzza
Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars
9 pages, 8 figures, 6 tables. Video: https://youtu.be/_r_bsjkJTHA
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, 2018
10.1109/CVPR.2018.00568
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
[ { "created": "Wed, 4 Apr 2018 09:05:41 GMT", "version": "v1" } ]
2019-01-21
[ [ "Maqueda", "Ana I.", "" ], [ "Loquercio", "Antonio", "" ], [ "Gallego", "Guillermo", "" ], [ "Garcia", "Narciso", "" ], [ "Scaramuzza", "Davide", "" ] ]
Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
2005.11037
Xin Jin
Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Li Zhang
Style Normalization and Restitution for Generalizable Person Re-identification
Accepted by CVPR2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing fully-supervised person re-identification (ReID) methods usually suffer from poor generalization capability caused by domain gaps. The key to solving this problem lies in filtering out identity-irrelevant interference and learning domain-invariant person representations. In this paper, we aim to design a generalizable person ReID framework which trains a model on source domains yet is able to generalize/perform well on target domains. To achieve this goal, we propose a simple yet effective Style Normalization and Restitution (SNR) module. Specifically, we filter out style variations (e.g., illumination, color contrast) by Instance Normalization (IN). However, such a process inevitably removes discriminative information. We propose to distill identity-relevant feature from the removed information and restitute it to the network to ensure high discrimination. For better disentanglement, we enforce a dual causal loss constraint in SNR to encourage the separation of identity-relevant features and identity-irrelevant features. Extensive experiments demonstrate the strong generalization capability of our framework. Our models empowered by the SNR modules significantly outperform the state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks, and also show superiority on unsupervised domain adaptation.
[ { "created": "Fri, 22 May 2020 07:15:10 GMT", "version": "v1" } ]
2020-05-25
[ [ "Jin", "Xin", "" ], [ "Lan", "Cuiling", "" ], [ "Zeng", "Wenjun", "" ], [ "Chen", "Zhibo", "" ], [ "Zhang", "Li", "" ] ]
Existing fully-supervised person re-identification (ReID) methods usually suffer from poor generalization capability caused by domain gaps. The key to solving this problem lies in filtering out identity-irrelevant interference and learning domain-invariant person representations. In this paper, we aim to design a generalizable person ReID framework which trains a model on source domains yet is able to generalize/perform well on target domains. To achieve this goal, we propose a simple yet effective Style Normalization and Restitution (SNR) module. Specifically, we filter out style variations (e.g., illumination, color contrast) by Instance Normalization (IN). However, such a process inevitably removes discriminative information. We propose to distill identity-relevant feature from the removed information and restitute it to the network to ensure high discrimination. For better disentanglement, we enforce a dual causal loss constraint in SNR to encourage the separation of identity-relevant features and identity-irrelevant features. Extensive experiments demonstrate the strong generalization capability of our framework. Our models empowered by the SNR modules significantly outperform the state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks, and also show superiority on unsupervised domain adaptation.
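The normalization-then-restitution idea can be sketched as an Instance Norm layer whose discarded residual is partially gated back into the feature map. The gating layout below is an illustrative assumption, not the paper's exact module or its dual causal loss:

```python
import torch
import torch.nn as nn

class StyleNormRestitution(nn.Module):
    """Instance Norm removes style; a channel gate restitutes part of the residual."""
    def __init__(self, channels):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(channels, affine=True)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, x):
        normed = self.inorm(x)            # style variations filtered out
        residual = x - normed             # everything IN discarded
        return normed + self.gate(residual) * residual  # restitute relevant part

feat = torch.randn(4, 256, 16, 8)         # a person-ReID-like feature map
print(StyleNormRestitution(256)(feat).shape)   # torch.Size([4, 256, 16, 8])
```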
1306.2422
Kai Cai
Kai Cai, Renyuan Zhang, and W. Murray Wonham
Relative Observability of Discrete-Event Systems and its Supremal Sublanguages
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify a new observability concept, called relative observability, in supervisory control of discrete-event systems under partial observation. A fixed, ambient language is given, relative to which observability is tested. Relative observability is stronger than observability, but enjoys the important property that it is preserved under set union; hence there exists the supremal relatively observable sublanguage of a given language. Relative observability is weaker than normality, and thus yields, when combined with controllability, a generally larger controlled behavior; in particular, no constraint is imposed that only observable controllable events may be disabled. We design algorithms which compute the supremal relatively observable (and controllable) sublanguage of a given language, which is generally larger than the normal counterparts. We demonstrate the new observability concept and algorithms with a Guideway and an AGV example.
[ { "created": "Tue, 11 Jun 2013 05:28:59 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2013 20:34:38 GMT", "version": "v2" }, { "created": "Fri, 21 Mar 2014 04:28:49 GMT", "version": "v3" } ]
2014-03-24
[ [ "Cai", "Kai", "" ], [ "Zhang", "Renyuan", "" ], [ "Wonham", "W. Murray", "" ] ]
We identify a new observability concept, called relative observability, in supervisory control of discrete-event systems under partial observation. A fixed, ambient language is given, relative to which observability is tested. Relative observability is stronger than observability, but enjoys the important property that it is preserved under set union; hence there exists the supremal relatively observable sublanguage of a given language. Relative observability is weaker than normality, and thus yields, when combined with controllability, a generally larger controlled behavior; in particular, no constraint is imposed that only observable controllable events may be disabled. We design algorithms which compute the supremal relatively observable (and controllable) sublanguage of a given language, which is generally larger than the normal counterparts. We demonstrate the new observability concept and algorithms with a Guideway and an AGV example.
1511.01137
L\'aszl\'o V\'egh
Matthias Mnich, Virginia Vassilevska Williams, L\'aszl\'o A. V\'egh
A 7/3-Approximation for Feedback Vertex Sets in Tournaments
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the minimum-weight feedback vertex set problem in tournaments: given a tournament with non-negative vertex weights, remove a minimum-weight set of vertices that intersects all cycles. This problem is $\mathsf{NP}$-hard to solve exactly, and Unique Games-hard to approximate by a factor better than 2. We present the first $7/3$-approximation algorithm for this problem, improving on the previously best known ratio of $5/2$ given by Cai et al. [FOCS 1998, SICOMP 2001].
[ { "created": "Tue, 3 Nov 2015 22:09:46 GMT", "version": "v1" } ]
2015-11-05
[ [ "Mnich", "Matthias", "" ], [ "Williams", "Virginia Vassilevska", "" ], [ "Végh", "László A.", "" ] ]
We consider the minimum-weight feedback vertex set problem in tournaments: given a tournament with non-negative vertex weights, remove a minimum-weight set of vertices that intersects all cycles. This problem is $\mathsf{NP}$-hard to solve exactly, and Unique Games-hard to approximate by a factor better than 2. We present the first $7/3$-approximation algorithm for this problem, improving on the previously best known ratio of $5/2$ given by Cai et al. [FOCS 1998, SICOMP 2001].
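A useful fact behind algorithms for this problem is that a tournament is acyclic if and only if it contains no directed triangle. The sketch below exploits it in a simple greedy heuristic; this is for illustration only and is not the paper's 7/3-approximation:

```python
from itertools import combinations

def fvs_tournament_heuristic(n, wins, weight):
    """Greedy FVS heuristic: while a directed triangle survives, delete its
    cheapest vertex. `wins` is the arc set of the tournament as (u, v) pairs."""
    alive, removed = set(range(n)), []

    def find_triangle():
        for a, b, c in combinations(sorted(alive), 3):
            for u, v, w in ((a, b, c), (a, c, b)):   # two cyclic orientations
                if (u, v) in wins and (v, w) in wins and (w, u) in wins:
                    return u, v, w
        return None

    while (tri := find_triangle()) is not None:
        victim = min(tri, key=lambda t: weight[t])
        alive.remove(victim)
        removed.append(victim)
    return removed

# One 3-cycle 0 -> 1 -> 2 -> 0; vertex 3 loses to everyone.
wins = {(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)}
print(fvs_tournament_heuristic(4, wins, weight=[1.0, 5.0, 5.0, 5.0]))  # [0]
```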
2403.12893
Matteo Nerini
Hongyu Li, Matteo Nerini, Shanpu Shen, Bruno Clerckx
Wideband Modeling and Beamforming for Beyond Diagonal Reconfigurable Intelligent Surfaces
Submitted to IEEE for publication
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This work studies the wideband modeling and beamforming design of beyond diagonal reconfigurable intelligent surface (BD-RIS), which generalizes and goes beyond conventional RIS with diagonal phase shift matrices to achieve enhanced channel gain. Specifically, we investigate the response of BD-RIS in wideband systems by going back to its hardware circuit realizations. We propose a novel wideband model which has simple expressions while capturing the response variations of BD-RIS for signals with different frequencies. With this wideband model, we propose a BD-RIS design algorithm for an orthogonal frequency division multiplexing system to maximize the average rate over all subcarriers. Finally, we provide simulation results to evaluate the performance of the proposed design and show the importance of wideband modeling for BD-RIS.
[ { "created": "Tue, 19 Mar 2024 16:45:45 GMT", "version": "v1" } ]
2024-03-20
[ [ "Li", "Hongyu", "" ], [ "Nerini", "Matteo", "" ], [ "Shen", "Shanpu", "" ], [ "Clerckx", "Bruno", "" ] ]
This work studies the wideband modeling and beamforming design of beyond diagonal reconfigurable intelligent surface (BD-RIS), which generalizes and goes beyond conventional RIS with diagonal phase shift matrices to achieve enhanced channel gain. Specifically, we investigate the response of BD-RIS in wideband systems by going back to its hardware circuit realizations. We propose a novel wideband model which has simple expressions while capturing the response variations of BD-RIS for signals with different frequencies. With this wideband model, we propose a BD-RIS design algorithm for an orthogonal frequency division multiplexing system to maximize the average rate over all subcarriers. Finally, we provide simulation results to evaluate the performance of the proposed design and show the importance of wideband modeling for BD-RIS.
2010.08719
Xiaogang Wang
Xiaogang Wang, Marcelo H Ang Jr, Gim Hee Lee
Cascaded Refinement Network for Point Cloud Completion with Self-supervision
Accepted by PAMI. Extended version of the following paper: Cascaded Refinement Network for Point Cloud Completion. CVPR 2020. arXiv link: arXiv:2004.03327
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point clouds are often sparse and incomplete, which imposes difficulties for real-world applications. Existing shape completion methods tend to generate rough shapes without fine-grained details. Considering this, we introduce a two-branch network for shape completion. The first branch is a cascaded shape completion sub-network to synthesize complete objects, where we propose to use the partial input together with the coarse output to preserve the object details during the dense point reconstruction. The second branch is an auto-encoder to reconstruct the original partial input. The two branches share the same feature extractor to learn an accurate global feature for shape completion. Furthermore, we propose two strategies to enable the training of our network when ground truth data are not available. This is to mitigate the dependence of existing approaches on large amounts of ground truth training data that are often difficult to obtain in real-world applications. Additionally, our proposed strategies are also able to improve the reconstruction quality for fully supervised learning. We verify our approach in self-supervised, semi-supervised and fully supervised settings with superior performance. Quantitative and qualitative results on different datasets demonstrate that our method achieves more realistic outputs than state-of-the-art approaches on the point cloud completion task.
[ { "created": "Sat, 17 Oct 2020 04:56:22 GMT", "version": "v1" }, { "created": "Sat, 7 Aug 2021 08:37:10 GMT", "version": "v2" }, { "created": "Thu, 26 Aug 2021 10:24:05 GMT", "version": "v3" } ]
2021-08-27
[ [ "Wang", "Xiaogang", "" ], [ "Ang", "Marcelo H", "Jr" ], [ "Lee", "Gim Hee", "" ] ]
Point clouds are often sparse and incomplete, which imposes difficulties for real-world applications. Existing shape completion methods tend to generate rough shapes without fine-grained details. Considering this, we introduce a two-branch network for shape completion. The first branch is a cascaded shape completion sub-network to synthesize complete objects, where we propose to use the partial input together with the coarse output to preserve the object details during the dense point reconstruction. The second branch is an auto-encoder to reconstruct the original partial input. The two branches share the same feature extractor to learn an accurate global feature for shape completion. Furthermore, we propose two strategies to enable the training of our network when ground truth data are not available. This is to mitigate the dependence of existing approaches on large amounts of ground truth training data that are often difficult to obtain in real-world applications. Additionally, our proposed strategies are also able to improve the reconstruction quality for fully supervised learning. We verify our approach in self-supervised, semi-supervised and fully supervised settings with superior performance. Quantitative and qualitative results on different datasets demonstrate that our method achieves more realistic outputs than state-of-the-art approaches on the point cloud completion task.
1709.07192
Yikang Li
Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang
Visual Question Generation as Dual Task of Visual Question Answering
9 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual question answering (VQA) and visual question generation (VQG) are two recently trending topics in computer vision that have largely been explored separately. In this work, we propose an end-to-end unified framework, the Invertible Question Answering Network (iQAN), to leverage the complementary relations between questions and answers in images by jointly training the model on VQA and VQG tasks. A corresponding parameter-sharing scheme and regularization terms are proposed as constraints to explicitly leverage the dependencies between questions and answers to guide the training process. After training, iQAN can take either a question or an answer as input and output the counterpart. Evaluated on the large-scale visual question answering datasets CLEVR and VQA2, our iQAN improves VQA accuracy over the baselines. We also show that the dual learning framework of iQAN can be generalized to other VQA architectures and consistently improves results on both the VQA and VQG tasks.
[ { "created": "Thu, 21 Sep 2017 08:04:48 GMT", "version": "v1" } ]
2017-09-22
[ [ "Li", "Yikang", "" ], [ "Duan", "Nan", "" ], [ "Zhou", "Bolei", "" ], [ "Chu", "Xiao", "" ], [ "Ouyang", "Wanli", "" ], [ "Wang", "Xiaogang", "" ] ]
Visual question answering (VQA) and visual question generation (VQG) are two recently trending topics in computer vision that have largely been explored separately. In this work, we propose an end-to-end unified framework, the Invertible Question Answering Network (iQAN), to leverage the complementary relations between questions and answers in images by jointly training the model on VQA and VQG tasks. A corresponding parameter-sharing scheme and regularization terms are proposed as constraints to explicitly leverage the dependencies between questions and answers to guide the training process. After training, iQAN can take either a question or an answer as input and output the counterpart. Evaluated on the large-scale visual question answering datasets CLEVR and VQA2, our iQAN improves VQA accuracy over the baselines. We also show that the dual learning framework of iQAN can be generalized to other VQA architectures and consistently improves results on both the VQA and VQG tasks.
1705.11087
Neil Ernst
Neil A. Ernst and Stephany Bellomo and Ipek Ozkaya and Robert L. Nord
What to Fix? Distinguishing between design and non-design rules in automated tools
Long version of accepted short paper at International Conference on Software Architecture 2017 (Gothenburg, SE)
Proceedings of International Conference on Software Architecture, pp 165-168, 2017
10.1109/ICSA.2017.25
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Technical debt---design shortcuts taken to optimize for delivery speed---is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This is important, since previous studies have revealed that the most significant technical debt is related to design issues. Other research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach, and manually classified 466 software quality rules from three industry tools---CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, in order to support automatic detection.
[ { "created": "Wed, 31 May 2017 13:35:37 GMT", "version": "v1" } ]
2017-06-01
[ [ "Ernst", "Neil A.", "" ], [ "Bellomo", "Stephany", "" ], [ "Ozkaya", "Ipek", "" ], [ "Nord", "Robert L.", "" ] ]
Technical debt---design shortcuts taken to optimize for delivery speed---is a critical part of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This is important, since previous studies have revealed the most significant technical debt is related to design issues. Other research has focused on comparing these tools on open source projects, but these comparisons have not looked at whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach, and manually classify 466 software quality rules from three industry tools---CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%). The remainder (26%) resulted in disagreements among the labelers. Our results are a first step in formalizing a definition of a design rule, in order to support automatic detection.
2307.14024
Sen Zhao
Sen Zhao, Wei Wei, Xian-Ling Mao, Shuai Zhu, Minghui Yang, Zujie Wen, Dangyang Chen, Feida Zhu
Multi-view Hypergraph Contrastive Policy Learning for Conversational Recommendation
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conversational recommendation systems (CRS) aim to interactively acquire user preferences and accordingly recommend items to users. Accurately learning the dynamic user preferences is of crucial importance for CRS. Previous works learn the user preferences with pairwise relations from the interactive conversation and item knowledge, while largely ignoring the fact that the factors behind a relationship in CRS are multiplex. Specifically, the user likes/dislikes items that satisfy certain attributes (Like/Dislike view). Moreover, social influence is another important factor that affects user preference towards an item (Social view), yet it is largely ignored by previous works in CRS. The user preferences from these three views are inherently different but correlated as a whole: preferences from the same view should be more similar than those from different views, and preferences from the Like view should be similar to those from the Social view while differing from those from the Dislike view. To this end, we propose a novel model, namely Multi-view Hypergraph Contrastive Policy Learning (MHCPL). Specifically, MHCPL dynamically selects useful social information according to the interaction history and builds a dynamic hypergraph with three types of multiplex relations from different views. The multiplex relations in each view are successively connected according to their generation order.
[ { "created": "Wed, 26 Jul 2023 08:08:05 GMT", "version": "v1" } ]
2023-07-27
[ [ "Zhao", "Sen", "" ], [ "Wei", "Wei", "" ], [ "Mao", "Xian-Ling", "" ], [ "Zhu", "Shuai", "" ], [ "Yang", "Minghui", "" ], [ "Wen", "Zujie", "" ], [ "Chen", "Dangyang", "" ], [ "Zhu", "Feida", "" ] ]
Conversational recommendation systems (CRS) aim to interactively acquire user preferences and accordingly recommend items to users. Accurately learning the dynamic user preferences is of crucial importance for CRS. Previous works learn the user preferences with pairwise relations from the interactive conversation and item knowledge, while largely ignoring the fact that the factors behind a relationship in CRS are multiplex. Specifically, the user likes/dislikes items that satisfy certain attributes (Like/Dislike view). Moreover, social influence is another important factor that affects user preference towards an item (Social view), yet it is largely ignored by previous works in CRS. The user preferences from these three views are inherently different but correlated as a whole: preferences from the same view should be more similar than those from different views, and preferences from the Like view should be similar to those from the Social view while differing from those from the Dislike view. To this end, we propose a novel model, namely Multi-view Hypergraph Contrastive Policy Learning (MHCPL). Specifically, MHCPL dynamically selects useful social information according to the interaction history and builds a dynamic hypergraph with three types of multiplex relations from different views. The multiplex relations in each view are successively connected according to their generation order.
2207.09655
Fakhri Momeni
Fakhri Momeni, Philipp Mayr and Stefan Dietze
Investigating the contribution of author- and publication-specific features to scholars' h-index prediction
14 pages, 1 figure
EPJ Data Science 2023
10.1140/epjds/s13688-023-00421-6
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluation of researchers' output is vital for hiring committees and funding bodies, and it is usually measured via their scientific productivity, citations, or a combined metric such as the h-index. Assessing young researchers is more critical because it takes time to accumulate citations and grow the h-index. Hence, predicting the h-index can help to anticipate a researcher's scientific impact. In addition, identifying the factors that influence this prediction is helpful for researchers seeking to improve their impact. This study investigates the effect of author-, paper- and venue-specific features on the future h-index. For this purpose, we used machine learning methods to predict the h-index and feature analysis techniques to advance the understanding of feature impact. Utilizing the bibliometric data in Scopus, we defined and extracted two main groups of features. The first relates to prior scientific impact; we name it 'prior impact-based features', and it includes the number of publications, received citations, and the h-index. The second group, 'non-impact-based features', contains features related to author, co-authorship, paper, and venue characteristics. We explored their importance in predicting the h-index for researchers in three different career phases. We also examined the temporal dimension of prediction performance for different feature categories to find out which features are more reliable for long- and short-term prediction, and we considered the gender of the authors to examine the role of this characteristic in the prediction task. Our findings show that gender has only a very slight effect on predicting the h-index, that non-impact-based features are more robust predictors for younger scholars than for seniors in the short term, and that prior impact-based features lose their predictive power more than other features in the long term.
[ { "created": "Wed, 20 Jul 2022 05:12:26 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2022 15:50:19 GMT", "version": "v2" }, { "created": "Tue, 7 Mar 2023 08:44:10 GMT", "version": "v3" }, { "created": "Mon, 27 Mar 2023 10:10:09 GMT", "version": "v4" }, { "created": "Mon, 7 Aug 2023 13:13:56 GMT", "version": "v5" }, { "created": "Wed, 9 Aug 2023 09:08:48 GMT", "version": "v6" } ]
2023-10-10
[ [ "Momeni", "Fakhri", "" ], [ "Mayr", "Philipp", "" ], [ "Dietze", "Stefan", "" ] ]
Evaluation of researchers' output is vital for hiring committees and funding bodies, and it is usually measured via their scientific productivity, citations, or a combined metric such as the h-index. Assessing young researchers is more critical because it takes time to accumulate citations and grow the h-index. Hence, predicting the h-index can help to anticipate a researcher's scientific impact. In addition, identifying the factors that influence this prediction is helpful for researchers seeking to improve their impact. This study investigates the effect of author-, paper- and venue-specific features on the future h-index. For this purpose, we used machine learning methods to predict the h-index and feature analysis techniques to advance the understanding of feature impact. Utilizing the bibliometric data in Scopus, we defined and extracted two main groups of features. The first relates to prior scientific impact; we name it 'prior impact-based features', and it includes the number of publications, received citations, and the h-index. The second group, 'non-impact-based features', contains features related to author, co-authorship, paper, and venue characteristics. We explored their importance in predicting the h-index for researchers in three different career phases. We also examined the temporal dimension of prediction performance for different feature categories to find out which features are more reliable for long- and short-term prediction, and we considered the gender of the authors to examine the role of this characteristic in the prediction task. Our findings show that gender has only a very slight effect on predicting the h-index, that non-impact-based features are more robust predictors for younger scholars than for seniors in the short term, and that prior impact-based features lose their predictive power more than other features in the long term.
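A minimal sketch of the kind of pipeline the abstract describes -- predicting a future h-index from impact-based and non-impact-based features and inspecting feature importances -- might look as follows. The synthetic features and the toy target are purely illustrative stand-ins for the Scopus data, and the random forest is an assumption rather than the paper's chosen model.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
pubs = rng.poisson(20, n)                      # prior impact-based features
cites = rng.poisson(200, n)
h_now = np.sqrt(cites) * 0.9
coauthors = rng.poisson(15, n)                 # non-impact-based features
venue_rank = rng.uniform(0, 1, n)
X = np.column_stack([pubs, cites, h_now, coauthors, venue_rank])
y = h_now + 0.01 * cites + 2 * venue_rank + rng.normal(0, 1, n)  # toy "future h-index"
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("test R^2:", model.score(Xte, yte))
for name, imp in zip(["pubs", "cites", "h_now", "coauthors", "venue"], model.feature_importances_):
    print(name, round(imp, 3))                 # which feature group drives the prediction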
2310.18974
Ashutosh Modi
Ashutosh Dwivedi, Pradhyumna Lavania, Ashutosh Modi
EtiCor: Corpus for Analyzing LLMs for Etiquettes
Accepted at EMNLP 2023, Main Conference
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Etiquettes are an essential ingredient of day-to-day interactions among people. Moreover, etiquettes are region-specific, and etiquettes in one region might contradict those in other regions. In this paper, we propose EtiCor, an Etiquettes Corpus containing texts about social norms from five different regions across the globe. The corpus provides a test bed for evaluating LLMs' knowledge and understanding of region-specific etiquettes. Additionally, we propose the task of Etiquette Sensitivity. We experiment with state-of-the-art LLMs (Delphi, Falcon40B, and GPT-3.5). Initial results indicate that LLMs mostly fail to understand etiquettes from non-Western regions.
[ { "created": "Sun, 29 Oct 2023 10:47:23 GMT", "version": "v1" } ]
2023-10-31
[ [ "Dwivedi", "Ashutosh", "" ], [ "Lavania", "Pradhyumna", "" ], [ "Modi", "Ashutosh", "" ] ]
Etiquettes are an essential ingredient of day-to-day interactions among people. Moreover, etiquettes are region-specific, and etiquettes in one region might contradict those in other regions. In this paper, we propose EtiCor, an Etiquettes Corpus containing texts about social norms from five different regions across the globe. The corpus provides a test bed for evaluating LLMs' knowledge and understanding of region-specific etiquettes. Additionally, we propose the task of Etiquette Sensitivity. We experiment with state-of-the-art LLMs (Delphi, Falcon40B, and GPT-3.5). Initial results indicate that LLMs mostly fail to understand etiquettes from non-Western regions.
2202.02892
Berivan Isik
Berivan Isik, Tsachy Weissman
Lossy Compression of Noisy Data for Private and Data-Efficient Learning
Published at the IEEE Journal on Selected Areas in Information Theory (JSAIT). Preliminary version was presented at the IEEE International Symposium on Information Theory (ISIT), 2022, with a slightly different title, "Learning under Storage and Privacy Constraints."
null
null
null
cs.IT cs.LG eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Storage-efficient privacy-preserving learning is crucial due to increasing amounts of sensitive user data required for modern learning tasks. We propose a framework for reducing the storage cost of user data while at the same time providing privacy guarantees, without essential loss in the utility of the data for learning. Our method comprises noise injection followed by lossy compression. We show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise-free training data as the sample size of the training data (or the dimension of the training data) increases. In this sense, the utility of the data for learning is essentially maintained, while reducing storage and privacy leakage by quantifiable amounts. We present experimental results on the CelebA dataset for gender classification and find that our suggested pipeline delivers in practice on the promise of the theory: the individuals in the images are unrecognizable (or less recognizable, depending on the noise level), overall storage of the data is substantially reduced, with no essential loss (and in some cases a slight boost) to the classification accuracy. As an added bonus, our experiments suggest that our method yields a substantial boost to robustness in the face of adversarial test data.
[ { "created": "Mon, 7 Feb 2022 00:15:03 GMT", "version": "v1" }, { "created": "Tue, 3 Jan 2023 12:21:32 GMT", "version": "v2" }, { "created": "Mon, 20 Mar 2023 21:06:06 GMT", "version": "v3" }, { "created": "Wed, 22 Mar 2023 05:23:53 GMT", "version": "v4" } ]
2023-03-23
[ [ "Isik", "Berivan", "" ], [ "Weissman", "Tsachy", "" ] ]
Storage-efficient privacy-preserving learning is crucial due to increasing amounts of sensitive user data required for modern learning tasks. We propose a framework for reducing the storage cost of user data while at the same time providing privacy guarantees, without essential loss in the utility of the data for learning. Our method comprises noise injection followed by lossy compression. We show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise-free training data as the sample size of the training data (or the dimension of the training data) increases. In this sense, the utility of the data for learning is essentially maintained, while reducing storage and privacy leakage by quantifiable amounts. We present experimental results on the CelebA dataset for gender classification and find that our suggested pipeline delivers in practice on the promise of the theory: the individuals in the images are unrecognizable (or less recognizable, depending on the noise level), overall storage of the data is substantially reduced, with no essential loss (and in some cases a slight boost) to the classification accuracy. As an added bonus, our experiments suggest that our method yields a substantial boost to robustness in the face of adversarial test data.
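The pipeline described above -- noise injection followed by lossy compression -- can be illustrated with a generic scalar example. The uniform quantizer with step size matched to the noise level is an assumption for illustration, not the paper's exact matched compressor.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=1000)            # toy stand-in for training examples
sigma = 0.1
noisy = x + rng.normal(0, sigma, x.shape)   # step 1: noise injection for privacy
step = sigma * np.sqrt(12)                  # step 2: quantizer step matched to the noise level
compressed = np.round(noisy / step) * step  # lossy compression to a few discrete levels
levels = np.unique(compressed).size
print(f"~{np.log2(levels):.1f} bits/sample instead of 64 for raw floats")
print("mean squared distortion vs clean data:", np.mean((compressed - x) ** 2))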
2308.01978
Sohil Lal Shrestha
Sohil Lal Shrestha and Shafiul Azam Chowdhury and Christoph Csallner
Replicability Study: Corpora For Understanding Simulink Models & Projects
Changed A4 paper to letter paper size
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Background: Empirical studies on widely used model-based development tools such as MATLAB/Simulink are limited despite the tools' importance in various industries. Aims: The aim of this paper is to investigate the reproducibility of previous empirical studies that used Simulink model corpora and to evaluate the generalizability of their results to a newer and larger corpus, including a comparison with proprietary models. Method: The study reviews methodologies and data sources employed in prior Simulink model studies and replicates the previous analysis using SLNET. In addition, we propose a heuristic for determining code-generating Simulink models and assess the open-source models' similarity to proprietary models. Results: Our analysis of SLNET confirms some earlier findings, contradicts others, and highlights its potential as a valuable resource for model-based development research. We found that open-source Simulink models follow good modeling practices and that the corpus contains models comparable in size and properties to proprietary models. We also collect and distribute 208 git repositories with over 9k commits, facilitating studies on model evolution. Conclusions: The replication study offers actionable insights and lessons learned from the reproduction process, including valuable information on the generalizability of research findings based on earlier open-source corpora to the newer and larger SLNET corpus. The study sheds light on noteworthy attributes of SLNET, which is self-contained and redistributable.
[ { "created": "Thu, 3 Aug 2023 18:14:54 GMT", "version": "v1" }, { "created": "Wed, 9 Aug 2023 14:25:26 GMT", "version": "v2" } ]
2023-08-10
[ [ "Shrestha", "Sohil Lal", "" ], [ "Chowdhury", "Shafiul Azam", "" ], [ "Csallner", "Christoph", "" ] ]
Background: Empirical studies on widely used model-based development tools such as MATLAB/Simulink are limited despite the tools' importance in various industries. Aims: The aim of this paper is to investigate the reproducibility of previous empirical studies that used Simulink model corpora and to evaluate the generalizability of their results to a newer and larger corpus, including a comparison with proprietary models. Method: The study reviews methodologies and data sources employed in prior Simulink model studies and replicates the previous analysis using SLNET. In addition, we propose a heuristic for determining code-generating Simulink models and assess the open-source models' similarity to proprietary models. Results: Our analysis of SLNET confirms some earlier findings, contradicts others, and highlights its potential as a valuable resource for model-based development research. We found that open-source Simulink models follow good modeling practices and that the corpus contains models comparable in size and properties to proprietary models. We also collect and distribute 208 git repositories with over 9k commits, facilitating studies on model evolution. Conclusions: The replication study offers actionable insights and lessons learned from the reproduction process, including valuable information on the generalizability of research findings based on earlier open-source corpora to the newer and larger SLNET corpus. The study sheds light on noteworthy attributes of SLNET, which is self-contained and redistributable.
2102.04235
Fernando Almeida Dr.
Fernando Almeida and Jos\'e Monteiro
The Challenges of Assessing and Evaluating the Students at Distance
8 pages, 10 references
Journal of Online Higher Education, 2021
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic has had a strong effect on higher education institutions with the closure of classroom teaching activities. In this unprecedented crisis of global proportion, educators and families had to deal with unpredictability and learn new ways of teaching. This short essay aims to explore the challenges posed to Portuguese higher education institutions and to analyze the challenges posed to evaluation models. To this end, the relevance of formative and summative assessment models in distance education is explored, and the perceptions of teachers and students about the practices adopted in remote assessment are discussed. On the teachers' side, there is a strong concern with adopting fraud-free models and an excessive focus on the summative assessment component, which in the distance learning model carries less weight compared to the gradual monitoring and assessment of students. On the students' side, problems arise regarding the equipment needed to follow the teaching sessions and concerns about privacy, particularly when intrusive IT solutions request access to their cameras, audio, and desktops.
[ { "created": "Sat, 30 Jan 2021 13:13:45 GMT", "version": "v1" } ]
2021-02-09
[ [ "Almeida", "Fernando", "" ], [ "Monteiro", "José", "" ] ]
The COVID-19 pandemic has had a strong effect on higher education institutions with the closure of classroom teaching activities. In this unprecedented crisis of global proportion, educators and families had to deal with unpredictability and learn new ways of teaching. This short essay aims to explore the challenges posed to Portuguese higher education institutions and to analyze the challenges posed to evaluation models. To this end, the relevance of formative and summative assessment models in distance education is explored, and the perceptions of teachers and students about the practices adopted in remote assessment are discussed. On the teachers' side, there is a strong concern with adopting fraud-free models and an excessive focus on the summative assessment component, which in the distance learning model carries less weight compared to the gradual monitoring and assessment of students. On the students' side, problems arise regarding the equipment needed to follow the teaching sessions and concerns about privacy, particularly when intrusive IT solutions request access to their cameras, audio, and desktops.
2106.06731
Ning Shi
Ning Shi, Wei Wang, Boxin Wang, Jinfeng Li, Xiangyu Liu and Zhouhan Lin
Incorporating External POS Tagger for Punctuation Restoration
Accepted to Interspeech 2021
null
10.21437/Interspeech.2021-1708
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Punctuation restoration is an important post-processing step in automatic speech recognition. Among other kinds of external information, part-of-speech (POS) taggers provide informative tags suggesting each input token's syntactic role, which has been shown to be beneficial for the punctuation restoration task. In this work, we incorporate an external POS tagger and fuse its predicted labels into the existing language model to provide syntactic information. Besides, we propose sequence boundary sampling (SBS) to learn punctuation positions more efficiently as a sequence tagging task. Experimental results show that our methods consistently obtain performance gains and achieve a new state of the art on the common IWSLT benchmark. Further ablation studies illustrate that both the large pre-trained language model and the external POS tagger play essential roles in improving the model's performance.
[ { "created": "Sat, 12 Jun 2021 09:58:06 GMT", "version": "v1" } ]
2021-09-08
[ [ "Shi", "Ning", "" ], [ "Wang", "Wei", "" ], [ "Wang", "Boxin", "" ], [ "Li", "Jinfeng", "" ], [ "Liu", "Xiangyu", "" ], [ "Lin", "Zhouhan", "" ] ]
Punctuation restoration is an important post-processing step in automatic speech recognition. Among other kinds of external information, part-of-speech (POS) taggers provide informative tags suggesting each input token's syntactic role, which has been shown to be beneficial for the punctuation restoration task. In this work, we incorporate an external POS tagger and fuse its predicted labels into the existing language model to provide syntactic information. Besides, we propose sequence boundary sampling (SBS) to learn punctuation positions more efficiently as a sequence tagging task. Experimental results show that our methods consistently obtain performance gains and achieve a new state of the art on the common IWSLT benchmark. Further ablation studies illustrate that both the large pre-trained language model and the external POS tagger play essential roles in improving the model's performance.
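A minimal sketch of fusing external POS tags into a per-token punctuation tagger could look as follows. The additive fusion, the GRU encoder standing in for the pre-trained language model, and all sizes are assumptions for illustration rather than the paper's architecture.

import torch
import torch.nn as nn

VOCAB, POS_TAGS, D, LABELS = 1000, 17, 128, 4  # 4 labels: none, comma, period, question

class PunctTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, D)    # stand-in for a pre-trained LM encoder
        self.pos_emb = nn.Embedding(POS_TAGS, D) # embeddings of the external POS labels
        self.enc = nn.GRU(D, D, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * D, LABELS)     # punctuation label after each token

    def forward(self, tokens, pos_tags):
        h = self.tok_emb(tokens) + self.pos_emb(pos_tags)  # simple additive fusion
        out, _ = self.enc(h)
        return self.head(out)

model = PunctTagger()
tokens = torch.randint(0, VOCAB, (2, 16))
pos = torch.randint(0, POS_TAGS, (2, 16))       # predicted by the external tagger
labels = torch.randint(0, LABELS, (2, 16))
loss = nn.functional.cross_entropy(model(tokens, pos).reshape(-1, LABELS), labels.reshape(-1))
loss.backward()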
2105.10037
Dripta S. Raychaudhuri
Dripta S. Raychaudhuri, Sujoy Paul, Jeroen van Baar, Amit K. Roy-Chowdhury
Cross-domain Imitation from Observations
Accepted at ICML 2021 as a long presentation
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDPs), most of the existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitation policy is to be learned. In this paper, we study the problem of how to imitate tasks when there are discrepancies between the expert and agent MDPs. These discrepancies across domains could include differing dynamics, viewpoint, or morphology; we present a novel framework to learn correspondences across such domains. Importantly, in contrast to prior works, we use unpaired and unaligned trajectories containing only states in the expert domain to learn this correspondence. We utilize a cycle-consistency constraint on both the state space and a domain-agnostic latent space to do this. In addition, we enforce consistency on the temporal position of states via a normalized position estimator function to align the trajectories across the two domains. Once this correspondence is found, we can directly transfer the demonstrations from one domain to the other and use them for imitation. Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.
[ { "created": "Thu, 20 May 2021 21:08:25 GMT", "version": "v1" } ]
2021-05-24
[ [ "Raychaudhuri", "Dripta S.", "" ], [ "Paul", "Sujoy", "" ], [ "van Baar", "Jeroen", "" ], [ "Roy-Chowdhury", "Amit K.", "" ] ]
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDPs), most of the existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitation policy is to be learned. In this paper, we study the problem of how to imitate tasks when there are discrepancies between the expert and agent MDPs. These discrepancies across domains could include differing dynamics, viewpoint, or morphology; we present a novel framework to learn correspondences across such domains. Importantly, in contrast to prior works, we use unpaired and unaligned trajectories containing only states in the expert domain to learn this correspondence. We utilize a cycle-consistency constraint on both the state space and a domain-agnostic latent space to do this. In addition, we enforce consistency on the temporal position of states via a normalized position estimator function to align the trajectories across the two domains. Once this correspondence is found, we can directly transfer the demonstrations from one domain to the other and use them for imitation. Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.
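The cycle-consistency constraint mentioned above can be sketched in isolation: two learned maps between the expert and agent state spaces are trained so that composing them returns the input. The linear maps and dimensions below are hypothetical, and the adversarial alignment and position estimator of the full framework are omitted.

import torch
import torch.nn as nn

Dx, Dy = 8, 6                 # hypothetical state dims of expert and agent domains
f = nn.Linear(Dx, Dy)         # expert-state -> agent-state map
g = nn.Linear(Dy, Dx)         # agent-state -> expert-state map

x = torch.randn(32, Dx)       # unpaired expert states
y = torch.randn(32, Dy)       # unpaired agent states
cycle_loss = nn.functional.mse_loss(g(f(x)), x) + nn.functional.mse_loss(f(g(y)), y)
cycle_loss.backward()         # one term of the full training objective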
1605.00462
Jesper Nederlof
Per Austrin, Petteri Kaski, Mikko Koivisto, Jesper Nederlof
Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs
11 pages; to appear at ISIT 2016
null
null
null
cs.IT cs.DM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two sets $A, B \subseteq \{0, 1\}^n$ form a Uniquely Decodable Code Pair (UDCP) if every pair $a \in A$, $b \in B$ yields a distinct sum $a+b$, where the addition is over $\mathbb{Z}^n$. We show that every UDCP $A, B$, with $|A| = 2^{(1-\epsilon)n}$ and $|B| = 2^{\beta n}$, satisfies $\beta \leq 0.4228 +\sqrt{\epsilon}$. For sufficiently small $\epsilon$, this bound significantly improves previous bounds by Urbanke and Li~[Information Theory Workshop '98] and Ordentlich and Shayevitz~[2014, arXiv:1412.8415], which upper bound $\beta$ by $0.4921$ and $0.4798$, respectively, as $\epsilon$ approaches $0$.
[ { "created": "Mon, 2 May 2016 12:58:13 GMT", "version": "v1" } ]
2016-05-03
[ [ "Austrin", "Per", "" ], [ "Kaski", "Petteri", "" ], [ "Koivisto", "Mikko", "" ], [ "Nederlof", "Jesper", "" ] ]
Two sets $A, B \subseteq \{0, 1\}^n$ form a Uniquely Decodable Code Pair (UDCP) if every pair $a \in A$, $b \in B$ yields a distinct sum $a+b$, where the addition is over $\mathbb{Z}^n$. We show that every UDCP $A, B$, with $|A| = 2^{(1-\epsilon)n}$ and $|B| = 2^{\beta n}$, satisfies $\beta \leq 0.4228 +\sqrt{\epsilon}$. For sufficiently small $\epsilon$, this bound significantly improves previous bounds by Urbanke and Li~[Information Theory Workshop '98] and Ordentlich and Shayevitz~[2014, arXiv:1412.8415], which upper bound $\beta$ by $0.4921$ and $0.4798$, respectively, as $\epsilon$ approaches $0$.
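The defining property is easy to check by brute force for small sets; the example pair below is a standard small instance, not taken from the paper.

from itertools import product

def is_udcp(A, B):
    # all pairwise sums over Z^n must be distinct for the pair to be uniquely decodable
    sums = [tuple(ai + bi for ai, bi in zip(a, b)) for a, b in product(A, B)]
    return len(sums) == len(set(sums))

A = [(0, 0), (1, 1)]
print(is_udcp(A, [(0, 0), (0, 1), (1, 0)]))         # True: a valid UDCP with |A||B| = 6
print(is_udcp(A, list(product((0, 1), repeat=2))))  # False: (0,0)+(1,1) == (1,1)+(0,0)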
2011.02277
Subasish Das
Subasish Das
Ridesharing Services and Car-Seats: Technological Perceptions and Usage Patterns
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Children are one of the most vulnerable groups in traffic crashes. Child safety seats (CSSs) can decrease the severity of crash outcomes for children. The usage of CSSs has significantly improved in the U.S. over the last 40 years, but it is anticipated that the usage of CSSs in popular ridesharing services (RSSs), such as Uber and Lyft, is not widespread. This paper used a publicly available nationwide internet survey that was designed to gain an understanding of riders' and drivers' perceptions of child passenger safety and of the technology underlying RSSs. This study performed a rigorous exploratory data analysis to identify the key psychological insights of the survey participants. Additionally, a recently developed dimension-reduction method was applied to understand the co-occurrence patterns of the responses and gain intuitive insights. It was found that urban-dwelling parents with higher education degrees use RSSs often due to their familiarity with the technological advantages. On the other hand, non-urban and moderately educated parents and guardians are reluctant to use RSSs when riding with their children due to lower trust in the technology.
[ { "created": "Mon, 2 Nov 2020 06:52:33 GMT", "version": "v1" } ]
2020-11-05
[ [ "Das", "Subasish", "" ] ]
Children are one of the most vulnerable groups in traffic crashes. Child safety seats (CSSs) can decrease the severity of crash outcomes for children. The usage of CSSs has significantly improved in the U.S. over the last 40 years, but it is anticipated that the usage of CSSs in popular ridesharing services (RSSs), such as Uber and Lyft, is not widespread. This paper used a publicly available nationwide internet survey that was designed to gain an understanding of riders' and drivers' perceptions of child passenger safety and of the technology underlying RSSs. This study performed a rigorous exploratory data analysis to identify the key psychological insights of the survey participants. Additionally, a recently developed dimension-reduction method was applied to understand the co-occurrence patterns of the responses and gain intuitive insights. It was found that urban-dwelling parents with higher education degrees use RSSs often due to their familiarity with the technological advantages. On the other hand, non-urban and moderately educated parents and guardians are reluctant to use RSSs when riding with their children due to lower trust in the technology.
1708.01680
Amir Saeidi
Amir Saeidi (Utrecht University, Netherlands), Jurriaan Hage (Universiteit Utrecht, Netherlands), Ravi Khadka (Utrecht University, Netherlands), Slinger Jansen (Utrecht University, Netherlands)
On the Effect of Semantically Enriched Context Models on Software Modularization
null
The Art, Science, and Engineering of Programming, 2018, Vol. 2, Issue 1, Article 2
10.22152/programming-journal.org/2018/2/2
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of the system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used for both deriving the topics that run through the system as well as their clustering. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers to represent a module as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both of the context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that inferred topics through performing topic analysis on the contextual representations are more meaningful compared to the plain representation of the documents. The proposed approach in introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization and topic analysis.
[ { "created": "Fri, 4 Aug 2017 23:08:52 GMT", "version": "v1" } ]
2017-08-08
[ [ "Saeidi", "Amir", "", "Utrecht University, Netherlands" ], [ "Hage", "Jurriaan", "", "Universiteit Utrecht, Netherlands" ], [ "Khadka", "Ravi", "", "Utrecht University,\n Netherlands" ], [ "Jansen", "Slinger", "", "Utrecht University, Netherlands" ] ]
Many of the existing approaches for program comprehension rely on the linguistic information found in source code, such as identifier names and comments. Semantic clustering is one such technique for modularization of the system that relies on the informal semantics of the program, encoded in the vocabulary used in the source code. Treating the source code as a collection of tokens loses the semantic information embedded within the identifiers. We try to overcome this problem by introducing context models for source code identifiers to obtain a semantic kernel, which can be used for both deriving the topics that run through the system as well as their clustering. In the first model, we abstract an identifier to its type representation and build on this notion of context to construct contextual vector representation of the source code. The second notion of context is defined based on the flow of data between identifiers to represent a module as a dependency graph where the nodes correspond to identifiers and the edges represent the data dependencies between pairs of identifiers. We have applied our approach to 10 medium-sized open source Java projects, and show that by introducing contexts for identifiers, the quality of the modularization of the software systems is improved. Both of the context models give results that are superior to the plain vector representation of documents. In some cases, the authoritativeness of decompositions is improved by 67%. Furthermore, a more detailed evaluation of our approach on JEdit, an open source editor, demonstrates that inferred topics through performing topic analysis on the contextual representations are more meaningful compared to the plain representation of the documents. The proposed approach in introducing a context model for source code identifiers paves the way for building tools that support developers in program comprehension tasks such as application and domain concept location, software modularization and topic analysis.
1809.04771
EPTCS
Ekaterina Komendantskaya Dr (Heriot-Watt University), Yue Li (Heriot-Watt University)
Towards Coinductive Theory Exploration in Horn Clause Logic: Position Paper
In Proceedings HCVS 2018, arXiv:1809.04554
EPTCS 278, 2018, pp. 27-33
10.4204/EPTCS.278.5
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coinduction occurs in two guises in Horn clause logic: in proofs of self-referencing properties and relations, and in proofs involving construction of (possibly irregular) infinite data. Both instances of coinductive reasoning appeared in the literature before, but a systematic analysis of these two kinds of proofs and of their relation was lacking. We propose a general proof-theoretic framework for handling both kinds of coinduction arising in Horn clause logic. To this aim, we propose a coinductive extension of Miller et al's framework of uniform proofs and prove its soundness relative to coinductive models of Horn clause logic.
[ { "created": "Thu, 13 Sep 2018 04:48:51 GMT", "version": "v1" } ]
2018-09-14
[ [ "Dr", "Ekaterina Komendantskaya", "", "Heriot-Watt University" ], [ "Li", "Yue", "", "Heriot-Watt University" ] ]
Coinduction occurs in two guises in Horn clause logic: in proofs of self-referencing properties and relations, and in proofs involving construction of (possibly irregular) infinite data. Both instances of coinductive reasoning appeared in the literature before, but a systematic analysis of these two kinds of proofs and of their relation was lacking. We propose a general proof-theoretic framework for handling both kinds of coinduction arising in Horn clause logic. To this aim, we propose a coinductive extension of Miller et al's framework of uniform proofs and prove its soundness relative to coinductive models of Horn clause logic.
2212.00227
Maojun Zhang
Maojun Zhang, Yang Li, Zezhong Zhang, Guangxu Zhu, Caijun Zhong
Wireless Image Transmission with Semantic and Security Awareness
Submitted to IEEE WCL for possible publication
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Semantic communication is an increasingly popular framework for wireless image transmission due to its high communication efficiency. With the aid of a joint-source-and-channel (JSC) encoder implemented by a neural network, semantic communication directly maps original images into symbol sequences containing semantic information. Compared with the traditional separate source and channel coding design used in bit-level communication systems, semantic communication systems are known to be more efficient and accurate, especially in the low signal-to-noise ratio (SNR) regime. This, however, raises a critical and yet-to-be-tackled security issue in semantic communication: it becomes easier for an eavesdropper to recover the semantic information, as it can be decoded even over a quite noisy channel. In this letter, we develop a semantic communication framework that accounts for both semantic decoding efficiency and the risk of privacy leakage. To achieve this, targeting wireless image transmission, we propose, on the one hand, a JSC autoencoder featuring residual connections for efficient semantic extraction and transmission, and, on the other hand, a data-driven scheme that balances the efficiency-privacy tradeoff. Extensive experimental results are provided to show the effectiveness and robustness of the proposed scheme.
[ { "created": "Thu, 1 Dec 2022 02:22:08 GMT", "version": "v1" } ]
2022-12-02
[ [ "Zhang", "Maojun", "" ], [ "Li", "Yang", "" ], [ "Zhang", "Zezhong", "" ], [ "Zhu", "Guangxu", "" ], [ "Zhong", "Caijun", "" ] ]
Semantic communication is an increasingly popular framework for wireless image transmission due to its high communication efficiency. With the aid of a joint-source-and-channel (JSC) encoder implemented by a neural network, semantic communication directly maps original images into symbol sequences containing semantic information. Compared with the traditional separate source and channel coding design used in bit-level communication systems, semantic communication systems are known to be more efficient and accurate, especially in the low signal-to-noise ratio (SNR) regime. This, however, raises a critical and yet-to-be-tackled security issue in semantic communication: it becomes easier for an eavesdropper to recover the semantic information, as it can be decoded even over a quite noisy channel. In this letter, we develop a semantic communication framework that accounts for both semantic decoding efficiency and the risk of privacy leakage. To achieve this, targeting wireless image transmission, we propose, on the one hand, a JSC autoencoder featuring residual connections for efficient semantic extraction and transmission, and, on the other hand, a data-driven scheme that balances the efficiency-privacy tradeoff. Extensive experimental results are provided to show the effectiveness and robustness of the proposed scheme.
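A generic JSC autoencoder with an AWGN channel in the middle -- omitting the residual design and the privacy mechanism, and with all sizes hypothetical -- can be sketched as:

import torch
import torch.nn as nn

D_IMG, D_SYM, SNR_DB = 784, 64, 10   # flattened image dim, channel uses, channel SNR (illustrative)

enc = nn.Sequential(nn.Linear(D_IMG, 256), nn.ReLU(), nn.Linear(256, D_SYM))
dec = nn.Sequential(nn.Linear(D_SYM, 256), nn.ReLU(), nn.Linear(256, D_IMG))

x = torch.rand(8, D_IMG)                              # toy image batch
z = enc(x)
z = z / z.norm(dim=1, keepdim=True) * D_SYM ** 0.5    # unit average power per symbol
noise_std = (10 ** (-SNR_DB / 10)) ** 0.5             # AWGN matching the target SNR
x_hat = dec(z + noise_std * torch.randn_like(z))      # decode from the noisy channel output
nn.functional.mse_loss(x_hat, x).backward()           # train encoder and decoder end to end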
1909.01201
Mihailo Stojnic
Mihailo Stojnic
Starting CLuP with polytope relaxation
null
null
null
null
cs.IT math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Controlled Loosening-up (CLuP) mechanism that we recently introduced in \cite{Stojnicclupint19} is a generic concept that can be utilized to solve a large class of problems in polynomial time. Since at its core it relies on an iterative procedure, the key to its excellent performance lies in the typically very small number of iterations needed to execute the entire algorithm. In a separate paper \cite{Stojnicclupcmpl19}, we presented a detailed complexity analysis that indeed confirms the relatively small number of iterations. Since both papers, \cite{Stojnicclupint19} and \cite{Stojnicclupcmpl19}, are introductory papers on the topic, we made sure to limit the initial discussion to the core of the algorithm and consequently focused only on the algorithm's most basic version. On numerous occasions, though, we emphasized that various improvements and further upgrades are possible. In this paper we present a first step in this direction and discuss a very simple upgrade that can be introduced on top of the basic CLuP mechanism. It relates to the starting point of CLuP and suggests the well-known polytope-relaxation heuristic (see, e.g., \cite{StojnicBBSD05,StojnicBBSD08}) as the starting point. We refer to this variant of CLuP as CLuP-plt and proceed with the presentation of its complexity analysis. As in \cite{Stojnicclupcmpl19}, a particular \textbf{\emph{complexity analysis per iteration level}} type of complexity analysis is chosen and presented through the algorithm's application to the well-known MIMO ML detection problem. As expected, the analysis confirms that CLuP-plt performs even better than the original CLuP. In some of the most interesting regimes it often achieves an excellent performance within the \textbf{\emph{first three iterations}}. We also complement the theoretical findings with a solid set of numerical experiments.
[ { "created": "Tue, 3 Sep 2019 14:13:29 GMT", "version": "v1" } ]
2019-09-04
[ [ "Stojnic", "Mihailo", "" ] ]
The Controlled Loosening-up (CLuP) mechanism that we recently introduced in \cite{Stojnicclupint19} is a generic concept that can be utilized to solve a large class of problems in polynomial time. Since at its core it relies on an iterative procedure, the key to its excellent performance lies in the typically very small number of iterations needed to execute the entire algorithm. In a separate paper \cite{Stojnicclupcmpl19}, we presented a detailed complexity analysis that indeed confirms the relatively small number of iterations. Since both papers, \cite{Stojnicclupint19} and \cite{Stojnicclupcmpl19}, are introductory papers on the topic, we made sure to limit the initial discussion to the core of the algorithm and consequently focused only on the algorithm's most basic version. On numerous occasions, though, we emphasized that various improvements and further upgrades are possible. In this paper we present a first step in this direction and discuss a very simple upgrade that can be introduced on top of the basic CLuP mechanism. It relates to the starting point of CLuP and suggests the well-known polytope-relaxation heuristic (see, e.g., \cite{StojnicBBSD05,StojnicBBSD08}) as the starting point. We refer to this variant of CLuP as CLuP-plt and proceed with the presentation of its complexity analysis. As in \cite{Stojnicclupcmpl19}, a particular \textbf{\emph{complexity analysis per iteration level}} type of complexity analysis is chosen and presented through the algorithm's application to the well-known MIMO ML detection problem. As expected, the analysis confirms that CLuP-plt performs even better than the original CLuP. In some of the most interesting regimes it often achieves an excellent performance within the \textbf{\emph{first three iterations}}. We also complement the theoretical findings with a solid set of numerical experiments.
2301.02654
Song Bian
Song Bian, Dacheng Li, Hongyi Wang, Eric P. Xing, Shivaram Venkataraman
Does compressing activations help model parallel training?
16 pages, 5 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale Transformer models are known for their exceptional performance in a range of tasks, but training them can be difficult due to the requirement for communication-intensive model parallelism. One way to improve training speed is to compress the message size in communication. Previous approaches have primarily focused on compressing gradients in a data parallelism setting, but compression in a model-parallel setting is an understudied area. We have discovered that model parallelism has fundamentally different characteristics than data parallelism. In this work, we present the first empirical study on the effectiveness of compression methods for model parallelism. We implement and evaluate three common classes of compression algorithms - pruning-based, learning-based, and quantization-based - using a popular Transformer training framework. We evaluate these methods across more than 160 settings and 8 popular datasets, taking into account different hyperparameters, hardware, and both fine-tuning and pre-training stages. We also provide analysis when the model is scaled up. Finally, we provide insights for future development of model parallelism compression algorithms.
[ { "created": "Fri, 6 Jan 2023 18:58:09 GMT", "version": "v1" } ]
2023-01-09
[ [ "Bian", "Song", "" ], [ "Li", "Dacheng", "" ], [ "Wang", "Hongyi", "" ], [ "Xing", "Eric P.", "" ], [ "Venkataraman", "Shivaram", "" ] ]
Large-scale Transformer models are known for their exceptional performance in a range of tasks, but training them can be difficult due to the requirement for communication-intensive model parallelism. One way to improve training speed is to compress the message size in communication. Previous approaches have primarily focused on compressing gradients in a data parallelism setting, but compression in a model-parallel setting is an understudied area. We have discovered that model parallelism has fundamentally different characteristics than data parallelism. In this work, we present the first empirical study on the effectiveness of compression methods for model parallelism. We implement and evaluate three common classes of compression algorithms - pruning-based, learning-based, and quantization-based - using a popular Transformer training framework. We evaluate these methods across more than 160 settings and 8 popular datasets, taking into account different hyperparameters, hardware, and both fine-tuning and pre-training stages. We also provide analysis when the model is scaled up. Finally, we provide insights for future development of model parallelism compression algorithms.
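Among the three classes of compression algorithms mentioned, quantization is the simplest to sketch: the activation tensor passed between model-parallel stages is mapped to a low-bit grid and dequantized on the receiving side. The per-tensor uniform scheme below is a generic illustration, not one of the paper's evaluated methods.

import torch

def quantize_dequantize(t, bits=8):
    # uniform per-tensor quantization of an activation tensor, then dequantization
    lo, hi = t.min(), t.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = torch.round((t - lo) / scale)
    return q * scale + lo

acts = torch.randn(4, 1024)                   # activations crossing a stage boundary
recovered = quantize_dequantize(acts, bits=4)
print("payload: %.0f%% of fp32" % (100 * 4 / 32))
print("MSE introduced:", torch.mean((recovered - acts) ** 2).item())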
2401.00445
Chenxi Zhao
Chenxi Zhao, Min Sheng, Junyu Liu, Tianshu Chu, Jiandong Li
Energy-Efficient Power Control for Multiple-Task Split Inference in UAVs: A Tiny Learning-Based Approach
null
null
null
null
cs.LG cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The limited energy and computing resources of unmanned aerial vehicles (UAVs) hinder the application of aerial artificial intelligence. The utilization of split inference in UAVs garners significant attention due to its effectiveness in mitigating computing and energy requirements. However, achieving energy-efficient split inference in UAVs remains complex considering various crucial parameters such as energy level and delay constraints, especially when multiple tasks are involved. In this paper, we present a two-timescale approach for energy minimization in split inference, where discrete and continuous variables are segregated into two timescales to reduce the size of the action space and the computational complexity. This segregation enables the utilization of tiny reinforcement learning (TRL) for selecting discrete transmission modes for sequential tasks. Moreover, optimization programming (OP) is embedded between TRL's output and the reward function to optimize the continuous transmit power. Specifically, we replace the optimization of transmit power with that of transmission time to decrease the computational complexity of OP, since we reveal that energy consumption monotonically decreases with increasing transmission time. The replacement significantly reduces the feasible region and enables a fast solution according to the closed-form expression for the optimal transmit power. Simulation results show that the proposed algorithm can achieve a higher probability of successful task completion with lower energy consumption.
[ { "created": "Sun, 31 Dec 2023 10:16:59 GMT", "version": "v1" } ]
2024-01-02
[ [ "Zhao", "Chenxi", "" ], [ "Sheng", "Min", "" ], [ "Liu", "Junyu", "" ], [ "Chu", "Tianshu", "" ], [ "Li", "Jiandong", "" ] ]
The limited energy and computing resources of unmanned aerial vehicles (UAVs) hinder the application of aerial artificial intelligence. The utilization of split inference in UAVs garners significant attention due to its effectiveness in mitigating computing and energy requirements. However, achieving energy-efficient split inference in UAVs remains complex considering various crucial parameters such as energy level and delay constraints, especially when multiple tasks are involved. In this paper, we present a two-timescale approach for energy minimization in split inference, where discrete and continuous variables are segregated into two timescales to reduce the size of the action space and the computational complexity. This segregation enables the utilization of tiny reinforcement learning (TRL) for selecting discrete transmission modes for sequential tasks. Moreover, optimization programming (OP) is embedded between TRL's output and the reward function to optimize the continuous transmit power. Specifically, we replace the optimization of transmit power with that of transmission time to decrease the computational complexity of OP, since we reveal that energy consumption monotonically decreases with increasing transmission time. The replacement significantly reduces the feasible region and enables a fast solution according to the closed-form expression for the optimal transmit power. Simulation results show that the proposed algorithm can achieve a higher probability of successful task completion with lower energy consumption.
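The monotonicity claim -- that energy consumption decreases as transmission time grows -- can be checked numerically under a standard Shannon-rate energy model, which is an assumption here since the paper's exact model is not reproduced in the abstract: sending b bits in time t over bandwidth W requires power P = N0*W*(2^(b/(W*t)) - 1), so the energy is E(t) = t*P.

import numpy as np

N0, W, b = 1e-9, 1e6, 1e6                    # noise density, bandwidth, payload (illustrative)
t = np.linspace(0.1, 5.0, 100)               # candidate transmission times
E = t * N0 * W * (2 ** (b / (W * t)) - 1)    # energy under the Shannon-rate model
print(np.all(np.diff(E) < 0))                # True: longer transmission time, less energy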
1512.06338
Yinglei Song
Yinglei Song
Lower Bounds for the Domination Numbers of Connected Graphs without Short Cycles
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we obtain lower bounds for the domination numbers of connected graphs with girth at least $7$. We show that the domination number of a connected graph with girth at least $7$ is either $1$ or at least $\frac{1}{2}(3+\sqrt{8(m-n)+9})$, where $n$ is the number of vertices in the graph and $m$ is the number of edges in the graph. For graphs with minimum degree $2$ and girth at least $7$, the lower bound can be improved to $\max{\{\sqrt{n}, \sqrt{\frac{2m}{3}}\}}$, where $n$ and $m$ are the numbers of vertices and edges in the graph respectively. In cases where the graph is of minimum degree $2$ and its girth $g$ is at least $12$, the lower bound can be further improved to $\max{\{\sqrt{n}, \sqrt{\frac{\lfloor \frac{g}{3} \rfloor-1}{3}m}\}}$.
[ { "created": "Sun, 20 Dec 2015 09:12:46 GMT", "version": "v1" }, { "created": "Mon, 4 Jan 2016 10:49:17 GMT", "version": "v2" } ]
2016-01-05
[ [ "Song", "Yinglei", "" ] ]
In this paper, we obtain lower bounds for the domination numbers of connected graphs with girth at least $7$. We show that the domination number of a connected graph with girth at least $7$ is either $1$ or at least $\frac{1}{2}(3+\sqrt{8(m-n)+9})$, where $n$ is the number of vertices in the graph and $m$ is the number of edges in the graph. For graphs with minimum degree $2$ and girth at least $7$, the lower bound can be improved to $\max{\{\sqrt{n}, \sqrt{\frac{2m}{3}}\}}$, where $n$ and $m$ are the numbers of vertices and edges in the graph respectively. In cases where the graph is of minimum degree $2$ and its girth $g$ is at least $12$, the lower bound can be further improved to $\max{\{\sqrt{n}, \sqrt{\frac{\lfloor \frac{g}{3} \rfloor-1}{3}m}\}}$.
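The bound can be checked on a small example with a brute-force computation of the domination number; for the 7-cycle (girth 7, m = n = 7) the bound (3 + sqrt(8(m-n)+9))/2 = 3 is attained.

from itertools import combinations

def domination_number(n, edges):
    # smallest |S| such that every vertex is in S or adjacent to a vertex of S
    closed_nbrs = {v: {v} for v in range(n)}
    for u, v in edges:
        closed_nbrs[u].add(v)
        closed_nbrs[v].add(u)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if set().union(*(closed_nbrs[v] for v in S)) == set(range(n)):
                return k

cycle7 = [(i, (i + 1) % 7) for i in range(7)]
print(domination_number(7, cycle7))  # 3, matching the lower bound above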
2011.03607
Charlie Dickens
Charlie Dickens
Ridge Regression with Frequent Directions: Statistical and Optimization Perspectives
null
null
null
null
cs.LG cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite its impressive theory \& practical performance, Frequent Directions (\acrshort{fd}) has not been widely adopted for large-scale regression tasks. Prior work has shown that randomized sketches (i) perform worse than \acrshort{fd} in estimating the covariance matrix of the data; and (ii) incur high error when estimating the bias and/or variance of sketched ridge regression. We give the first constant-factor relative error bounds on the bias \& variance of sketched ridge regression using \acrshort{fd}. We complement these statistical results by showing that \acrshort{fd} can be used in the optimization setting through an iterative scheme which yields high-accuracy solutions. This improves on randomized approaches, which must trade off the need to compute a new sketch at every iteration against the speed of convergence. In both settings, we also show that using \emph{Robust Frequent Directions} further enhances performance.
[ { "created": "Fri, 6 Nov 2020 21:40:38 GMT", "version": "v1" } ]
2020-11-10
[ [ "Dickens", "Charlie", "" ] ]
Despite its impressive theory \& practical performance, Frequent Directions (\acrshort{fd}) has not been widely adopted for large-scale regression tasks. Prior work has shown that randomized sketches (i) perform worse than \acrshort{fd} in estimating the covariance matrix of the data; and (ii) incur high error when estimating the bias and/or variance of sketched ridge regression. We give the first constant-factor relative error bounds on the bias \& variance of sketched ridge regression using \acrshort{fd}. We complement these statistical results by showing that \acrshort{fd} can be used in the optimization setting through an iterative scheme which yields high-accuracy solutions. This improves on randomized approaches, which must trade off the need to compute a new sketch at every iteration against the speed of convergence. In both settings, we also show that using \emph{Robust Frequent Directions} further enhances performance.
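For concreteness, here is a basic Frequent Directions sketch (in the spirit of Liberty's algorithm) plugged into the ridge normal equations by replacing X^T X with B^T B; this is one common way sketched ridge regression is set up, not necessarily the exact estimator or iterative scheme analyzed in the paper.

import numpy as np

def frequent_directions(X, ell):
    # maintain an ell x d sketch B with B^T B approximating X^T X
    n, d = X.shape
    B = np.zeros((ell, d))
    for row in X:
        zero_rows = np.where(~B.any(axis=1))[0]
        if len(zero_rows) == 0:                          # sketch full: shrink via SVD
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt                          # last row becomes zero
            zero_rows = np.where(~B.any(axis=1))[0]
        B[zero_rows[0]] = row
    return B

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))
y = X @ rng.normal(size=40) + rng.normal(size=5000)
lam, ell = 1.0, 20
B = frequent_directions(X, ell)
w_fd = np.linalg.solve(B.T @ B + lam * np.eye(40), X.T @ y)     # sketched ridge
w_exact = np.linalg.solve(X.T @ X + lam * np.eye(40), X.T @ y)  # exact ridge
print("relative error:", np.linalg.norm(w_fd - w_exact) / np.linalg.norm(w_exact))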
2407.01007
Tinghui Zhao
Huijie Fan, Tinghui Zhao, Qiang Wang, Baojie Fan, Yandong Tang, LianQing Liu
GMT: A Robust Global Association Model for Multi-Target Multi-Camera Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the task of multi-target multi-camera (MTMC) tracking of pedestrians, the data association problem is a key issue and main challenge, especially with complications arising from camera movements, lighting variations, and obstructions. However, most MTMC models adopt two-step approaches, thus heavily depending on the results of the first-step tracking in practical applications. Moreover, the same targets crossing different cameras may exhibit significant appearance variations, which further increases the difficulty of cross-camera matching. To address the aforementioned issues, we propose a global online MTMC tracking model that addresses the dependency on the first tracking stage in two-step methods and enhances cross-camera matching. Specifically, we propose a transformer-based global MTMC association module to explore target associations across different cameras and frames, generating global trajectories directly. Additionally, to integrate the appearance and spatio-temporal features of targets, we propose a feature extraction and fusion module for MTMC tracking. This module enhances feature representation and establishes correlations between the features of targets across multiple cameras. To accommodate high scene diversity and complex lighting condition variations, we have established the VisionTrack dataset, which enables the development of models that are more generalized and robust to various environments. Our model demonstrates significant improvements over comparison methods on the VisionTrack dataset and others.
[ { "created": "Mon, 1 Jul 2024 06:39:14 GMT", "version": "v1" } ]
2024-07-02
[ [ "Fan", "Huijie", "" ], [ "Zhao", "Tinghui", "" ], [ "Wang", "Qiang", "" ], [ "Fan", "Baojie", "" ], [ "Tang", "Yandong", "" ], [ "Liu", "LianQing", "" ] ]
In the task of multi-target multi-camera (MTMC) tracking of pedestrians, the data association problem is a key issue and the main challenge, especially with complications arising from camera movements, lighting variations, and obstructions. However, most MTMC models adopt two-step approaches, thus depending heavily on the results of the first-step tracking in practical applications. Moreover, the same targets crossing different cameras may exhibit significant appearance variations, which further increases the difficulty of cross-camera matching. To address the aforementioned issues, we propose a global online MTMC tracking model that removes the dependency on the first tracking stage found in two-step methods and enhances cross-camera matching. Specifically, we propose a transformer-based global MTMC association module to explore target associations across different cameras and frames, generating global trajectories directly. Additionally, to integrate the appearance and spatio-temporal features of targets, we propose a feature extraction and fusion module for MTMC tracking. This module enhances feature representation and establishes correlations between the features of targets across multiple cameras. To accommodate high scene diversity and complex variations in lighting conditions, we have established the VisionTrack dataset, which enables the development of models that are more generalized and robust to various environments. Our model demonstrates significant improvements over comparison methods on the VisionTrack dataset and others.
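The paper's transformer-based association module cannot be reconstructed from this metadata, but the data-association step it improves on can be illustrated with a classical stand-in: cosine-similarity appearance matching solved with the Hungarian algorithm. This is a hedged baseline sketch, not the paper's method; `track_feats` and `det_feats` are assumed re-identification embeddings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, sim_thresh=0.5):
    """Baseline cross-camera association: maximize total cosine similarity
    between existing track embeddings and new detection embeddings."""
    T = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    D = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    sim = T @ D.T                                  # (n_tracks, n_dets)
    rows, cols = linear_sum_assignment(-sim)       # Hungarian, maximizing
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_thresh]
```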
2205.12515
Yi Wan
Yi Wan, Richard S. Sutton
Toward Discovering Options that Achieve Faster Planning
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new objective for option discovery that emphasizes the computational advantage of using options in planning. On a sequential machine, the speed of planning is proportional to the number of elementary operations used to achieve a good policy. For episodic tasks, the number of elementary operations depends on the number of options composed by the policy in an episode and the number of options being considered at each decision point. To reduce the amount of computation in planning, for a given set of episodic tasks and a given number of options, our objective prefers options with which it is possible to achieve a high return by composing few options, and also prefers a smaller set of options to choose from at each decision point. We develop an algorithm that optimizes the proposed objective. In a variant of the classic four-room domain, we show that 1) a higher objective value is typically associated with fewer elementary planning operations used by the option-value iteration algorithm to obtain a near-optimal value function, 2) our algorithm achieves an objective value that matches the one achieved by two human-designed options, 3) the amount of computation used by option-value iteration with the options discovered by our algorithm matches that with the human-designed options, and 4) the options produced by our algorithm also make intuitive sense--they seem to move to and terminate at the entrances of rooms.
[ { "created": "Wed, 25 May 2022 06:10:10 GMT", "version": "v1" }, { "created": "Thu, 29 Sep 2022 23:30:44 GMT", "version": "v2" } ]
2022-10-03
[ [ "Wan", "Yi", "" ], [ "Sutton", "Richard S.", "" ] ]
We propose a new objective for option discovery that emphasizes the computational advantage of using options in planning. On a sequential machine, the speed of planning is proportional to the number of elementary operations used to achieve a good policy. For episodic tasks, the number of elementary operations depends on the number of options composed by the policy in an episode and the number of options being considered at each decision point. To reduce the amount of computation in planning, for a given set of episodic tasks and a given number of options, our objective prefers options with which it is possible to achieve a high return by composing few options, and also prefers a smaller set of options to choose from at each decision point. We develop an algorithm that optimizes the proposed objective. In a variant of the classic four-room domain, we show that 1) a higher objective value is typically associated with fewer elementary planning operations used by the option-value iteration algorithm to obtain a near-optimal value function, 2) our algorithm achieves an objective value that matches the one achieved by two human-designed options, 3) the amount of computation used by option-value iteration with the options discovered by our algorithm matches that with the human-designed options, and 4) the options produced by our algorithm also make intuitive sense--they seem to move to and terminate at the entrances of rooms.
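A minimal sketch of SMDP option-value iteration clarifies what "elementary planning operations" are being counted; the discounted option models `P` and `R` are hypothetical inputs (standard option models in the sense of Sutton, Precup & Singh 1999), not anything specified in this record.

```python
import numpy as np

def option_value_iteration(P, R, n_sweeps=100):
    """SMDP value iteration over options.
    P[o, s, s']: discounted transition model of option o (gamma^tau folded in).
    R[o, s]:     expected discounted return while executing o from s.
    Each backup Q[o, s] = R[o, s] + sum_s' P[o, s, s'] * V[s'] is the kind of
    elementary operation the proposed objective tries to reduce."""
    V = np.zeros(P.shape[1])
    for _ in range(n_sweeps):
        Q = R + P @ V        # batched backups over all options and states
        V = Q.max(axis=0)    # fewer options per decision point => cheaper max
    return V
```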
2109.11782
Nithin Nagaraj
Abhsihek Nandekar, Preeth Khona, Rajani M. B., Anindya Sinha, Nithin Nagaraj
Causal Analysis of Carnatic Music: A Preliminary Study
22 pages, 12 figures
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
The musicological analysis of Carnatic music is challenging, owing to its rich structure and complexity. Automated \textit{r\=aga} classification, pitch detection, tonal analysis, modelling and information retrieval of this form of southern Indian classical music have, however, made significant progress in recent times. A causal analysis to investigate the musicological structure of Carnatic compositions and the identification of the relationships embedded in them have never been previously attempted. In this study, we propose a novel framework for causal discovery, using a compression-complexity measure. Owing to the limited number of compositions available, however, we generated surrogates to further facilitate the analysis of the prevailing causal relationships. Our analysis indicates that the context-free grammar, inferred from more complex compositions, such as the \textit{M\=e\d{l}akarta} \textit{r\=aga}, is a \textit{structural cause} for the \textit{Janya} \textit{r\=aga}. We also analyse certain special cases of the \textit{Janya r\=aga} in order to understand their origins and structure better.
[ { "created": "Fri, 24 Sep 2021 07:24:12 GMT", "version": "v1" } ]
2021-09-27
[ [ "Nandekar", "Abhsihek", "" ], [ "Khona", "Preeth", "" ], [ "B.", "Rajani M.", "" ], [ "Sinha", "Anindya", "" ], [ "Nagaraj", "Nithin", "" ] ]
The musicological analysis of Carnatic music is challenging, owing to its rich structure and complexity. Automated \textit{r\=aga} classification, pitch detection, tonal analysis, modelling and information retrieval of this form of southern Indian classical music have, however, made significant progress in recent times. A causal analysis to investigate the musicological structure of Carnatic compositions and the identification of the relationships embedded in them have never been previously attempted. In this study, we propose a novel framework for causal discovery, using a compression-complexity measure. Owing to the limited number of compositions available, however, we generated surrogates to further facilitate the analysis of the prevailing causal relationships. Our analysis indicates that the context-free grammar, inferred from more complex compositions, such as the \textit{M\=e\d{l}akarta} \textit{r\=aga}, is a \textit{structural cause} for the \textit{Janya} \textit{r\=aga}. We also analyse certain special cases of the \textit{Janya r\=aga} in order to understand their origins and structure better.
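The specific compression-complexity measure is not given here, but the general idea of compression-based causal discovery (does knowing x's past shorten the description of y's future?) admits a toy sketch. Everything below is an assumption-laden illustration: zlib is a crude stand-in for the effort-to-compress style measures the authors likely use, and sequences are assumed to be lists of small integers (e.g., quantized pitch symbols).

```python
import zlib

def complexity(symbols):
    """Crude complexity proxy: length of the zlib-compressed byte string.
    `symbols` must be a list of ints in 0..255."""
    return len(zlib.compress(bytes(symbols)))

def causal_score(x, y, w=32):
    """Positive score suggests x's past helps describe y's future."""
    score = 0
    for t in range(w, min(len(x), len(y)) - w, w):
        past_y, fut_y, past_x = y[t - w:t], y[t:t + w], x[t - w:t]
        dc_self = complexity(past_y + fut_y) - complexity(past_y)
        dc_joint = complexity(past_x + past_y + fut_y) - complexity(past_x + past_y)
        score += dc_self - dc_joint
    return score
```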
2310.20396
EPTCS
Pascal Krapf (Syscience), S\'ebastien Berthier (Syscience), Nicole Levy (CEDRIC-CNAM)
Product Line Management with Graphical MBSE Views
In Proceedings TiCSA 2023, arXiv:2310.18720
EPTCS 392, 2023, pp. 53-65
10.4204/EPTCS.392.4
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Reducing cost and delay and improving quality are major issues for product and software development, especially in the automotive domain. Product line engineering is a well-known approach to engineering systems with the aim of reducing costs and development time as well as improving product quality. Feature models make it possible to perform a logical selection of features and obtain a filtered set of assets that compose the product. We propose to use a color code in feature models to make possible decisions visible in the feature tree. The color code is explained and its use is illustrated. The completeness of the approach is discussed.
[ { "created": "Tue, 31 Oct 2023 12:17:31 GMT", "version": "v1" } ]
2023-11-01
[ [ "Krapf", "Pascal", "", "Syscience" ], [ "Berthier", "Sébastien", "", "Syscience" ], [ "Levy", "Nicole", "", "CEDRIC-CNAM" ] ]
Reducing cost and delay and improving quality are major issues for product and software development, especially in the automotive domain. Product line engineering is a well-known approach to engineering systems with the aim of reducing costs and development time as well as improving product quality. Feature models make it possible to perform a logical selection of features and obtain a filtered set of assets that compose the product. We propose to use a color code in feature models to make possible decisions visible in the feature tree. The color code is explained and its use is illustrated. The completeness of the approach is discussed.
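A minimal sketch of how a colour-coded feature tree might look in code: the status values stand in for the paper's colours (which this record does not specify), and selection decisions propagate down the tree so that every decision becomes visible at every node.

```python
from dataclasses import dataclass, field
from typing import List

SELECTED, EXCLUDED, UNDECIDED = "green", "red", "grey"  # hypothetical colours

@dataclass
class Feature:
    name: str
    mandatory: bool = False
    status: str = UNDECIDED
    children: List["Feature"] = field(default_factory=list)

def propagate(node: Feature) -> None:
    """Push selection decisions down the tree: everything under an excluded
    feature is excluded, mandatory children of a selected feature are
    selected, and remaining children stay undecided (grey)."""
    for child in node.children:
        if node.status == EXCLUDED:
            child.status = EXCLUDED
        elif node.status == SELECTED and child.mandatory:
            child.status = SELECTED
        propagate(child)
```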
2307.10169
Jean Kaddour
Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, Robert McHardy
Challenges and Applications of Large Language Models
72 pages. v01. Work in progress. Feedback and comments are highly appreciated!
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify the remaining challenges and already fruitful application areas. In this paper, we aim to establish a systematic set of open problems and application successes so that ML researchers can comprehend the field's current state more quickly and become productive.
[ { "created": "Wed, 19 Jul 2023 17:55:13 GMT", "version": "v1" } ]
2023-07-20
[ [ "Kaddour", "Jean", "" ], [ "Harris", "Joshua", "" ], [ "Mozes", "Maximilian", "" ], [ "Bradley", "Herbie", "" ], [ "Raileanu", "Roberta", "" ], [ "McHardy", "Robert", "" ] ]
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify the remaining challenges and already fruitful application areas. In this paper, we aim to establish a systematic set of open problems and application successes so that ML researchers can comprehend the field's current state more quickly and become productive.
1909.02165
Nilesh Pandey
Nilesh Pandey and Andreas Savakis
Poly-GAN: Multi-Conditioned GAN for Fashion Synthesis
null
null
null
null
cs.CV cs.GR eess.IV
http://creativecommons.org/licenses/by/4.0/
We present Poly-GAN, a novel conditional GAN architecture that is motivated by Fashion Synthesis, an application where garments are automatically placed on images of human models at an arbitrary pose. Poly-GAN allows conditioning on multiple inputs and is suitable for many tasks, including image alignment, image stitching, and inpainting. Existing methods have a similar pipeline where three different networks are used to first align garments with the human pose, then perform stitching of the aligned garment, and finally refine the results. Poly-GAN is the first instance where a common architecture is used to perform all three tasks. Our novel architecture enforces the conditions at all layers of the encoder and utilizes skip connections from the coarse layers of the encoder to the respective layers of the decoder. Poly-GAN is able to perform a spatial transformation of the garment based on the RGB skeleton of the model at an arbitrary pose. Additionally, Poly-GAN can perform image stitching, regardless of the garment orientation, and inpainting on the garment mask when it contains irregular holes. Our system achieves state-of-the-art quantitative results on the Structural Similarity Index and Inception Score metrics using the DeepFashion dataset.
[ { "created": "Thu, 5 Sep 2019 00:29:39 GMT", "version": "v1" } ]
2019-09-06
[ [ "Pandey", "Nilesh", "" ], [ "Savakis", "Andreas", "" ] ]
We present Poly-GAN, a novel conditional GAN architecture that is motivated by Fashion Synthesis, an application where garments are automatically placed on images of human models at an arbitrary pose. Poly-GAN allows conditioning on multiple inputs and is suitable for many tasks, including image alignment, image stitching, and inpainting. Existing methods have a similar pipeline where three different networks are used to first align garments with the human pose, then perform stitching of the aligned garment, and finally refine the results. Poly-GAN is the first instance where a common architecture is used to perform all three tasks. Our novel architecture enforces the conditions at all layers of the encoder and utilizes skip connections from the coarse layers of the encoder to the respective layers of the decoder. Poly-GAN is able to perform a spatial transformation of the garment based on the RGB skeleton of the model at an arbitrary pose. Additionally, Poly-GAN can perform image stitching, regardless of the garment orientation, and inpainting on the garment mask when it contains irregular holes. Our system achieves state-of-the-art quantitative results on the Structural Similarity Index and Inception Score metrics using the DeepFashion dataset.
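The claim of enforcing conditions at all encoder layers can be made concrete with a hedged PyTorch sketch: the conditioning map (e.g., the RGB pose skeleton) is resized and concatenated before every convolution. The module below is our reading of that idea, not the released architecture; all names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedEncoderBlock(nn.Module):
    """One encoder stage that re-injects the condition at its own
    resolution before convolving and downsampling."""
    def __init__(self, in_ch, cond_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + cond_ch, out_ch, 3, stride=2, padding=1)

    def forward(self, x, cond):
        cond = F.interpolate(cond, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        return F.leaky_relu(self.conv(torch.cat([x, cond], dim=1)), 0.2)
```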
2404.10865
Matthew Inkawhich
Matthew Inkawhich, Nathan Inkawhich, Hao Yang, Jingyang Zhang, Randolph Linderman, Yiran Chen
OSR-ViT: A Simple and Modular Framework for Open-Set Object Detection and Discovery
28 pages, 8 figures, 7 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
An object detector's ability to detect and flag \textit{novel} objects during open-world deployments is critical for many real-world applications. Unfortunately, much of the work in open object detection today is disjointed and fails to adequately address applications that prioritize unknown object recall \textit{in addition to} known-class accuracy. To close this gap, we present a new task called Open-Set Object Detection and Discovery (OSODD) and as a solution propose the Open-Set Regions with ViT features (OSR-ViT) detection framework. OSR-ViT combines a class-agnostic proposal network with a powerful ViT-based classifier. Its modular design simplifies optimization and allows users to easily swap proposal solutions and feature extractors to best suit their application. Using our multifaceted evaluation protocol, we show that OSR-ViT obtains performance levels that far exceed state-of-the-art supervised methods. Our method also excels in low-data settings, outperforming supervised baselines using a fraction of the training data.
[ { "created": "Tue, 16 Apr 2024 19:29:27 GMT", "version": "v1" } ]
2024-04-18
[ [ "Inkawhich", "Matthew", "" ], [ "Inkawhich", "Nathan", "" ], [ "Yang", "Hao", "" ], [ "Zhang", "Jingyang", "" ], [ "Linderman", "Randolph", "" ], [ "Chen", "Yiran", "" ] ]
An object detector's ability to detect and flag \textit{novel} objects during open-world deployments is critical for many real-world applications. Unfortunately, much of the work in open object detection today is disjointed and fails to adequately address applications that prioritize unknown object recall \textit{in addition to} known-class accuracy. To close this gap, we present a new task called Open-Set Object Detection and Discovery (OSODD) and as a solution propose the Open-Set Regions with ViT features (OSR-ViT) detection framework. OSR-ViT combines a class-agnostic proposal network with a powerful ViT-based classifier. Its modular design simplifies optimization and allows users to easily swap proposal solutions and feature extractors to best suit their application. Using our multifaceted evaluation protocol, we show that OSR-ViT obtains performance levels that far exceed state-of-the-art supervised methods. Our method also excels in low-data settings, outperforming supervised baselines using a fraction of the training data.
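The modular two-stage design described here lends itself to a short sketch: class-agnostic proposals feed a classifier whose low-confidence outputs are flagged as unknown. `proposer` and `classifier` are placeholder callables standing in for the swappable components the abstract mentions; the unknown-flagging rule below (max-softmax thresholding) is a common baseline, not necessarily the paper's.

```python
import numpy as np

def crop(image, box):
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

def osodd_pipeline(image, proposer, classifier, tau=0.5):
    """Open-set detection sketch: for each class-agnostic proposal,
    classify the crop; flag low-confidence crops as unknown objects."""
    detections = []
    for box in proposer(image):
        probs = classifier(crop(image, box))   # known-class posterior (ndarray)
        label = int(np.argmax(probs)) if probs.max() >= tau else "unknown"
        detections.append((box, label))
    return detections
```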
1812.09433
Dong Zhao
Dong Zhao, Huadong Ma and Xinna Ji
Generalized Lottery Trees: Budget-Consistent Incentive Tree Mechanisms for Crowdsourcing
14 pages, 22 figures
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incentive mechanism design has attracted extensive attention for crowdsourcing applications in recent years. Most research assumes that participants are already in the system and aware of the existence of crowdsourcing tasks. In real-life scenarios where this assumption does not hold, it is more effective to leverage incentive tree mechanisms that incentivize both users' direct contributions and their solicitations of other users. Although some such mechanisms have been investigated, we are the first to propose budget-consistent incentive tree mechanisms, called generalized lottrees, which require the total payout to all participants to be consistent with the announced budget, while guaranteeing several other desirable properties, including continuing contribution incentive, continuing solicitation incentive, value proportional to contribution, unprofitable solicitor bypassing, and unprofitable sybil attack. Moreover, we present three types of generalized lottree mechanisms, 1-Pachira, K-Pachira, and Sharing-Pachira, which support more diversified requirements. Solid theoretical guidance on mechanism selection is also provided, based on the Cumulative Prospect Theory. Both extensive simulations and realistic experiments with 82 users have been conducted to confirm our theoretical analysis.
[ { "created": "Sat, 22 Dec 2018 02:02:46 GMT", "version": "v1" } ]
2018-12-27
[ [ "Zhao", "Dong", "" ], [ "Ma", "Huadong", "" ], [ "Ji", "Xinna", "" ] ]
Incentive mechanism design has attracted extensive attention for crowdsourcing applications in recent years. Most research assumes that participants are already in the system and aware of the existence of crowdsourcing tasks. In real-life scenarios where this assumption does not hold, it is more effective to leverage incentive tree mechanisms that incentivize both users' direct contributions and their solicitations of other users. Although some such mechanisms have been investigated, we are the first to propose budget-consistent incentive tree mechanisms, called generalized lottrees, which require the total payout to all participants to be consistent with the announced budget, while guaranteeing several other desirable properties, including continuing contribution incentive, continuing solicitation incentive, value proportional to contribution, unprofitable solicitor bypassing, and unprofitable sybil attack. Moreover, we present three types of generalized lottree mechanisms, 1-Pachira, K-Pachira, and Sharing-Pachira, which support more diversified requirements. Solid theoretical guidance on mechanism selection is also provided, based on the Cumulative Prospect Theory. Both extensive simulations and realistic experiments with 82 users have been conducted to confirm our theoretical analysis.
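The Pachira mechanisms themselves are not specified in this record, but the budget-consistency property can be illustrated generically: weight each participant by their own contribution plus a discounted credit for their solicitation subtree, then split the fixed budget in proportion, so the total payout equals the budget by construction. The weighting rule below is illustrative only, not the paper's.

```python
def payouts(tree, contrib, budget, root, beta=0.5):
    """tree: dict node -> list of solicited children; contrib: dict node ->
    contribution; `root` is the task organizer's entry point. Returns a
    dict node -> payout whose values sum to `budget` exactly."""
    weight = {}

    def subtree_weight(v):
        w = contrib[v] + beta * sum(subtree_weight(c) for c in tree.get(v, []))
        weight[v] = w
        return w

    subtree_weight(root)
    total = sum(weight.values())
    return {v: budget * w / total for v, w in weight.items()}
```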
1401.3492
Frank Hutter
Frank Hutter, Thomas Stuetzle, Kevin Leyton-Brown, Holger H. Hoos
ParamILS: An Automatic Algorithm Configuration Framework
null
Journal Of Artificial Intelligence Research, Volume 36, pages 267-306, 2009
10.1613/jair.2861
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm's performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements.
[ { "created": "Wed, 15 Jan 2014 05:40:11 GMT", "version": "v1" } ]
2014-01-16
[ [ "Hutter", "Frank", "" ], [ "Stuetzle", "Thomas", "" ], [ "Leyton-Brown", "Kevin", "" ], [ "Hoos", "Holger H.", "" ] ]
The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm's performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements.
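A minimal iterated-local-search skeleton in the spirit of this family of configurators (one-exchange local search, random perturbation, improvement-based acceptance) is sketched below; the adaptive capping and instance-sampling machinery the paper describes are omitted, and `neighbors`/`cost` are placeholder callables (`neighbors` must return a list of configurations, `cost` an empirical performance estimate).

```python
import random

def iterated_local_search(init, neighbors, cost, n_rounds=100, pert_steps=3):
    """Basic ILS over configurations: descend to a local optimum, perturb,
    descend again, and keep the candidate if it is no worse."""
    def local_search(c):
        improved = True
        while improved:
            improved = False
            for nb in neighbors(c):            # one-exchange neighbourhood
                if cost(nb) < cost(c):
                    c, improved = nb, True
                    break
        return c

    incumbent = local_search(init)
    for _ in range(n_rounds):
        cand = incumbent
        for _ in range(pert_steps):            # perturbation: random steps
            cand = random.choice(neighbors(cand))
        cand = local_search(cand)
        if cost(cand) <= cost(incumbent):      # accept if no worse
            incumbent = cand
    return incumbent
```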
2405.03615
Razieh Sheikhpour
Farid Saberi-Movahed, Kamal Berahman, Razieh Sheikhpour, Yuefeng Li, Shirui Pan
Nonnegative Matrix Factorization in Dimensionality Reduction: A Survey
10 pages, 2 figures, to appear in ACM Computing Surveys
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Dimensionality Reduction plays a pivotal role in improving feature learning accuracy and reducing training time by eliminating redundant features, noise, and irrelevant data. Nonnegative Matrix Factorization (NMF) has emerged as a popular and powerful method for dimensionality reduction. Despite its extensive use, there remains a need for a comprehensive analysis of NMF in the context of dimensionality reduction. To address this gap, this paper presents a comprehensive survey of NMF, focusing on its applications in both feature extraction and feature selection. We introduce a classification of dimensionality reduction, enhancing understanding of the underlying concepts. Subsequently, we delve into a thorough summary of diverse NMF approaches used for feature extraction and selection. Furthermore, we discuss the latest research trends and potential future directions of NMF in dimensionality reduction, aiming to highlight areas that need further exploration and development.
[ { "created": "Mon, 6 May 2024 16:32:01 GMT", "version": "v1" } ]
2024-05-07
[ [ "Saberi-Movahed", "Farid", "" ], [ "Berahman", "Kamal", "" ], [ "Sheikhpour", "Razieh", "" ], [ "Li", "Yuefeng", "" ], [ "Pan", "Shirui", "" ] ]
Dimensionality Reduction plays a pivotal role in improving feature learning accuracy and reducing training time by eliminating redundant features, noise, and irrelevant data. Nonnegative Matrix Factorization (NMF) has emerged as a popular and powerful method for dimensionality reduction. Despite its extensive use, there remains a need for a comprehensive analysis of NMF in the context of dimensionality reduction. To address this gap, this paper presents a comprehensive survey of NMF, focusing on its applications in both feature extraction and feature selection. We introduce a classification of dimensionality reduction, enhancing understanding of the underlying concepts. Subsequently, we delve into a thorough summary of diverse NMF approaches used for feature extraction and selection. Furthermore, we discuss the latest research trends and potential future directions of NMF in dimensionality reduction, aiming to highlight areas that need further exploration and development.
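For readers new to NMF, the classic multiplicative-update algorithm (Lee & Seung, 2001) that underlies much of the surveyed work fits in a few lines of NumPy; columns of W act as learned parts and rows of H as the reduced representation.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF: factor a nonnegative X (n x m) as
    W (n x k) @ H (k x m), minimizing the Frobenius reconstruction error."""
    n, m = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```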
2107.06990
Tazin Afrin
Tazin Afrin, Elaine Wang, Diane Litman, Lindsay C. Matsumura, Richard Correnti
Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing
10 pages, 11 tables, 15th Workshop on Innovative Use of NLP for Building Educational Applications
null
10.18653/v1/2020.bea-1.7
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Automated writing evaluation systems can improve students' writing insofar as students attend to the feedback provided and revise their essay drafts in ways aligned with such feedback. Existing research on revision of argumentative writing in such systems, however, has focused on the types of revisions students make (e.g., surface vs. content) rather than the extent to which revisions actually respond to the feedback provided and improve the essay. We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning (the `RER' scheme) and apply it to 5th- and 6th-grade students' argumentative essays. We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement in line with the feedback provided. Furthermore, we explore the feasibility of automatically classifying revisions according to our scheme.
[ { "created": "Wed, 14 Jul 2021 20:58:26 GMT", "version": "v1" } ]
2021-07-16
[ [ "Afrin", "Tazin", "" ], [ "Wang", "Elaine", "" ], [ "Litman", "Diane", "" ], [ "Matsumura", "Lindsay C.", "" ], [ "Correnti", "Richard", "" ] ]
Automated writing evaluation systems can improve students' writing insofar as students attend to the feedback provided and revise their essay drafts in ways aligned with such feedback. Existing research on revision of argumentative writing in such systems, however, has focused on the types of revisions students make (e.g., surface vs. content) rather than the extent to which revisions actually respond to the feedback provided and improve the essay. We introduce an annotation scheme to capture the nature of sentence-level revisions of evidence use and reasoning (the `RER' scheme) and apply it to 5th- and 6th-grade students' argumentative essays. We show that reliable manual annotation can be achieved and that revision annotations correlate with a holistic assessment of essay improvement in line with the feedback provided. Furthermore, we explore the feasibility of automatically classifying revisions according to our scheme.
1001.1974
Rdv Ijcsis
Malik Sikandar Hayat Khiyal, Aihab Khan, Sehrish Amjad, M. Shahid Khalil
Evaluating Effectiveness of Tamper Proofing on Dynamic Graph Software Watermarks
7 pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS December 2009, ISSN 1947 5500, http://sites.google.com/site/ijcsis/
International Journal of Computer Science and Information Security, IJCSIS, Vol. 6, No. 3, pp. 057-063, December 2009, USA
null
Volume 6, No. 3, ISSN 1947 5500
cs.MM cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To enhance the protection level of dynamic graph software watermarks, and to analyse the effect of integrating two software protection techniques, namely software watermarking and tamper proofing, a constant encoding technique enhanced with the idea of constant splitting is proposed. In this paper, Thomborson's technique has been implemented with a constant-breaking scheme that enables all constants to be encoded, regardless of their values relative to the value of the watermark tree. The experimental analysis conducted and reported in this paper concludes that the constant encoding process significantly increases code size, heap space usage, and execution time, while making the tamper-proofed code resilient to a variety of semantics-preserving program transformation attacks.
[ { "created": "Tue, 12 Jan 2010 18:34:24 GMT", "version": "v1" } ]
2010-01-13
[ [ "Khiyal", "Malik Sikandar Hayat", "" ], [ "Khan", "Aihab", "" ], [ "Amjad", "Sehrish", "" ], [ "Khalil", "M. Shahid", "" ] ]
To enhance the protection level of dynamic graph software watermarks, and to analyse the effect of integrating two software protection techniques, namely software watermarking and tamper proofing, a constant encoding technique enhanced with the idea of constant splitting is proposed. In this paper, Thomborson's technique has been implemented with a constant-breaking scheme that enables all constants to be encoded, regardless of their values relative to the value of the watermark tree. The experimental analysis conducted and reported in this paper concludes that the constant encoding process significantly increases code size, heap space usage, and execution time, while making the tamper-proofed code resilient to a variety of semantics-preserving program transformation attacks.
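A toy illustration of the constant-splitting idea: a literal is replaced by random shares whose sum recovers it, so no single share reveals the constant, and the recombination can then be bound to watermark-tree values at run time (the binding itself is not sketched here; this is our illustrative reading, not the paper's encoding).

```python
import random

def split_constant(c, parts=3):
    """Replace the literal c with `parts` random integer shares summing
    to c; the program later sums the shares wherever c appeared."""
    shares = [random.randint(-10**6, 10**6) for _ in range(parts - 1)]
    shares.append(c - sum(shares))
    return shares

shares = split_constant(443)
assert sum(shares) == 443
```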
1609.03415
Muhammet Bastan
Muhammet Bastan and S. Saqib Bukhari and Thomas M. Breuel
Active Canny: Edge Detection and Recovery with Open Active Contour Models
null
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an edge detection and recovery framework based on open active contour models (snakelets). This is motivated by the noisy or broken edges output by standard edge detection algorithms, like Canny. The idea is to utilize the local continuity and smoothness cues provided by strong edges and grow them to recover the missing edges. This way, the strong edges are used to recover weak or missing edges by considering the local edge structures, instead of blindly linking them if gradient magnitudes are above some threshold. We initialize short snakelets on the gradient magnitudes or binary edges automatically and then deform and grow them under the influence of gradient vector flow. The output snakelets are able to recover most of the breaks or weak edges, and they provide a smooth edge representation of the image; they can also be used for higher level analysis, like contour segmentation.
[ { "created": "Mon, 12 Sep 2016 14:13:26 GMT", "version": "v1" } ]
2016-09-13
[ [ "Bastan", "Muhammet", "" ], [ "Bukhari", "S. Saqib", "" ], [ "Breuel", "Thomas M.", "" ] ]
We introduce an edge detection and recovery framework based on open active contour models (snakelets). This is motivated by the noisy or broken edges output by standard edge detection algorithms, like Canny. The idea is to utilize the local continuity and smoothness cues provided by strong edges and grow them to recover the missing edges. This way, the strong edges are used to recover weak or missing edges by considering the local edge structures, instead of blindly linking them if gradient magnitudes are above some threshold. We initialize short snakelets on the gradient magnitudes or binary edges automatically and then deform and grow them under the influence of gradient vector flow. The output snakelets are able to recover most of the breaks or weak edges, and they provide a smooth edge representation of the image; they can also be used for higher level analysis, like contour segmentation.
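One deformation step of an open snake can be sketched to show the two competing terms at work: an external force pulling points toward edges and a discrete-curvature term keeping the snakelet smooth. `ext_force` is a placeholder for a field such as gradient vector flow sampled at the points; the growth logic for endpoints is omitted.

```python
import numpy as np

def snakelet_step(pts, ext_force, step=0.5, alpha=0.2):
    """One update of an open snake. pts: (n, 2) point coordinates;
    ext_force: callable mapping (n, 2) -> (n, 2) force vectors.
    Endpoints feel no smoothing constraint, which is what lets
    snakelets extend across gaps in broken edges."""
    pts = np.asarray(pts, dtype=float)
    smooth = np.zeros_like(pts)
    smooth[1:-1] = pts[:-2] - 2.0 * pts[1:-1] + pts[2:]  # discrete curvature
    return pts + step * ext_force(pts) + alpha * smooth
```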
2403.08703
Guido Cera
Davide Guidobene, Guido Cera
Improved Dynamics for the Maximum Common Subgraph Problem
6 pages, 5 figures
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Maximum Common Subgraph (MCS) problem plays a crucial role across various domains, bridging theoretical exploration and practical applications in fields like bioinformatics and social network analysis. Despite its wide applicability, MCS is notoriously challenging and is classified as an NP-Complete (NPC) problem. This study introduces new heuristics aimed at mitigating these challenges through the reformulation of the MCS problem as the Maximum Clique problem and its complement, the Maximum Independent Set problem. Our first heuristic leverages the Motzkin-Straus theorem to reformulate the Maximum Clique Problem as a constrained optimization problem, continuing the work of Pelillo in Replicator Equations, Maximal Cliques, and Graph Isomorphism (1999) with replicator dynamics, and introducing annealed imitation heuristics, as in Dominant Sets and Hierarchical Clustering (Pavan and Pelillo, 2003), to improve the chances of convergence to better local optima. The second technique applies heuristics drawn from strategies for the Maximum Independent Set problem to efficiently reduce graph sizes, as used by Akiwa and Iwata in 2014. This enables faster computation and, in many instances, yields near-optimal solutions. Furthermore, we examine the implementation of both techniques in a single algorithm and find it to be a promising approach. Our techniques were tested on randomly generated Erd\H{o}s-R\'enyi graph pairs. Results indicate the potential for application and a substantial impact on future research directions.
[ { "created": "Wed, 13 Mar 2024 17:08:36 GMT", "version": "v1" }, { "created": "Sun, 24 Mar 2024 20:48:08 GMT", "version": "v2" } ]
2024-03-26
[ [ "Guidobene", "Davide", "" ], [ "Cera", "Guido", "" ] ]
The Maximum Common Subgraph (MCS) problem plays a crucial role across various domains, bridging theoretical exploration and practical applications in fields like bioinformatics and social network analysis. Despite its wide applicability, MCS is notoriously challenging and is classified as an NP-Complete (NPC) problem. This study introduces new heuristics aimed at mitigating these challenges through the reformulation of the MCS problem as the Maximum Clique problem and its complement, the Maximum Independent Set problem. Our first heuristic leverages the Motzkin-Straus theorem to reformulate the Maximum Clique Problem as a constrained optimization problem, continuing the work of Pelillo in Replicator Equations, Maximal Cliques, and Graph Isomorphism (1999) with replicator dynamics, and introducing annealed imitation heuristics, as in Dominant Sets and Hierarchical Clustering (Pavan and Pelillo, 2003), to improve the chances of convergence to better local optima. The second technique applies heuristics drawn from strategies for the Maximum Independent Set problem to efficiently reduce graph sizes, as used by Akiwa and Iwata in 2014. This enables faster computation and, in many instances, yields near-optimal solutions. Furthermore, we examine the implementation of both techniques in a single algorithm and find it to be a promising approach. Our techniques were tested on randomly generated Erd\H{o}s-R\'enyi graph pairs. Results indicate the potential for application and a substantial impact on future research directions.
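The Motzkin-Straus route can be sketched directly: run replicator dynamics on the program max x^T A x over the probability simplex; (strict) local maximizers have support on a maximal clique. The sketch below omits the annealed diagonal regularizer the paper builds on, and the support-threshold heuristic at the end is ours.

```python
import numpy as np

def replicator_clique(A, n_iter=2000, eps=1e-12, seed=0):
    """Replicator dynamics on the Motzkin-Straus program. A is a symmetric
    0/1 adjacency matrix with zero diagonal; returns indices whose mass
    survives, which correspond to a maximal clique at a local maximizer."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n) + 1e-6 * rng.random(n)   # break symmetry
    x /= x.sum()
    for _ in range(n_iter):
        Ax = A @ x
        x_new = x * Ax / max(x @ Ax, eps)            # payoff-monotone update
        if np.abs(x_new - x).sum() < eps:
            break
        x = x_new
    return np.flatnonzero(x > 1.0 / (2 * n))          # support ~ clique vertices
```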
1501.01178
Benjamin Negrevergne
Benjamin Negrevergne and Tias Guns
Constraint-based sequence mining using constraint programming
In Integration of AI and OR Techniques in Constraint Programming (CPAIOR), 2015
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of constraint-based sequence mining is to find sequences of symbols that are included in a large number of input sequences and that satisfy some constraints specified by the user. Many constraints have been proposed in the literature, but a general framework is still missing. We investigate the use of constraint programming as general framework for this task. We first identify four categories of constraints that are applicable to sequence mining. We then propose two constraint programming formulations. The first formulation introduces a new global constraint called exists-embedding. This formulation is the most efficient but does not support one type of constraint. To support such constraints, we develop a second formulation that is more general but incurs more overhead. Both formulations can use the projected database technique used in specialised algorithms. Experiments demonstrate the flexibility towards constraint-based settings and compare the approach to existing methods.
[ { "created": "Tue, 6 Jan 2015 13:47:24 GMT", "version": "v1" }, { "created": "Thu, 8 Jan 2015 13:50:53 GMT", "version": "v2" }, { "created": "Wed, 25 Feb 2015 16:31:27 GMT", "version": "v3" } ]
2015-02-26
[ [ "Negrevergne", "Benjamin", "" ], [ "Guns", "Tias", "" ] ]
The goal of constraint-based sequence mining is to find sequences of symbols that are included in a large number of input sequences and that satisfy some constraints specified by the user. Many constraints have been proposed in the literature, but a general framework is still missing. We investigate the use of constraint programming as general framework for this task. We first identify four categories of constraints that are applicable to sequence mining. We then propose two constraint programming formulations. The first formulation introduces a new global constraint called exists-embedding. This formulation is the most efficient but does not support one type of constraint. To support such constraints, we develop a second formulation that is more general but incurs more overhead. Both formulations can use the projected database technique used in specialised algorithms. Experiments demonstrate the flexibility towards constraint-based settings and compare the approach to existing methods.
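The relation at the heart of the exists-embedding global constraint is just subsequence containment, which has a compact greedy check; the one-liner below uses the shared-iterator idiom, where each `in` test advances the same iterator.

```python
def exists_embedding(pattern, sequence):
    """Does `pattern` occur as a subsequence (embedding) of `sequence`?
    Greedy left-most matching in O(len(sequence))."""
    it = iter(sequence)
    return all(symbol in it for symbol in pattern)

assert exists_embedding("abc", "xaybzc") and not exists_embedding("ba", "ab")
```

A constraint-programming propagator would additionally maintain, for each candidate pattern position, the earliest and latest feasible match positions, but the decision version above is the semantics being enforced.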
2308.14965
Umar Khalid
Umar Khalid, Hasan Iqbal, Saeed Vahidian, Jing Hua, Chen Chen
CEFHRI: A Communication Efficient Federated Learning Framework for Recognizing Industrial Human-Robot Interaction
Accepted in IROS 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-robot interaction (HRI) is a rapidly growing field that encompasses social and industrial applications. Machine learning plays a vital role in industrial HRI by enhancing the adaptability and autonomy of robots in complex environments. However, data privacy is a crucial concern in the interaction between humans and robots, as companies need to protect sensitive data while machine learning algorithms require access to large datasets. Federated Learning (FL) offers a solution by enabling the distributed training of models without sharing raw data. Despite extensive research on FL for tasks such as natural language processing (NLP) and image classification, the question of how to use FL for HRI remains an open research problem. The traditional FL approach involves transmitting large neural network parameter matrices between the server and clients, which can lead to high communication costs and often becomes a bottleneck in FL. This paper proposes a communication-efficient FL framework for human-robot interaction (CEFHRI) to address the challenges of data heterogeneity and communication costs. The framework leverages pre-trained models and introduces a trainable spatiotemporal adapter for video understanding tasks in HRI. Experimental results on three human-robot interaction benchmark datasets (HRI30, InHARD, and COIN) demonstrate the superiority of CEFHRI over full fine-tuning in terms of communication costs. The proposed methodology provides a secure and efficient approach to HRI federated learning, particularly in industrial environments with data privacy concerns and limited communication bandwidth. Our code is available at https://github.com/umarkhalidAI/CEFHRI-Efficient-Federated-Learning.
[ { "created": "Tue, 29 Aug 2023 01:34:33 GMT", "version": "v1" } ]
2023-08-30
[ [ "Khalid", "Umar", "" ], [ "Iqbal", "Hasan", "" ], [ "Vahidian", "Saeed", "" ], [ "Hua", "Jing", "" ], [ "Chen", "Chen", "" ] ]
Human-robot interaction (HRI) is a rapidly growing field that encompasses social and industrial applications. Machine learning plays a vital role in industrial HRI by enhancing the adaptability and autonomy of robots in complex environments. However, data privacy is a crucial concern in the interaction between humans and robots, as companies need to protect sensitive data while machine learning algorithms require access to large datasets. Federated Learning (FL) offers a solution by enabling the distributed training of models without sharing raw data. Despite extensive research on FL for tasks such as natural language processing (NLP) and image classification, the question of how to use FL for HRI remains an open research problem. The traditional FL approach involves transmitting large neural network parameter matrices between the server and clients, which can lead to high communication costs and often becomes a bottleneck in FL. This paper proposes a communication-efficient FL framework for human-robot interaction (CEFHRI) to address the challenges of data heterogeneity and communication costs. The framework leverages pre-trained models and introduces a trainable spatiotemporal adapter for video understanding tasks in HRI. Experimental results on three human-robot interaction benchmark datasets (HRI30, InHARD, and COIN) demonstrate the superiority of CEFHRI over full fine-tuning in terms of communication costs. The proposed methodology provides a secure and efficient approach to HRI federated learning, particularly in industrial environments with data privacy concerns and limited communication bandwidth. Our code is available at https://github.com/umarkhalidAI/CEFHRI-Efficient-Federated-Learning.
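The communication-efficiency argument comes down to federated averaging over the small adapter weights only, while the frozen backbone never crosses the network. A hedged server-side sketch of that aggregation step (the actual adapter design is in the linked repository):

```python
import torch

@torch.no_grad()
def aggregate_adapters(client_states):
    """FedAvg over adapter weights only: each element of `client_states`
    is the state_dict of one client's (small) adapter, so only these
    tensors are communicated, not the backbone."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in keys}
```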
2403.01481
Kinshuk Vasisht
Kinshuk Vasisht, Balaji Ganesan, Vikas Kumar, Vasudha Bhatnagar
Infusing Knowledge into Large Language Models with Contextual Prompts
5 pages, 1 figure, In Proceedings of ICON 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge infusion is a promising method for enhancing Large Language Models for domain-specific NLP tasks rather than pre-training models over large data from scratch. These augmented LLMs typically depend on additional pre-training or knowledge prompts from an existing knowledge graph, which is impractical in many applications. In contrast, knowledge infusion directly from relevant documents is more generalisable and alleviates the need for structured knowledge graphs while also being useful for entities that are usually not found in any knowledge graph. With this motivation, we propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text. Our experiments show the effectiveness of our approach which we evaluate by probing the fine-tuned LLMs.
[ { "created": "Sun, 3 Mar 2024 11:19:26 GMT", "version": "v1" } ]
2024-03-05
[ [ "Vasisht", "Kinshuk", "" ], [ "Ganesan", "Balaji", "" ], [ "Kumar", "Vikas", "" ], [ "Bhatnagar", "Vasudha", "" ] ]
Knowledge infusion is a promising method for enhancing Large Language Models for domain-specific NLP tasks rather than pre-training models over large data from scratch. These augmented LLMs typically depend on additional pre-training or knowledge prompts from an existing knowledge graph, which is impractical in many applications. In contrast, knowledge infusion directly from relevant documents is more generalisable and alleviates the need for structured knowledge graphs while also being useful for entities that are usually not found in any knowledge graph. With this motivation, we propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text. Our experiments show the effectiveness of our approach which we evaluate by probing the fine-tuned LLMs.
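A toy illustration of generating a prompt directly from the input context: pick the sentences sharing the most words with the query and prepend them. This word-overlap heuristic is a stand-in for the paper's actual prompt-generation method; all names below are ours.

```python
def contextual_prompt(document, query, k=3):
    """Build a knowledge prompt from the k document sentences that best
    overlap with the query, then append the query itself."""
    q_words = set(query.lower().split())
    sents = [s.strip() for s in document.split(".") if s.strip()]
    top = sorted(sents, key=lambda s: len(q_words & set(s.lower().split())),
                 reverse=True)[:k]
    return "Context: " + ". ".join(top) + ".\nQuestion: " + query
```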
2401.12751
Wanjuan Su
Wanjuan Su, Chen Zhang, Qingshan Xu, Wenbing Tao
PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surface reconstruction has traditionally relied on the Multi-View Stereo (MVS)-based pipeline, which often suffers from noisy and incomplete geometry. This is because, although MVS has proven effective at recovering scene geometry, especially in locally detailed areas with rich textures, it struggles in areas with low texture and large illumination variations, where photometric consistency is unreliable. Recently, Neural Implicit Surface Reconstruction (NISR), which combines surface rendering and volume rendering techniques and bypasses MVS as an intermediate step, has emerged as a promising alternative to overcome the limitations of traditional pipelines. While NISR has shown impressive results on simple scenes, it remains challenging to recover delicate geometry from uncontrolled real-world scenes, owing to its underconstrained optimization. To this end, we propose the PSDF framework, which resorts to external geometric priors from a pretrained MVS network and internal geometric priors inherent in the NISR model to facilitate high-quality neural implicit surface learning. Specifically, a visibility-aware feature consistency loss and depth prior-assisted sampling based on external geometric priors are introduced. These proposals provide powerful geometric consistency constraints and aid in locating surface intersection points, thereby significantly improving the accuracy and delicacy of NISR reconstructions. Meanwhile, internal prior-guided importance rendering is presented to enhance the fidelity of the reconstructed surface mesh by mitigating the biased rendering issue in NISR. Extensive experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
[ { "created": "Tue, 23 Jan 2024 13:30:43 GMT", "version": "v1" } ]
2024-01-24
[ [ "Su", "Wanjuan", "" ], [ "Zhang", "Chen", "" ], [ "Xu", "Qingshan", "" ], [ "Tao", "Wenbing", "" ] ]
Surface reconstruction has traditionally relied on the Multi-View Stereo (MVS)-based pipeline, which often suffers from noisy and incomplete geometry. This is because, although MVS has proven effective at recovering scene geometry, especially in locally detailed areas with rich textures, it struggles in areas with low texture and large illumination variations, where photometric consistency is unreliable. Recently, Neural Implicit Surface Reconstruction (NISR), which combines surface rendering and volume rendering techniques and bypasses MVS as an intermediate step, has emerged as a promising alternative to overcome the limitations of traditional pipelines. While NISR has shown impressive results on simple scenes, it remains challenging to recover delicate geometry from uncontrolled real-world scenes, owing to its underconstrained optimization. To this end, we propose the PSDF framework, which resorts to external geometric priors from a pretrained MVS network and internal geometric priors inherent in the NISR model to facilitate high-quality neural implicit surface learning. Specifically, a visibility-aware feature consistency loss and depth prior-assisted sampling based on external geometric priors are introduced. These proposals provide powerful geometric consistency constraints and aid in locating surface intersection points, thereby significantly improving the accuracy and delicacy of NISR reconstructions. Meanwhile, internal prior-guided importance rendering is presented to enhance the fidelity of the reconstructed surface mesh by mitigating the biased rendering issue in NISR. Extensive experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
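Depth prior-assisted sampling admits a compact sketch: instead of sampling uniformly along each ray, concentrate the samples in a Gaussian band around the MVS depth estimate. This is our reading of the idea, not the paper's exact scheme; the array shapes and the band width `sigma` are assumptions.

```python
import numpy as np

def depth_prior_samples(rays_o, rays_d, prior_depth, n=16, sigma=0.05):
    """Sample n points per ray near an MVS depth prior.
    rays_o, rays_d: (m, 3) origins and directions; prior_depth: (m,).
    Returns (m, n, 3) sample points, sorted along each ray."""
    t = prior_depth[:, None] + sigma * np.random.randn(len(prior_depth), n)
    t = np.clip(np.sort(t, axis=1), 1e-3, None)   # ordered, strictly positive
    return rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]
```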
2403.05448
Daisuke Mashima
Zhiang Li, Daisuke Mashima, Wen Shei Ong, Ertem Esiner, Zbigniew Kalbarczyk, Ee-Chien Chang
On Practicality of Using ARM TrustZone Trusted Execution Environment for Securing Programmable Logic Controllers
To appear at ACM AsiaCCS 2024
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Programmable logic controllers (PLCs) are crucial devices for implementing automated control in various industrial control systems (ICS), such as smart power grids, water treatment systems, manufacturing, and transportation systems. Owing to their importance, PLCs are often the target of cyber attackers that are aiming at disrupting the operation of ICS, including the nation's critical infrastructure, by compromising the integrity of control logic execution. While a wide range of cybersecurity solutions for ICS have been proposed, they cannot counter strong adversaries with a foothold on the PLC devices, which could manipulate memory, I/O interface, or PLC logic itself. These days, many ICS devices in the market, including PLCs, run on ARM-based processors, and there is a promising security technology called ARM TrustZone, to offer a Trusted Execution Environment (TEE) on embedded devices. Envisioning that such a hardware-assisted security feature becomes available for ICS devices in the near future, this paper investigates the application of the ARM TrustZone TEE technology for enhancing the security of PLC. Our aim is to evaluate the feasibility and practicality of the TEE-based PLCs through the proof-of-concept design and implementation using open-source software such as OP-TEE and OpenPLC. Our evaluation assesses the performance and resource consumption in real-world ICS configurations, and based on the results, we discuss bottlenecks in the OP-TEE secure OS towards a large-scale ICS and desired changes for its application on ICS devices. Our implementation is made available to public for further study and research.
[ { "created": "Fri, 8 Mar 2024 16:55:20 GMT", "version": "v1" } ]
2024-03-11
[ [ "Li", "Zhiang", "" ], [ "Mashima", "Daisuke", "" ], [ "Ong", "Wen Shei", "" ], [ "Esiner", "Ertem", "" ], [ "Kalbarczyk", "Zbigniew", "" ], [ "Chang", "Ee-Chien", "" ] ]
Programmable logic controllers (PLCs) are crucial devices for implementing automated control in various industrial control systems (ICS), such as smart power grids, water treatment systems, manufacturing, and transportation systems. Owing to their importance, PLCs are often the target of cyber attackers that are aiming at disrupting the operation of ICS, including the nation's critical infrastructure, by compromising the integrity of control logic execution. While a wide range of cybersecurity solutions for ICS have been proposed, they cannot counter strong adversaries with a foothold on the PLC devices, which could manipulate memory, I/O interface, or PLC logic itself. These days, many ICS devices in the market, including PLCs, run on ARM-based processors, and there is a promising security technology called ARM TrustZone, to offer a Trusted Execution Environment (TEE) on embedded devices. Envisioning that such a hardware-assisted security feature becomes available for ICS devices in the near future, this paper investigates the application of the ARM TrustZone TEE technology for enhancing the security of PLC. Our aim is to evaluate the feasibility and practicality of the TEE-based PLCs through the proof-of-concept design and implementation using open-source software such as OP-TEE and OpenPLC. Our evaluation assesses the performance and resource consumption in real-world ICS configurations, and based on the results, we discuss bottlenecks in the OP-TEE secure OS towards a large-scale ICS and desired changes for its application on ICS devices. Our implementation is made available to public for further study and research.
2305.17644
Jin Sun
Jin Sun, Xiaoshuang Shi, Zhiyuan Wang, Kaidi Xu, Heng Tao Shen and Xiaofeng Zhu
Caterpillar: A Pure-MLP Architecture with Shifted-Pillars-Concatenation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Modeling in Computer Vision has evolved towards MLPs. Vision MLPs naturally lack local modeling capability, for which the simplest remedy is to combine them with convolutional layers. Convolution, famous for its sliding-window scheme, in turn suffers from the redundancy and low computational efficiency of that scheme. In this paper, we seek to dispense with the windowing scheme and introduce a more elaborate and effective approach to exploiting locality. To this end, we propose a new MLP module, namely Shifted-Pillars-Concatenation (SPC), that consists of two steps: (1) Pillars-Shift, which generates four neighboring maps by shifting the input image along four directions, and (2) Pillars-Concatenation, which applies linear transformations and concatenation on the maps to aggregate local features. The SPC module offers superior local modeling power and performance gains, making it a promising alternative to the convolutional layer. We then build a pure-MLP architecture called Caterpillar by replacing the convolutional layer with the SPC module in a hybrid model of sMLPNet. Extensive experiments show Caterpillar's excellent performance and scalability on both ImageNet-1K and small-scale classification benchmarks.
[ { "created": "Sun, 28 May 2023 06:19:36 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2023 14:06:42 GMT", "version": "v2" } ]
2023-12-01
[ [ "Sun", "Jin", "" ], [ "Shi", "Xiaoshuang", "" ], [ "Wang", "Zhiyuan", "" ], [ "Xu", "Kaidi", "" ], [ "Shen", "Heng Tao", "" ], [ "Zhu", "Xiaofeng", "" ] ]
Modeling in Computer Vision has evolved towards MLPs. Vision MLPs naturally lack local modeling capability, for which the simplest remedy is to combine them with convolutional layers. Convolution, famous for its sliding-window scheme, in turn suffers from the redundancy and low computational efficiency of that scheme. In this paper, we seek to dispense with the windowing scheme and introduce a more elaborate and effective approach to exploiting locality. To this end, we propose a new MLP module, namely Shifted-Pillars-Concatenation (SPC), that consists of two steps: (1) Pillars-Shift, which generates four neighboring maps by shifting the input image along four directions, and (2) Pillars-Concatenation, which applies linear transformations and concatenation on the maps to aggregate local features. The SPC module offers superior local modeling power and performance gains, making it a promising alternative to the convolutional layer. We then build a pure-MLP architecture called Caterpillar by replacing the convolutional layer with the SPC module in a hybrid model of sMLPNet. Extensive experiments show Caterpillar's excellent performance and scalability on both ImageNet-1K and small-scale classification benchmarks.
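The shift-then-mix recipe is simple enough to sketch in PyTorch: roll the feature map one pixel in each of the four directions, concatenate, and mix with a 1x1 convolution (a per-pixel linear transform). Note that `torch.roll` wraps around at the borders; the actual module may pad instead, so treat this as a hedged sketch of the idea.

```python
import torch
import torch.nn as nn

class ShiftedPillarsConcat(nn.Module):
    """SPC-style local mixing: four one-pixel shifts, concatenation,
    then a learned linear (1x1) projection back to `dim` channels."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(4 * dim, dim, kernel_size=1)

    def forward(self, x):                          # x: (B, C, H, W)
        shifted = [torch.roll(x, shifts=s, dims=d)
                   for s, d in ((1, 2), (-1, 2), (1, 3), (-1, 3))]
        return self.proj(torch.cat(shifted, dim=1))
```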
2207.01705
Rebecca Moussa
Rebecca Moussa and Federica Sarro
Do Not Take It for Granted: Comparing Open-Source Libraries for Software Development Effort Estimation
null
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past two decades, several Machine Learning (ML) libraries have become freely available. Many studies have used such libraries to carry out empirical investigations on predictive Software Engineering (SE) tasks. However, the differences stemming from using one library over another have been overlooked, implicitly assuming that using any of these libraries would provide the user with the same or very similar results. This paper aims at raising awareness of the differences incurred when using different ML libraries for software development effort estimation (SEE), one of most widely studied SE prediction tasks. To this end, we investigate 4 deterministic machine learners as provided by 3 of the most popular ML open-source libraries written in different languages (namely, Scikit-Learn, Caret and Weka). We carry out a thorough empirical study comparing the performance of the machine learners on 5 SEE datasets in the two most common SEE scenarios (i.e., out-of-the-box-ml and tuned-ml) as well as an in-depth analysis of the documentation and code of their APIs. The results of our study reveal that the predictions provided by the 3 libraries differ in 95% of the cases on average across a total of 105 cases studied. These differences are significantly large in most cases and yield misestimations of up to approx. 3,000 hours per project. Moreover, our API analysis reveals that these libraries provide the user with different levels of control on the parameters one can manipulate, and a lack of clarity and consistency, overall, which might mislead users. Our findings highlight that the ML library is an important design choice for SEE studies, which can lead to a difference in performance. However, such a difference is under-documented. We conclude by highlighting open-challenges with suggestions for the developers of libraries as well as for the researchers and practitioners using them.
[ { "created": "Mon, 4 Jul 2022 20:06:40 GMT", "version": "v1" } ]
2022-07-06
[ [ "Moussa", "Rebecca", "" ], [ "Sarro", "Federica", "" ] ]
In the past two decades, several Machine Learning (ML) libraries have become freely available. Many studies have used such libraries to carry out empirical investigations on predictive Software Engineering (SE) tasks. However, the differences stemming from using one library over another have been overlooked, implicitly assuming that using any of these libraries would provide the user with the same or very similar results. This paper aims at raising awareness of the differences incurred when using different ML libraries for software development effort estimation (SEE), one of the most widely studied SE prediction tasks. To this end, we investigate 4 deterministic machine learners as provided by 3 of the most popular ML open-source libraries written in different languages (namely, Scikit-Learn, Caret and Weka). We carry out a thorough empirical study comparing the performance of the machine learners on 5 SEE datasets in the two most common SEE scenarios (i.e., out-of-the-box-ml and tuned-ml) as well as an in-depth analysis of the documentation and code of their APIs. The results of our study reveal that the predictions provided by the 3 libraries differ in 95% of the cases on average across a total of 105 cases studied. These differences are significantly large in most cases and yield misestimations of up to approx. 3,000 hours per project. Moreover, our API analysis reveals that these libraries provide the user with different levels of control on the parameters one can manipulate, and a lack of clarity and consistency, overall, which might mislead users. Our findings highlight that the ML library is an important design choice for SEE studies, which can lead to a difference in performance. However, such a difference is under-documented. We conclude by highlighting open challenges with suggestions for the developers of libraries as well as for the researchers and practitioners using them.
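The two scenarios named in the abstract, out-of-the-box-ml and tuned-ml, can be made concrete with a short sketch. This uses only Scikit-Learn (one of the three libraries studied); the synthetic effort data and the KNN hyper-parameter grid are our stand-ins, and the paper's cross-library comparison with Caret and Weka cannot be reproduced in Python alone.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical effort data: 8 project features -> effort in person-hours.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.abs(X @ rng.normal(size=8)) * 500 + 100
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# out-of-the-box-ml: the library's defaults, used as-is
ootb = KNeighborsRegressor().fit(X_tr, y_tr)

# tuned-ml: grid search over a small hyper-parameter space
grid = GridSearchCV(KNeighborsRegressor(),
                    {"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
                    scoring="neg_mean_absolute_error").fit(X_tr, y_tr)

print("out-of-the-box MAE:", mean_absolute_error(y_te, ootb.predict(X_te)))
print("tuned MAE:         ", mean_absolute_error(y_te, grid.predict(X_te)))
```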
1509.04037
Samin Aref
Samin Aref and Mark C. Wilson
Measuring Partial Balance in Signed Networks
Peer-reviewed author copy, 31 pages, 6 figures, 5 tables
Journal of Complex Networks 6, 4 (2018), 566-595
10.1093/comnet/cnx044
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is the enemy of an enemy necessarily a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks and necessary and sufficient conditions for a network to be "balanced" were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice. We formalize the concept of a measure of partial balance, discuss various measures, compare the measures on synthetic datasets, and investigate their axiomatic properties. The synthetic data involves Erd\H{o}s-R\'enyi and specially structured random graphs. We show that some measures behave better than others in terms of axioms and ability to differentiate between graphs. We also use well-known datasets from the sociology and biology literature, such as Read's New Guinean tribes, gene regulatory networks related to two organisms, and a network involving senate bill co-sponsorship. Our results show that substantially different levels of partial balance are observed under cycle-based, eigenvalue-based, and frustration-based measures. We make some recommendations for measures to be used in future work.
[ { "created": "Mon, 14 Sep 2015 11:23:49 GMT", "version": "v1" }, { "created": "Wed, 11 May 2016 05:01:07 GMT", "version": "v2" }, { "created": "Wed, 28 Sep 2016 00:26:50 GMT", "version": "v3" }, { "created": "Fri, 14 Apr 2017 11:24:50 GMT", "version": "v4" }, { "created": "Tue, 1 Aug 2017 04:43:38 GMT", "version": "v5" }, { "created": "Mon, 20 Aug 2018 04:41:38 GMT", "version": "v6" } ]
2018-08-21
[ [ "Aref", "Samin", "" ], [ "Wilson", "Mark C.", "" ] ]
Is the enemy of an enemy necessarily a friend? If not, to what extent does this tend to hold? Such questions were formulated in terms of signed (social) networks and necessary and sufficient conditions for a network to be "balanced" were obtained around 1960. Since then the idea that signed networks tend over time to become more balanced has been widely used in several application areas. However, investigation of this hypothesis has been complicated by the lack of a standard measure of partial balance, since complete balance is almost never achieved in practice. We formalize the concept of a measure of partial balance, discuss various measures, compare the measures on synthetic datasets, and investigate their axiomatic properties. The synthetic data involves Erd\H{o}s-R\'enyi and specially structured random graphs. We show that some measures behave better than others in terms of axioms and ability to differentiate between graphs. We also use well-known datasets from the sociology and biology literature, such as Read's New Guinean tribes, gene regulatory networks related to two organisms, and a network involving senate bill co-sponsorship. Our results show that substantially different levels of partial balance are observed under cycle-based, eigenvalue-based, and frustration-based measures. We make some recommendations for measures to be used in future work.
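One of the measure families the abstract compares, the cycle-based kind, is easy to illustrate when restricted to 3-cycles: a triangle is balanced iff the product of its edge signs is positive. A minimal networkx sketch follows; the eigenvalue- and frustration-based alternatives studied in the paper are not reproduced here.

```python
import itertools
import networkx as nx

def triangle_balance(G):
    """Fraction of balanced triangles in a signed graph (classical
    cycle-based partial-balance measure restricted to 3-cycles)."""
    balanced = total = 0
    for u, v, w in itertools.combinations(G.nodes, 3):
        if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
            total += 1
            if G[u][v]["sign"] * G[v][w]["sign"] * G[u][w]["sign"] > 0:
                balanced += 1
    return balanced / total if total else 1.0

G = nx.Graph()
G.add_edge("a", "b", sign=+1)
G.add_edge("b", "c", sign=-1)
G.add_edge("a", "c", sign=-1)   # enemy of an enemy is a friend: balanced
print(triangle_balance(G))      # 1.0
```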
1503.07341
Catarina Moreira
Catarina Moreira
An Experiment on Using Bayesian Networks for Process Mining
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Process mining is a technique that performs an automatic analysis of business processes from a log of events with the promise of understanding how processes are executed in an organisation. Several models have been proposed to address this problem; here, however, we propose a different approach to deal with uncertainty. By uncertainty, we mean estimating the probability of some sequence of tasks occurring in a business process, given that only a subset of tasks may be observable. In this sense, this work proposes a new approach to perform process mining using Bayesian Networks. These structures can take into account the probability of a task being present or absent in the business process. Moreover, Bayesian Networks are able to automatically learn these probabilities through mechanisms such as the maximum likelihood estimate and EM clustering. Experiments made over a Loan Application Case study suggest that Bayesian Networks are adequate structures for process mining and enable a deep analysis of the business process model that can be used to answer queries about that process.
[ { "created": "Wed, 25 Mar 2015 11:34:31 GMT", "version": "v1" } ]
2015-03-26
[ [ "Moreira", "Catarina", "" ] ]
Process mining is a technique that performs an automatic analysis of business processes from a log of events with the promise of understanding how processes are executed in an organisation. Several models have been proposed to address this problem; here, however, we propose a different approach to deal with uncertainty. By uncertainty, we mean estimating the probability of some sequence of tasks occurring in a business process, given that only a subset of tasks may be observable. In this sense, this work proposes a new approach to perform process mining using Bayesian Networks. These structures can take into account the probability of a task being present or absent in the business process. Moreover, Bayesian Networks are able to automatically learn these probabilities through mechanisms such as the maximum likelihood estimate and EM clustering. Experiments made over a Loan Application Case study suggest that Bayesian Networks are adequate structures for process mining and enable a deep analysis of the business process model that can be used to answer queries about that process.
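The maximum likelihood estimation step the abstract mentions can be sketched directly from an event log: count observed task transitions and normalize. This is the kind of conditional probability table a Bayesian network node over "next task" could be built from; it is only a minimal sketch, not the paper's full model, and the loan-application log below is invented.

```python
from collections import Counter, defaultdict

def transition_mle(traces):
    """Maximum likelihood estimate of task-transition probabilities
    from a list of task sequences (an event log)."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

log = [["apply", "check", "approve"],
       ["apply", "check", "reject"],
       ["apply", "check", "approve"]]
print(transition_mle(log)["check"])  # {'approve': 0.66..., 'reject': 0.33...}
```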
1702.01786
Manuel Bravo
Chathuri Gunawardhana, Manuel Bravo, Lu\'is Rodrigues
Unobtrusive Deferred Update Stabilization for Efficient Geo-Replication
null
null
null
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a novel approach to manage the throughput vs latency tradeoff that emerges when managing updates in geo-replicated systems. Our approach consists in allowing full concurrency when processing local updates and using a deferred local serialisation procedure before shipping updates to remote datacenters. This strategy makes it possible to implement inexpensive mechanisms that ensure system consistency requirements while avoiding intrusive effects on update operations, a major performance limitation of previous systems. We have implemented our approach as a variant of Riak KV. Our extensive evaluation shows that we outperform sequencer-based approaches by almost an order of magnitude in the maximum achievable throughput. Furthermore, unlike previous sequencer-free solutions, our approach reaches nearly optimal remote update visibility latencies without limiting throughput.
[ { "created": "Mon, 6 Feb 2017 20:21:32 GMT", "version": "v1" } ]
2017-02-08
[ [ "Gunawardhana", "Chathuri", "" ], [ "Bravo", "Manuel", "" ], [ "Rodrigues", "Luís", "" ] ]
In this paper we propose a novel approach to manage the throughput vs latency tradeoff that emerges when managing updates in geo-replicated systems. Our approach consists in allowing full concurrency when processing local updates and using a deferred local serialisation procedure before shipping updates to remote datacenters. This strategy makes it possible to implement inexpensive mechanisms that ensure system consistency requirements while avoiding intrusive effects on update operations, a major performance limitation of previous systems. We have implemented our approach as a variant of Riak KV. Our extensive evaluation shows that we outperform sequencer-based approaches by almost an order of magnitude in the maximum achievable throughput. Furthermore, unlike previous sequencer-free solutions, our approach reaches nearly optimal remote update visibility latencies without limiting throughput.
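The core idea, sequencing updates only at shipping time rather than on the update path, can be sketched as follows. Everything here (the buffer, the lock, the `send` stand-in) is our own simplification of "deferred local serialisation"; the actual Riak KV implementation is far more involved.

```python
import threading

class DeferredShipper:
    """Sketch: local updates are applied with full concurrency and only
    receive a datacenter-local sequence number when a background task
    drains the buffer and ships a batch to remote replicas."""
    def __init__(self, send):
        self.send = send
        self.buffer = []
        self.seq = 0
        self.lock = threading.Lock()

    def local_update(self, op):
        with self.lock:              # cheap append; no per-update sequencing
            self.buffer.append(op)

    def stabilize_and_ship(self):    # run periodically in the background
        with self.lock:
            batch, self.buffer = self.buffer, []
        for op in batch:             # serialise only at shipping time
            self.seq += 1
            self.send({"seq": self.seq, "op": op})

shipped = []
s = DeferredShipper(shipped.append)
s.local_update("put k1 v1"); s.local_update("put k2 v2")
s.stabilize_and_ship()
print(shipped)  # [{'seq': 1, ...}, {'seq': 2, ...}]
```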
1307.3822
Bang Ye Wu
Bang Ye Wu
A simple approximation algorithm for the internal Steiner minimum tree
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a metric graph $G=(V,E)$ and $R\subset V$, the internal Steiner minimum tree problem asks for a minimum-weight Steiner tree spanning $R$ such that no vertex in $R$ is a leaf. This note presents a simple polynomial-time $2\rho$-approximation algorithm, where $\rho$ is the approximation ratio for the Steiner minimum tree problem. The result improves the previous best approximation ratio of $2\rho+1$ for the problem. The ratio is not currently the best known, but the algorithm is very simple.
[ { "created": "Mon, 15 Jul 2013 05:56:54 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2013 08:12:23 GMT", "version": "v2" } ]
2013-07-18
[ [ "Wu", "Bang Ye", "" ] ]
For a metric graph $G=(V,E)$ and $R\subset V$, the internal Steiner minimum tree problem asks for a minimum-weight Steiner tree spanning $R$ such that no vertex in $R$ is a leaf. This note presents a simple polynomial-time $2\rho$-approximation algorithm, where $\rho$ is the approximation ratio for the Steiner minimum tree problem. The result improves the previous best approximation ratio of $2\rho+1$ for the problem. The ratio is not currently the best known, but the algorithm is very simple.
2303.07992
Dehai Min
Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi
Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family
To be published in Proceedings of ISWC 2023, 22nd International Semantic Web Conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. Therefore, there is growing interest in exploring whether ChatGPT can replace traditional knowledge-based question answering (KBQA) models. Although there have been some works analyzing the question answering performance of ChatGPT, there is still a lack of large-scale, comprehensive testing of various types of complex questions to analyze the limitations of the model. In this paper, we present a framework that follows the black-box testing specifications of CheckList proposed by Ribeiro et al. We evaluate ChatGPT and its family of LLMs on eight real-world KB-based complex question answering datasets, which include six English datasets and two multilingual datasets. The total number of test cases is approximately 190,000. In addition to the GPT family of LLMs, we also evaluate the well-known FLAN-T5 to identify commonalities between the GPT family and other LLMs. The dataset and code are available at https://github.com/tan92hl/Complex-Question-Answering-Evaluation-of-GPT-family.git
[ { "created": "Tue, 14 Mar 2023 15:46:28 GMT", "version": "v1" }, { "created": "Fri, 4 Aug 2023 10:25:35 GMT", "version": "v2" }, { "created": "Wed, 20 Sep 2023 05:25:22 GMT", "version": "v3" } ]
2023-09-21
[ [ "Tan", "Yiming", "" ], [ "Min", "Dehai", "" ], [ "Li", "Yu", "" ], [ "Li", "Wenbo", "" ], [ "Hu", "Nan", "" ], [ "Chen", "Yongrui", "" ], [ "Qi", "Guilin", "" ] ]
ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. Therefore, there is growing interest in exploring whether ChatGPT can replace traditional knowledge-based question answering (KBQA) models. Although there have been some works analyzing the question answering performance of ChatGPT, there is still a lack of large-scale, comprehensive testing of various types of complex questions to analyze the limitations of the model. In this paper, we present a framework that follows the black-box testing specifications of CheckList proposed by Ribeiro et al. We evaluate ChatGPT and its family of LLMs on eight real-world KB-based complex question answering datasets, which include six English datasets and two multilingual datasets. The total number of test cases is approximately 190,000. In addition to the GPT family of LLMs, we also evaluate the well-known FLAN-T5 to identify commonalities between the GPT family and other LLMs. The dataset and code are available at https://github.com/tan92hl/Complex-Question-Answering-Evaluation-of-GPT-family.git
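The black-box testing style described here boils down to a loop that feeds each complex question to the model and scores the answer against gold labels. The sketch below is only that skeleton: `model_answer` is a hypothetical callable wrapping an LLM API, and exact-match scoring is a simplification of the paper's protocol.

```python
def evaluate(model_answer, test_cases):
    """Black-box accuracy over a list of {question, answer} test cases."""
    hits = 0
    for case in test_cases:
        prediction = model_answer(case["question"])
        hits += prediction.strip().lower() == case["answer"].strip().lower()
    return hits / len(test_cases)

cases = [{"question": "Who wrote Hamlet?", "answer": "William Shakespeare"}]
print(evaluate(lambda q: "William Shakespeare", cases))  # 1.0
```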
2309.03710
Theodore Moskovitz
Ted Moskovitz, Samo Hromadka, Ahmed Touati, Diana Borsa, Maneesh Sahani
A State Representation for Diminishing Rewards
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
A common setting in multitask reinforcement learning (RL) demands that an agent rapidly adapt to various stationary reward functions randomly sampled from a fixed distribution. In such situations, the successor representation (SR) is a popular framework which supports rapid policy evaluation by decoupling a policy's expected discounted, cumulative state occupancies from a specific reward function. However, in the natural world, sequential tasks are rarely independent, and instead reflect shifting priorities based on the availability and subjective perception of rewarding stimuli. Reflecting this disjunction, in this paper we study the phenomenon of diminishing marginal utility and introduce a novel state representation, the $\lambda$ representation ($\lambda$R) which, surprisingly, is required for policy evaluation in this setting and which generalizes the SR as well as several other state representations from the literature. We establish the $\lambda$R's formal properties and examine its normative advantages in the context of machine learning, as well as its usefulness for studying natural behaviors, particularly foraging.
[ { "created": "Thu, 7 Sep 2023 13:38:36 GMT", "version": "v1" } ]
2023-09-08
[ [ "Moskovitz", "Ted", "" ], [ "Hromadka", "Samo", "" ], [ "Touati", "Ahmed", "" ], [ "Borsa", "Diana", "" ], [ "Sahani", "Maneesh", "" ] ]
A common setting in multitask reinforcement learning (RL) demands that an agent rapidly adapt to various stationary reward functions randomly sampled from a fixed distribution. In such situations, the successor representation (SR) is a popular framework which supports rapid policy evaluation by decoupling a policy's expected discounted, cumulative state occupancies from a specific reward function. However, in the natural world, sequential tasks are rarely independent, and instead reflect shifting priorities based on the availability and subjective perception of rewarding stimuli. Reflecting this disjunction, in this paper we study the phenomenon of diminishing marginal utility and introduce a novel state representation, the $\lambda$ representation ($\lambda$R) which, surprisingly, is required for policy evaluation in this setting and which generalizes the SR as well as several other state representations from the literature. We establish the $\lambda$R's formal properties and examine its normative advantages in the context of machine learning, as well as its usefulness for studying natural behaviors, particularly foraging.
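The successor representation this abstract generalizes has a well-known closed form for a fixed policy, which the numpy snippet below computes for a toy chain; it also shows the reward decoupling the abstract describes (policy evaluation reduces to a matrix-vector product). The $\lambda$-representation itself, which additionally down-weights repeated visits, is not reproduced here.

```python
import numpy as np

# SR for a fixed policy with transition matrix P and discount gamma:
# Psi = (I - gamma * P)^{-1}; entry (s, s') is the expected discounted
# number of visits to s' when starting from s. Then V = Psi @ r for
# any stationary reward function r.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # assumed toy 2-state chain
gamma = 0.95
Psi = np.linalg.inv(np.eye(2) - gamma * P)
r = np.array([0.0, 1.0])            # any stationary reward function
print(Psi @ r)                      # state values under this policy
```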
2312.10620
Muhammad Hamza
Muhammad Hamza, Dominik Siemon, Muhammad Azeem Akbar, Tahsinur Rahman
Human AI Collaboration in Software Engineering: Lessons Learned from a Hands On Workshop
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
This paper investigates the dynamics of human AI collaboration in software engineering, focusing on the use of ChatGPT. Through a thematic analysis of a hands on workshop in which 22 professional software engineers collaborated for three hours with ChatGPT, we explore the transition of AI from a mere tool to a collaborative partner. The study identifies key themes such as the evolving nature of human AI interaction, the capabilities of AI in software engineering tasks, and the challenges and limitations of integrating AI in this domain. The findings show that while AI, particularly ChatGPT, improves the efficiency of code generation and optimization, human oversight remains crucial, especially in areas requiring complex problem solving and security considerations. This research contributes to the theoretical understanding of human AI collaboration in software engineering and provides practical insights for effectively integrating AI tools into development processes. It highlights the need for clear role allocation, effective communication, and balanced AI human collaboration to realize the full potential of AI in software engineering.
[ { "created": "Sun, 17 Dec 2023 06:31:05 GMT", "version": "v1" } ]
2023-12-19
[ [ "Hamza", "Muhammad", "" ], [ "Siemon", "Dominik", "" ], [ "Akbar", "Muhammad Azeem", "" ], [ "Rahman", "Tahsinur", "" ] ]
This paper investigates the dynamics of human AI collaboration in software engineering, focusing on the use of ChatGPT. Through a thematic analysis of a hands on workshop in which 22 professional software engineers collaborated for three hours with ChatGPT, we explore the transition of AI from a mere tool to a collaborative partner. The study identifies key themes such as the evolving nature of human AI interaction, the capabilities of AI in software engineering tasks, and the challenges and limitations of integrating AI in this domain. The findings show that while AI, particularly ChatGPT, improves the efficiency of code generation and optimization, human oversight remains crucial, especially in areas requiring complex problem solving and security considerations. This research contributes to the theoretical understanding of human AI collaboration in software engineering and provides practical insights for effectively integrating AI tools into development processes. It highlights the need for clear role allocation, effective communication, and balanced AI human collaboration to realize the full potential of AI in software engineering.
2101.02174
Reza Karegar
Reza Karegar, Parke Godfrey, Lukasz Golab, Mehdi Kargar, Divesh Srivastava, Jaroslaw Szlichta
Efficient Discovery of Approximate Order Dependencies
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Order dependencies (ODs) capture relationships between ordered domains of attributes. Approximate ODs (AODs) capture such relationships even when there exist exceptions in the data. During automated discovery of ODs, validation is the process of verifying whether an OD holds. We present an algorithm for validating approximate ODs with significantly improved runtime performance over existing methods for AODs, and prove that it is correct and has optimal runtime. By replacing the validation step in a leading algorithm for approximate OD discovery with ours, we achieve orders-of-magnitude improvements in performance.
[ { "created": "Wed, 6 Jan 2021 18:22:52 GMT", "version": "v1" } ]
2021-01-07
[ [ "Karegar", "Reza", "" ], [ "Godfrey", "Parke", "" ], [ "Golab", "Lukasz", "" ], [ "Kargar", "Mehdi", "" ], [ "Srivastava", "Divesh", "" ], [ "Szlichta", "Jaroslaw", "" ] ]
Order dependencies (ODs) capture relationships between ordered domains of attributes. Approximate ODs (AODs) capture such relationships even when there exist exceptions in the data. During automated discovery of ODs, validation is the process of verifying whether an OD holds. We present an algorithm for validating approximate ODs with significantly improved runtime performance over existing methods for AODs, and prove that it is correct and has optimal runtime. By replacing the validation step in a leading algorithm for approximate OD discovery with ours, we achieve orders-of-magnitude improvements in performance.
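To make the OD/AOD distinction concrete: an order dependency "lhs orders rhs" asks that sorting the rows by lhs leaves rhs non-decreasing, and an approximate OD tolerates a bounded number of exceptions. The sketch below just counts adjacent violations as a simple proxy for that error; the paper's validation algorithm is more refined (and provably runtime-optimal), so this is illustrative only.

```python
def od_violations(rows, lhs, rhs):
    """Count adjacent violations of the OD lhs -> rhs: after sorting
    by lhs, rhs should be non-decreasing."""
    ordered = sorted(rows, key=lambda r: r[lhs])
    return sum(1 for a, b in zip(ordered, ordered[1:]) if a[rhs] > b[rhs])

rows = [{"week": 1, "sales": 10}, {"week": 2, "sales": 12},
        {"week": 3, "sales": 11}, {"week": 4, "sales": 15}]
print(od_violations(rows, "week", "sales"))  # 1 exception to week -> sales
```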
2112.10551
Anastasia-Maria Leventi-Peetz
A.-M. Leventi-Peetz, T. \"Ostreich, W. Lennartz, K. Weber
Scope and Sense of Explainability for AI-Systems
Version 2 : Improved hyphenations in references
Arai K. (eds) Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, vol 294. Springer, Cham
10.1007/978-3-030-82193-7_19
null
cs.LG cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Certain aspects of the explainability of AI systems will be critically discussed, with a particular focus on the feasibility of making every AI system explainable. Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems which deliver decisions whose explanation defies classical logical schemes of cause and effect. AI systems have provably delivered unintelligible solutions which in retrospect were characterized as ingenious (for example, move 37 of game 2 of AlphaGo). Arguments will be elaborated supporting the notion that if AI solutions were to be discarded in advance because they are not thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
[ { "created": "Mon, 20 Dec 2021 14:25:05 GMT", "version": "v1" }, { "created": "Wed, 22 Dec 2021 14:18:33 GMT", "version": "v2" } ]
2021-12-23
[ [ "Leventi-Peetz", "A. -M.", "" ], [ "Östreich", "T.", "" ], [ "Lennartz", "W.", "" ], [ "Weber", "K.", "" ] ]
Certain aspects of the explainability of AI systems will be critically discussed, with a particular focus on the feasibility of making every AI system explainable. Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems which deliver decisions whose explanation defies classical logical schemes of cause and effect. AI systems have provably delivered unintelligible solutions which in retrospect were characterized as ingenious (for example, move 37 of game 2 of AlphaGo). Arguments will be elaborated supporting the notion that if AI solutions were to be discarded in advance because they are not thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
2308.09198
Mehdi Azabou
Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veli\v{c}kovi\'c, Eva L. Dyer
Half-Hop: A graph upsampling approach for slowing down message passing
Published as a conference paper at ICML 2023
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Message passing neural networks have shown a lot of success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks. Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge that can mediate communication between a source and a target node. Our method only modifies the input graph, making it plug-and-play and easy to use with existing models. To understand the benefits of slowing down message passing, we provide theoretical and empirical analyses. We report results on several supervised and self-supervised benchmarks, and show improvements across the board, notably in heterophilic conditions where adjacent nodes are more likely to have different labels. Finally, we show how our approach can be used to generate augmentations for self-supervised learning, where slow nodes are randomly introduced into different edges in the graph to generate multi-scale views with variable path lengths.
[ { "created": "Thu, 17 Aug 2023 22:24:15 GMT", "version": "v1" } ]
2023-08-21
[ [ "Azabou", "Mehdi", "" ], [ "Ganesh", "Venkataramana", "" ], [ "Thakoor", "Shantanu", "" ], [ "Lin", "Chi-Heng", "" ], [ "Sathidevi", "Lakshmi", "" ], [ "Liu", "Ran", "" ], [ "Valko", "Michal", "" ], [ "Veličković", "Petar", "" ], [ "Dyer", "Eva L.", "" ] ]
Message passing neural networks have shown a lot of success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks. Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge that can mediate communication between a source and a target node. Our method only modifies the input graph, making it plug-and-play and easy to use with existing models. To understand the benefits of slowing down message passing, we provide theoretical and empirical analyses. We report results on several supervised and self-supervised benchmarks, and show improvements across the board, notably in heterophilic conditions where adjacent nodes are more likely to have different labels. Finally, we show how our approach can be used to generate augmentations for self-supervised learning, where slow nodes are randomly introduced into different edges in the graph to generate multi-scale views with variable path lengths.
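The graph upsampling step is simple enough to show directly: each edge (u, v) is rewired through a fresh "slow node" s, so messages take two hops to cross it. The sketch below rewires every edge; the paper additionally keeps or randomizes some original edges when using Half-Hop as an augmentation, which this does not capture.

```python
def half_hop(edges, n_nodes):
    """Insert one slow node per directed edge: (u, v) becomes
    u -> s and s -> v. Returns the new edge list and node count."""
    new_edges, next_id = [], n_nodes
    for u, v in edges:
        s = next_id            # fresh slow-node id
        next_id += 1
        new_edges += [(u, s), (s, v)]
    return new_edges, next_id

edges = [(0, 1), (1, 2)]
print(half_hop(edges, 3))      # ([(0, 3), (3, 1), (1, 4), (4, 2)], 5)
```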
1911.04975
Tomasz Arodz
Aliakbar Panahi, Seyran Saeedi, Tom Arodz
word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement
null
International Conference on Learning Representations 2020
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, {\em word2ket} and {\em word2ketXS}, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
[ { "created": "Tue, 12 Nov 2019 16:06:50 GMT", "version": "v1" }, { "created": "Mon, 10 Feb 2020 12:23:59 GMT", "version": "v2" }, { "created": "Tue, 3 Mar 2020 14:08:07 GMT", "version": "v3" } ]
2020-03-04
[ [ "Panahi", "Aliakbar", "" ], [ "Saeedi", "Seyran", "" ], [ "Arodz", "Tom", "" ] ]
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, {\em word2ket} and {\em word2ketXS}, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
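The entanglement-inspired trick can be sketched in a few lines: represent one word's embedding as a sum over a few terms, each the Kronecker (tensor) product of small vectors, so only the small factors are stored. The factor shapes below are illustrative assumptions, not the paper's configuration.

```python
from functools import reduce
import numpy as np

def word2ket_embedding(factors):
    """Reconstruct one embedding as a sum over `rank` terms, each the
    Kronecker product of `order` small vectors: rank*order*q numbers
    are stored instead of q**order."""
    return sum(reduce(np.kron, term) for term in factors)

rank, order, q = 2, 3, 4                      # embedding dim = 4**3 = 64
rng = np.random.default_rng(0)
factors = rng.normal(size=(rank, order, q))   # 2*3*4 = 24 stored numbers
print(word2ket_embedding(factors).shape)      # (64,)
```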
1801.05643
Felix Martin Schuhknecht
Ankur Sharma, Felix Martin Schuhknecht, Jens Dittrich
The Case for Automatic Database Administration using Deep Reinforcement Learning
null
null
null
null
cs.DB cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Like any large software system, a full-fledged DBMS offers an overwhelming number of configuration knobs. These range from static initialisation parameters like buffer sizes, degree of concurrency, or level of replication to complex runtime decisions like creating a secondary index on a particular column or reorganising the physical layout of the store. To simplify the configuration, industry-grade DBMSs are usually shipped with various advisory tools that provide recommendations for given workloads and machines. However, reality shows that the actual configuration, tuning, and maintenance are usually still done by a human administrator, relying on intuition and experience. Recent work on deep reinforcement learning has shown very promising results in solving problems that require such a sense of intuition. For instance, it has been applied very successfully in learning how to play complicated games with enormous search spaces. Motivated by these achievements, in this work we explore how deep reinforcement learning can be used to administer a DBMS. First, we will describe how deep reinforcement learning can be used to automatically tune an arbitrary software system like a DBMS by defining a problem environment. Second, we showcase our concept of NoDBA at the concrete example of index selection and evaluate how well it recommends indexes for given workloads.
[ { "created": "Wed, 17 Jan 2018 12:51:01 GMT", "version": "v1" } ]
2018-01-18
[ [ "Sharma", "Ankur", "" ], [ "Schuhknecht", "Felix Martin", "" ], [ "Dittrich", "Jens", "" ] ]
Like any large software system, a full-fledged DBMS offers an overwhelming number of configuration knobs. These range from static initialisation parameters like buffer sizes, degree of concurrency, or level of replication to complex runtime decisions like creating a secondary index on a particular column or reorganising the physical layout of the store. To simplify the configuration, industry-grade DBMSs are usually shipped with various advisory tools that provide recommendations for given workloads and machines. However, reality shows that the actual configuration, tuning, and maintenance are usually still done by a human administrator, relying on intuition and experience. Recent work on deep reinforcement learning has shown very promising results in solving problems that require such a sense of intuition. For instance, it has been applied very successfully in learning how to play complicated games with enormous search spaces. Motivated by these achievements, in this work we explore how deep reinforcement learning can be used to administer a DBMS. First, we will describe how deep reinforcement learning can be used to automatically tune an arbitrary software system like a DBMS by defining a problem environment. Second, we showcase our concept of NoDBA at the concrete example of index selection and evaluate how well it recommends indexes for given workloads.
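"Defining a problem environment" for index selection can be sketched gym-style: the state is which columns are indexed, an action creates an index, and the reward is the negative workload cost. The cost model below is invented purely for illustration; a real setup would query the DBMS optimizer, and this is not NoDBA's actual environment.

```python
import numpy as np

class IndexSelectionEnv:
    """Toy index-selection environment (hypothetical cost model)."""
    def __init__(self, n_cols, col_selectivity, budget=3):
        self.n_cols, self.sel, self.budget = n_cols, col_selectivity, budget

    def reset(self):
        self.indexed = np.zeros(self.n_cols, dtype=bool)
        return self.indexed.copy()

    def step(self, action):
        self.indexed[action] = True
        # assumed cost model: un-indexed columns pay full scan cost
        cost = np.where(self.indexed, self.sel, 1.0).sum()
        done = self.indexed.sum() >= self.budget
        return self.indexed.copy(), -cost, done

env = IndexSelectionEnv(5, np.array([0.01, 0.5, 0.9, 0.05, 0.3]))
state = env.reset()
state, reward, done = env.step(0)   # index the most selective column
print(reward, done)                 # -4.01 False
```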
2311.14766
Feiyang Han
Feiyang Han and Yimin Wei and Zhaofeng Liu and Yanxing Qi
Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing
null
null
null
null
cs.LG math.ST stat.ME stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning from Human Feedback (RLHF) has played a crucial role in the success of large models such as ChatGPT. RLHF is a reinforcement learning framework which combines human feedback to improve learning effectiveness and performance. However, obtaining preference feedback manually is quite expensive in commercial applications. Some statistical commercial indicators are often more valuable, yet they are usually ignored in RLHF. There exists a gap between commercial targets and model training. In our research, we will attempt to fill this gap with statistical business feedback instead of human feedback, using AB testing, which is a well-established statistical method. Reinforcement Learning from Statistical Feedback (RLSF) based on AB testing is proposed. Statistical inference methods are used to obtain preferences for training the reward network, which fine-tunes the pre-trained model within a reinforcement learning framework, achieving greater business value. Furthermore, we extend AB testing with double selections at a single time-point to ANT testing with multiple selections at different feedback time points. Moreover, we design numerical experiments to validate the effectiveness of our algorithm framework.
[ { "created": "Fri, 24 Nov 2023 07:50:52 GMT", "version": "v1" } ]
2023-11-28
[ [ "Han", "Feiyang", "" ], [ "Wei", "Yimin", "" ], [ "Liu", "Zhaofeng", "" ], [ "Qi", "Yanxing", "" ] ]
Reinforcement Learning from Human Feedback (RLHF) has played a crucial role in the success of large models such as ChatGPT. RLHF is a reinforcement learning framework which combines human feedback to improve learning effectiveness and performance. However, obtaining preference feedback manually is quite expensive in commercial applications. Some statistical commercial indicators are often more valuable, yet they are usually ignored in RLHF. There exists a gap between commercial targets and model training. In our research, we will attempt to fill this gap with statistical business feedback instead of human feedback, using AB testing, which is a well-established statistical method. Reinforcement Learning from Statistical Feedback (RLSF) based on AB testing is proposed. Statistical inference methods are used to obtain preferences for training the reward network, which fine-tunes the pre-trained model within a reinforcement learning framework, achieving greater business value. Furthermore, we extend AB testing with double selections at a single time-point to ANT testing with multiple selections at different feedback time points. Moreover, we design numerical experiments to validate the effectiveness of our algorithm framework.
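Turning AB-test statistics into preference labels for a reward model can be sketched as follows. A two-sample t-test is one concrete choice of the "statistical inference methods" the abstract mentions, not necessarily the paper's; the metric samples are synthetic.

```python
import numpy as np
from scipy.stats import ttest_ind

def ab_preference(metrics_a, metrics_b, alpha=0.05):
    """Return the significantly better variant ('A' or 'B'), or None
    if the AB test shows no significant preference."""
    stat, p = ttest_ind(metrics_a, metrics_b)
    if p >= alpha:
        return None                      # no preference label emitted
    return "A" if stat > 0 else "B"

rng = np.random.default_rng(1)
a = rng.normal(0.12, 0.02, size=500)     # e.g. conversion under model A
b = rng.normal(0.10, 0.02, size=500)
print(ab_preference(a, b))               # likely "A"
```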
2302.05599
Yujia Mu
Yujia Mu, Cong Shen
Communication and Storage Efficient Federated Split Learning
null
null
null
null
cs.IT cs.LG eess.SP math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is a popular distributed machine learning (ML) paradigm, but is often limited by significant communication costs and edge device computation capabilities. Federated Split Learning (FSL) preserves the parallel model training principle of FL, with a reduced device computation requirement thanks to splitting the ML model between the server and clients. However, FSL still incurs very high communication overhead due to transmitting the smashed data and gradients between the clients and the server in each global round. Furthermore, the server has to maintain separate models for every client, resulting in a significant computation and storage requirement that grows linearly with the number of clients. This paper tries to solve these two issues by proposing a communication and storage efficient federated and split learning (CSE-FSL) strategy, which utilizes an auxiliary network to locally update the client models while keeping only a single model at the server, hence avoiding the communication of gradients from the server and greatly reducing the server resource requirement. Communication cost is further reduced by only sending the smashed data in selected epochs from the clients. We provide a rigorous theoretical analysis of CSE-FSL that guarantees its convergence for non-convex loss functions. Extensive experimental results demonstrate that CSE-FSL has a significant communication reduction over existing FSL techniques while achieving state-of-the-art convergence and model accuracy, using several real-world FL tasks.
[ { "created": "Sat, 11 Feb 2023 04:44:29 GMT", "version": "v1" } ]
2023-02-14
[ [ "Mu", "Yujia", "" ], [ "Shen", "Cong", "" ] ]
Federated learning (FL) is a popular distributed machine learning (ML) paradigm, but is often limited by significant communication costs and edge device computation capabilities. Federated Split Learning (FSL) preserves the parallel model training principle of FL, with a reduced device computation requirement thanks to splitting the ML model between the server and clients. However, FSL still incurs very high communication overhead due to transmitting the smashed data and gradients between the clients and the server in each global round. Furthermore, the server has to maintain separate models for every client, resulting in a significant computation and storage requirement that grows linearly with the number of clients. This paper tries to solve these two issues by proposing a communication and storage efficient federated and split learning (CSE-FSL) strategy, which utilizes an auxiliary network to locally update the client models while keeping only a single model at the server, hence avoiding the communication of gradients from the server and greatly reducing the server resource requirement. Communication cost is further reduced by only sending the smashed data in selected epochs from the clients. We provide a rigorous theoretical analysis of CSE-FSL that guarantees its convergence for non-convex loss functions. Extensive experimental results demonstrate that CSE-FSL has a significant communication reduction over existing FSL techniques while achieving state-of-the-art convergence and model accuracy, using several real-world FL tasks.
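The split described here, a client-side bottom, a small auxiliary head for local updates, and a single server-side top shared by all clients, can be sketched in a few PyTorch lines. Layer sizes and the choice of loss are illustrative assumptions; CSE-FSL's epoch selection and aggregation are not shown.

```python
import torch
import torch.nn as nn

client_bottom = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
aux_head = nn.Linear(16, 10)                 # auxiliary network (client)
server_top = nn.Linear(16, 10)               # one copy for all clients

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
smashed = client_bottom(x)                   # sent to server in selected epochs

# client-side update driven by the auxiliary head: no server gradients needed
loss_local = nn.functional.cross_entropy(aux_head(smashed), y)
loss_local.backward()

# server-side update on the detached smashed data
loss_server = nn.functional.cross_entropy(server_top(smashed.detach()), y)
loss_server.backward()
```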
2405.18187
Longx He
Longxiang He, Li Shen, Junbo Tan, Xueqian Wang
AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization
19 pages, 3 figures, 4 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Implicit Q-learning (IQL) serves as a strong baseline for offline RL, which learns the value function using only dataset actions through quantile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and why IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and derives weights for the implicit policy; however, these weights hold only for the optimal value function. In this work, we introduce a different way to solve the implicit policy-finding problem (IPF) by formulating this problem as an optimization problem. Based on this optimization problem, we further propose two practical algorithms AlignIQL and AlignIQL-hard, which inherit the advantages of decoupling actor from critic in IQL and provide insights into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, we find our method keeps the simplicity of IQL and solves the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. Especially in complex sparse reward tasks like Antmaze and Adroit, our method outperforms IQL and IDQL by a significant margin.
[ { "created": "Tue, 28 May 2024 14:01:03 GMT", "version": "v1" } ]
2024-05-29
[ [ "He", "Longxiang", "" ], [ "Shen", "Li", "" ], [ "Tan", "Junbo", "" ], [ "Wang", "Xueqian", "" ] ]
Implicit Q-learning (IQL) serves as a strong baseline for offline RL, which learns the value function using only dataset actions through quantile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and why IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and derives weights for the implicit policy; however, these weights hold only for the optimal value function. In this work, we introduce a different way to solve the implicit policy-finding problem (IPF) by formulating this problem as an optimization problem. Based on this optimization problem, we further propose two practical algorithms AlignIQL and AlignIQL-hard, which inherit the advantages of decoupling actor from critic in IQL and provide insights into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, we find our method keeps the simplicity of IQL and solves the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. Especially in complex sparse reward tasks like Antmaze and Adroit, our method outperforms IQL and IDQL by a significant margin.
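For readers unfamiliar with IQL's value learning: the IQL paper fits V with an asymmetric squared (expectile regression) loss on Q(s,a) - V(s) over dataset actions, which the abstract refers to as quantile-style regression. A minimal sketch, with made-up batch values:

```python
import torch

def expectile_loss(diff, tau=0.7):
    """Asymmetric L2 loss: u = Q(s,a) - V(s) is weighted by tau when
    positive and by 1 - tau when negative, so tau > 0.5 pushes V
    toward an upper expectile of Q."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

q_minus_v = torch.tensor([0.5, -1.0, 2.0])
print(float(expectile_loss(q_minus_v)))  # ~1.09
```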
2403.01579
Christoph Alt
Christoph Alt, Martin Lanser, Jonas Plewinski, Atin Janki, Axel Klawonn, Harald K\"ostler, Michael Selzer, Ulrich R\"ude
A Continuous Benchmarking Infrastructure for High-Performance Computing Applications
null
International Journal of Parallel, Emergent & Distributed Systems, 2024
10.1080/17445760.2024.2360190
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For scientific software, especially software used for large-scale simulations, achieving good performance and using the available hardware resources efficiently is essential. It is important to perform benchmarks regularly to ensure the efficient use of hardware and software when systems are changing and the software evolves. However, this can quickly become very tedious when many options for parameters, solvers, and hardware architectures are available. We present a continuous benchmarking strategy that automates benchmarking new code changes on high-performance computing clusters. This makes it possible to track how each code change affects the performance and how it evolves.
[ { "created": "Sun, 3 Mar 2024 18:03:20 GMT", "version": "v1" } ]
2024-06-12
[ [ "Alt", "Christoph", "" ], [ "Lanser", "Martin", "" ], [ "Plewinski", "Jonas", "" ], [ "Janki", "Atin", "" ], [ "Klawonn", "Axel", "" ], [ "Köstler", "Harald", "" ], [ "Selzer", "Michael", "" ], [ "Rüde", "Ulrich", "" ] ]
For scientific software, especially software used for large-scale simulations, achieving good performance and using the available hardware resources efficiently is essential. It is important to perform benchmarks regularly to ensure the efficient use of hardware and software when systems are changing and the software evolves. However, this can quickly become very tedious when many options for parameters, solvers, and hardware architectures are available. We present a continuous benchmarking strategy that automates benchmarking new code changes on high-performance computing clusters. This makes it possible to track how each code change affects the performance and how it evolves.
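The core loop of such a continuous-benchmarking job is easy to sketch: time the kernel, compare against a stored baseline, and fail on regression. File-based storage, median-of-5 timing, and the 10% threshold are our assumptions; the paper's infrastructure runs per code change on HPC clusters rather than locally.

```python
import json, pathlib, statistics, time

def benchmark(fn, repeats=5):
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

def check_regression(name, fn, tolerance=1.10, store="baselines.json"):
    """Time `fn`, compare to the stored baseline for this machine, and
    return False if it is more than `tolerance` slower."""
    path = pathlib.Path(store)
    baselines = json.loads(path.read_text()) if path.exists() else {}
    t = benchmark(fn)
    ok = t <= baselines.get(name, float("inf")) * tolerance
    baselines[name] = min(t, baselines.get(name, t))   # keep best time
    path.write_text(json.dumps(baselines))
    return ok

print(check_regression("toy-kernel",
                       lambda: [[i * j for j in range(100)] for i in range(100)]))
```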
1902.00159
Angeline Aguinaldo
Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi
Compressing GANs using Knowledge Distillation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Networks (GANs) have been used in several machine learning tasks such as domain transfer, super resolution, and synthetic data generation. State-of-the-art GANs often use tens of millions of parameters, making them expensive to deploy for applications in low SWAP (size, weight, and power) hardware, such as mobile devices, and for applications with real time capabilities. To our knowledge, no prior work has attempted to reduce the number of parameters used in GANs. Therefore, we propose a method to compress GANs using knowledge distillation techniques, in which a smaller "student" GAN learns to mimic a larger "teacher" GAN. We show that the distillation methods used on MNIST, CIFAR-10, and Celeb-A datasets can compress teacher GANs at ratios of 1669:1, 58:1, and 87:1, respectively, while retaining the quality of the generated image. From our experiments, we observe a qualitative limit for GAN compression. Moreover, we observe that, with a fixed parameter budget, compressed GANs outperform GANs trained using standard training methods. We conjecture that this is partially owing to the optimization landscape of over-parameterized GANs which allows efficient training using alternating gradient descent. Thus, training an over-parameterized GAN followed by our proposed compression scheme provides a high quality generative model with a small number of parameters.
[ { "created": "Fri, 1 Feb 2019 03:24:26 GMT", "version": "v1" } ]
2019-02-04
[ [ "Aguinaldo", "Angeline", "" ], [ "Chiang", "Ping-Yeh", "" ], [ "Gain", "Alex", "" ], [ "Patil", "Ameya", "" ], [ "Pearson", "Kolten", "" ], [ "Feizi", "Soheil", "" ] ]
Generative Adversarial Networks (GANs) have been used in several machine learning tasks such as domain transfer, super resolution, and synthetic data generation. State-of-the-art GANs often use tens of millions of parameters, making them expensive to deploy for applications in low SWAP (size, weight, and power) hardware, such as mobile devices, and for applications with real time capabilities. To our knowledge, no prior work has attempted to reduce the number of parameters used in GANs. Therefore, we propose a method to compress GANs using knowledge distillation techniques, in which a smaller "student" GAN learns to mimic a larger "teacher" GAN. We show that the distillation methods used on MNIST, CIFAR-10, and Celeb-A datasets can compress teacher GANs at ratios of 1669:1, 58:1, and 87:1, respectively, while retaining the quality of the generated image. From our experiments, we observe a qualitative limit for GAN compression. Moreover, we observe that, with a fixed parameter budget, compressed GANs outperform GANs trained using standard training methods. We conjecture that this is partially owing to the optimization landscape of over-parameterized GANs which allows efficient training using alternating gradient descent. Thus, training an over-parameterized GAN followed by our proposed compression scheme provides a high quality generative model with a small number of parameters.
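The student-mimics-teacher idea reduces to a simple training loop: freeze the teacher generator and regress the student's outputs onto the teacher's on shared latent codes. A minimal PyTorch sketch follows; the placeholder architectures and a plain MSE objective are our assumptions, and the paper combines such reconstruction with standard GAN training.

```python
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784)).eval()
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 784))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    z = torch.randn(32, 64)                 # shared latent codes
    with torch.no_grad():
        target = teacher(z)                 # teacher stays frozen
    loss = nn.functional.mse_loss(student(z), target)
    opt.zero_grad(); loss.backward(); opt.step()
```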
1905.02157
Dongfang Zhao
Xinying Wang and Abdullah Al-Mamun and Feng Yan and Mohammad Sadoghi and Dongfang Zhao
BlockLite: A Lightweight Emulator for Public Blockchains
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchain is an enabler of many emerging decentralized applications in areas of cryptocurrency, Internet of Things, smart healthcare, among many others. Although various open-source blockchain frameworks are available, their infrastructure is complex enough that it is difficult for many users to modify it or test out new research ideas. To make matters worse, many advantages of blockchain systems can be demonstrated only at large scales, e.g., thousands of nodes, which are not always available to researchers. This demo paper presents a lightweight single-node emulator of blockchain systems, namely \mbox{BlockLite}, designed to execute real proof-of-work workloads along with peer-to-peer network communications and hash-based immutability. BlockLite employs a preprocessing approach to avoid the per-node computation overhead at runtime and thus scales to thousands of nodes. Moreover, BlockLite offers an easy-to-use programming interface allowing for a Lego-like customization of the system, e.g., new ad-hoc consensus protocols.
[ { "created": "Mon, 6 May 2019 17:17:44 GMT", "version": "v1" } ]
2019-05-07
[ [ "Wang", "Xinying", "" ], [ "Al-Mamun", "Abdullah", "" ], [ "Yan", "Feng", "" ], [ "Sadoghi", "Mohammad", "" ], [ "Zhao", "Dongfang", "" ] ]
Blockchain is an enabler of many emerging decentralized applications in areas of cryptocurrency, Internet of Things, smart healthcare, among many others. Although various open-source blockchain frameworks are available, their infrastructure is complex enough that it is difficult for many users to modify it or test out new research ideas. To make matters worse, many advantages of blockchain systems can be demonstrated only at large scales, e.g., thousands of nodes, which are not always available to researchers. This demo paper presents a lightweight single-node emulator of blockchain systems, namely \mbox{BlockLite}, designed to execute real proof-of-work workloads along with peer-to-peer network communications and hash-based immutability. BlockLite employs a preprocessing approach to avoid the per-node computation overhead at runtime and thus scales to thousands of nodes. Moreover, BlockLite offers an easy-to-use programming interface allowing for a Lego-like customization of the system, e.g., new ad-hoc consensus protocols.
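"Real proof-of-work workload" means actually searching for a nonce whose block hash meets a difficulty target, as in the sketch below. The string block format is our placeholder; BlockLite's block layout and its preprocessing trick are not reproduced here.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that SHA-256 of the block starts with
    `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("tx1;tx2;prev=abc123")
print(nonce, digest[:12])
```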
2209.03839
Minxue Tang
Minxue Tang, Jianyi Zhang, Mingyuan Ma, Louis DiValentin, Aolin Ding, Amin Hassanzadeh, Hai Li, Yiran Chen
FADE: Enabling Federated Adversarial Training on Heterogeneous Resource-Constrained Edge Devices
Preprint version
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated adversarial training can effectively bring adversarial robustness to privacy-preserving federated learning systems. However, the high demand for memory capacity and computing power makes large-scale federated adversarial training (AT) infeasible on resource-constrained edge devices. Few previous studies in federated adversarial training have tried to tackle both memory and computational constraints simultaneously. In this paper, we propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on heterogeneous resource-constrained edge devices. FADE differentially decouples the entire model into small modules to fit into the resource budget of each device, and each device only needs to perform AT on a single module in each communication round. We also propose an auxiliary weight decay to alleviate objective inconsistency and achieve better accuracy-robustness balance in FADE. FADE offers theoretical guarantees for convergence and adversarial robustness, and our experimental results show that FADE can significantly reduce the consumption of memory and computing power while maintaining accuracy and robustness.
[ { "created": "Thu, 8 Sep 2022 14:22:49 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2023 00:46:58 GMT", "version": "v2" } ]
2023-04-27
[ [ "Tang", "Minxue", "" ], [ "Zhang", "Jianyi", "" ], [ "Ma", "Mingyuan", "" ], [ "DiValentin", "Louis", "" ], [ "Ding", "Aolin", "" ], [ "Hassanzadeh", "Amin", "" ], [ "Li", "Hai", "" ], [ "Chen", "Yiran", "" ] ]
Federated adversarial training can effectively bring adversarial robustness to privacy-preserving federated learning systems. However, the high demand for memory capacity and computing power makes large-scale federated adversarial training (AT) infeasible on resource-constrained edge devices. Few previous studies in federated adversarial training have tried to tackle both memory and computational constraints simultaneously. In this paper, we propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on heterogeneous resource-constrained edge devices. FADE differentially decouples the entire model into small modules to fit into the resource budget of each device, and each device only needs to perform AT on a single module in each communication round. We also propose an auxiliary weight decay to alleviate objective inconsistency and achieve better accuracy-robustness balance in FADE. FADE offers theoretical guarantees for convergence and adversarial robustness, and our experimental results show that FADE can significantly reduce the consumption of memory and computing power while maintaining accuracy and robustness.
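The expensive inner step of adversarial training that FADE distributes across devices is the adversarial-example generation itself; one FGSM step is sketched below as a representative instance. In FADE a device would run steps like this on its single assigned module only, and the module decoupling and auxiliary weight decay are not captured by this sketch.

```python
import torch
import torch.nn as nn

def fgsm_example(model, x, y, eps=0.03):
    """One FGSM step: perturb inputs in the direction of the loss
    gradient's sign (the inner maximization of adversarial training)."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Linear(10, 2))
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
x_adv = fgsm_example(model, x, y)      # inputs for the AT step
print((x_adv - x).abs().max())         # ~0.03 (= eps)
```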
2407.08649
Juhani Kivim\"aki
Juhani Kivim\"aki, Jakub Bia{\l}ek, Jukka K. Nurminen and Wojtek Kuberski
Confidence-based Estimators for Predictive Performance in Model Monitoring
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
After a machine learning model has been deployed into production, its predictive performance needs to be monitored. Ideally, such monitoring can be carried out by comparing the model's predictions against ground truth labels. For this to be possible, the ground truth labels must be available relatively soon after inference. However, there are many use cases where ground truth labels are available only after a significant delay, or in the worst case, not at all. In such cases, directly monitoring the model's predictive performance is impossible. Recently, novel methods for estimating the predictive performance of a model when ground truth is unavailable have been developed. Many of these methods leverage model confidence or other uncertainty estimates and are experimentally compared against a naive baseline method, namely Average Confidence (AC), which estimates model accuracy as the average of confidence scores for a given set of predictions. However, until now the theoretical properties of the AC method have not been properly explored. In this paper, we try to fill this gap by reviewing the AC method and show that under certain general assumptions, it is an unbiased and consistent estimator of model accuracy with many desirable properties. We also compare this baseline estimator against some more complex estimators empirically and show that in many cases the AC method is able to beat the others, although the comparative quality of the different estimators is heavily case-dependent.
[ { "created": "Thu, 11 Jul 2024 16:28:31 GMT", "version": "v1" } ]
2024-07-12
[ [ "Kivimäki", "Juhani", "" ], [ "Białek", "Jakub", "" ], [ "Nurminen", "Jukka K.", "" ], [ "Kuberski", "Wojtek", "" ] ]
After a machine learning model has been deployed into production, its predictive performance needs to be monitored. Ideally, such monitoring can be carried out by comparing the model's predictions against ground truth labels. For this to be possible, the ground truth labels must be available relatively soon after inference. However, there are many use cases where ground truth labels are available only after a significant delay, or in the worst case, not at all. In such cases, directly monitoring the model's predictive performance is impossible. Recently, novel methods for estimating the predictive performance of a model when ground truth is unavailable have been developed. Many of these methods leverage model confidence or other uncertainty estimates and are experimentally compared against a naive baseline method, namely Average Confidence (AC), which estimates model accuracy as the average of confidence scores for a given set of predictions. However, until now the theoretical properties of the AC method have not been properly explored. In this paper, we try to fill this gap by reviewing the AC method and show that under certain general assumptions, it is an unbiased and consistent estimator of model accuracy with many desirable properties. We also compare this baseline estimator against some more complex estimators empirically and show that in many cases the AC method is able to beat the others, although the comparative quality of the different estimators is heavily case-dependent.
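The AC baseline the paper analyzes is one line of numpy: estimate accuracy as the mean of the per-example confidence scores, i.e. the maximum predicted class probability. The toy probability matrix below is made up.

```python
import numpy as np

def average_confidence(probs):
    """Label-free accuracy estimate: mean of max class probabilities."""
    return probs.max(axis=1).mean()

probs = np.array([[0.9, 0.1],     # predicted class probabilities
                  [0.6, 0.4],
                  [0.2, 0.8]])
print(average_confidence(probs))  # 0.766... estimated accuracy
```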
1512.00547
Nima Namvar
Nima Namvar, Niloofar Bahadori, and Fatemeh Afghah
Context-Aware D2D Peer Selection for Load Distribution in LTE Networks
49th Annual Asilomar Conference on Signals, Systems, and Computers, accepted
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel context-aware approach for resource allocation in device-to-device (D2D) communication networks, which exploits context information about the users' velocity and the size of their demanded data to decide whether their traffic load can be transferred to the D2D tier and which D2D users should be paired. The problem is modeled as a matching game with externalities, and a novel algorithm, which converges to a stable matching between the D2D users, is proposed to solve the game. Simulation results demonstrate the effectiveness of our model in offloading the cellular network's traffic to the D2D tier.
[ { "created": "Wed, 2 Dec 2015 02:39:49 GMT", "version": "v1" } ]
2015-12-04
[ [ "Namvar", "Nima", "" ], [ "Bahadori", "Niloofar", "" ], [ "Afghah", "Fatemeh", "" ] ]
In this paper, we propose a novel context-aware approach for resource allocation in device-to-device (D2D) communication networks, which exploits context information about the users' velocity and the size of their demanded data to decide whether their traffic load can be transferred to the D2D tier and which D2D users should be paired. The problem is modeled as a matching game with externalities, and a novel algorithm, which converges to a stable matching between the D2D users, is proposed to solve the game. Simulation results demonstrate the effectiveness of our model in offloading the cellular network's traffic to the D2D tier.
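The abstract describes a matching game with externalities but gives no algorithmic detail, so the following is only an illustrative sketch: a greedy one-to-one pairing of D2D transmitters to receivers driven by a hypothetical context utility built from user velocity and demanded data size. Both the utility form and the greedy rule are assumptions for illustration, not the paper's algorithm; a true stable matching under externalities would require iterated preference updates that are omitted here:

```python
def context_utility(tx: dict, rx: dict) -> float:
    # Hypothetical context-aware utility: favor peers with a small relative
    # velocity gap (longer-lived links) and larger demanded data (more
    # traffic offloaded). The actual utility used in the paper is not
    # specified in the abstract.
    velocity_gap = abs(tx["velocity"] - rx["velocity"])
    return rx["data_size"] / (1.0 + velocity_gap)

def greedy_pairing(transmitters: list, receivers: list) -> dict:
    """Greedily pair each transmitter with at most one receiver by utility."""
    # Rank all candidate (tx, rx) links from best to worst utility.
    candidates = sorted(
        ((context_utility(t, r), ti, ri)
         for ti, t in enumerate(transmitters)
         for ri, r in enumerate(receivers)),
        reverse=True,
    )
    pairs, taken = {}, set()
    for _, ti, ri in candidates:
        if ti not in pairs and ri not in taken:
            pairs[ti] = ri
            taken.add(ri)
    return pairs

txs = [{"velocity": 1.0, "data_size": 5.0}, {"velocity": 3.0, "data_size": 2.0}]
rxs = [{"velocity": 1.2, "data_size": 4.0}, {"velocity": 2.8, "data_size": 6.0}]
print(greedy_pairing(txs, rxs))  # maps transmitter index -> receiver index
```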
2209.10866
Aleksandar Armacki
Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments in which users obtain data from one of $K$ different distributions. In the proposed setup, the grouping of users (based on the data distributions they sample from), as well as the underlying statistical properties of the distributions, are a priori unknown. A family of One-shot Distributed Clustered Learning methods (ODCL-$\mathcal{C}$) is proposed, parametrized by the set of admissible clustering algorithms $\mathcal{C}$, with the objective of learning the true model at each user. The admissible clustering methods include $K$-means (KM) and convex clustering (CC), giving rise to various one-shot methods within the proposed family, such as ODCL-KM and ODCL-CC. The proposed one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees. In particular, for strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error (MSE) rates in terms of the sample size. An explicit characterization of the threshold is provided in terms of problem parameters. The trade-offs involved in selecting among the clustering methods (ODCL-CC, ODCL-KM) are discussed, and significant improvements over the state-of-the-art are demonstrated. Numerical experiments illustrate the findings and corroborate the performance of the proposed methods.
[ { "created": "Thu, 22 Sep 2022 09:04:10 GMT", "version": "v1" }, { "created": "Mon, 31 Oct 2022 20:47:05 GMT", "version": "v2" }, { "created": "Sun, 29 Jan 2023 21:25:53 GMT", "version": "v3" }, { "created": "Fri, 9 Jun 2023 07:07:51 GMT", "version": "v4" }, { "created": "Sun, 22 Oct 2023 03:09:54 GMT", "version": "v5" } ]
2023-10-24
[ [ "Armacki", "Aleksandar", "" ], [ "Bajovic", "Dragana", "" ], [ "Jakovetic", "Dusan", "" ], [ "Kar", "Soummya", "" ] ]
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments in which users obtain data from one of $K$ different distributions. In the proposed setup, the grouping of users (based on the data distributions they sample from), as well as the underlying statistical properties of the distributions, are a priori unknown. A family of One-shot Distributed Clustered Learning methods (ODCL-$\mathcal{C}$) is proposed, parametrized by the set of admissible clustering algorithms $\mathcal{C}$, with the objective of learning the true model at each user. The admissible clustering methods include $K$-means (KM) and convex clustering (CC), giving rise to various one-shot methods within the proposed family, such as ODCL-KM and ODCL-CC. The proposed one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees. In particular, for strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error (MSE) rates in terms of the sample size. An explicit characterization of the threshold is provided in terms of problem parameters. The trade-offs involved in selecting among the clustering methods (ODCL-CC, ODCL-KM) are discussed, and significant improvements over the state-of-the-art are demonstrated. Numerical experiments illustrate the findings and corroborate the performance of the proposed methods.
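As a rough illustration of the one-shot idea, the sketch below treats each user's local model as a parameter vector, clusters the vectors at the server with $K$-means, and hands each user its cluster centroid, in the spirit of ODCL-KM. It omits the paper's convex-clustering variant, the sample-size threshold condition, and all guarantees; the data dimensions and noise level are toy assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def odcl_km_aggregate(local_models: np.ndarray, k: int) -> np.ndarray:
    """One-shot, ODCL-KM-style aggregation step.

    Each row of `local_models` is a model vector trained locally by one
    user. The server clusters these vectors with K-means and returns, for
    each user, the centroid of its assigned cluster as the aggregated model.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(local_models)
    return km.cluster_centers_[km.labels_]

# Toy setup: 6 users whose data come from one of K=2 true models; each
# user's local estimate is the true model plus small estimation noise.
rng = np.random.default_rng(0)
true_models = np.array([[1.0, 1.0], [-1.0, -1.0]])
local = np.vstack([true_models[i % 2] + 0.1 * rng.standard_normal(2)
                   for i in range(6)])
print(odcl_km_aggregate(local, k=2))  # each row is a user's aggregated model
```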