Schema (column: type, value-length range):

id:             stringlengths, 9 to 10
submitter:      stringlengths, 1 to 64
authors:        stringlengths, 4 to 20.7k
title:          stringlengths, 4 to 246
comments:       stringlengths, 1 to 523
journal-ref:    stringlengths, 4 to 404
doi:            stringlengths, 11 to 153
report-no:      stringlengths, 2 to 254
categories:     stringlengths, 5 to 98
license:        stringclasses, 9 values
orig_abstract:  stringlengths, 14 to 3.35k
versions:       listlengths, 1 to 60
update_date:    stringlengths, 10 to 10
authors_parsed: listlengths, 1 to 1.35k
abstract:       stringlengths, 11 to 3.34k
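The schema above is a Hugging Face datasets-style column listing for arXiv metadata; each record below gives its fields in that order, with `versions` and `authors_parsed` stored as JSON-encoded lists. A minimal sketch of parsing those two nested fields (values copied from the first record below; the variable names are illustrative, not part of any library API):

```python
import json

# Nested fields of the first record below, as they appear in the dump
# (JSON-encoded lists).
versions_raw = (
    '[ { "created": "Sat, 7 Oct 2023 14:02:55 GMT", "version": "v1" },'
    ' { "created": "Tue, 17 Oct 2023 09:05:30 GMT", "version": "v2" } ]'
)
authors_raw = '[ [ "Touzeau", "Valentin", "" ], [ "Reineke", "Jan", "" ] ]'

versions = json.loads(versions_raw)   # list of {"created", "version"} dicts
authors = json.loads(authors_raw)     # list of [last, first, suffix] triples

latest_version = versions[-1]["version"]
full_names = [f"{first} {last}".strip() for last, first, _ in authors]

print(latest_version)   # v2
print(full_names)       # ['Valentin Touzeau', 'Jan Reineke']
```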
2310.04809
Jan Reineke
Valentin Touzeau and Jan Reineke
Leveraging LLVM's ScalarEvolution for Symbolic Data Cache Analysis
Extended version of RTSS 2023 paper including definitions and proofs omitted in the conference version due to space constraints
null
null
null
cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
While instruction cache analysis is essentially a solved problem, data cache analysis is more challenging. In contrast to instruction fetches, the data accesses generated by a memory instruction may vary with the program's inputs and across dynamic occurrences of the same instruction in loops. We observe that the plain control-flow graph (CFG) abstraction employed in classical cache analyses is inadequate to capture the dynamic behavior of memory instructions. On top of plain CFGs, accurate analysis of the underlying program's cache behavior is impossible. Thus, our first contribution is the definition of a more expressive program abstraction coined symbolic control-flow graphs, which can be obtained from LLVM's ScalarEvolution analysis. To exploit this richer abstraction, our main contribution is the development of symbolic data cache analysis, a smooth generalization of classical LRU must analysis from plain to symbolic control-flow graphs. The experimental evaluation demonstrates that symbolic data cache analysis consistently outperforms classical LRU must analysis both in terms of accuracy and analysis runtime.
[ { "created": "Sat, 7 Oct 2023 14:02:55 GMT", "version": "v1" }, { "created": "Tue, 17 Oct 2023 09:05:30 GMT", "version": "v2" } ]
2023-10-18
[ [ "Touzeau", "Valentin", "" ], [ "Reineke", "Jan", "" ] ]
While instruction cache analysis is essentially a solved problem, data cache analysis is more challenging. In contrast to instruction fetches, the data accesses generated by a memory instruction may vary with the program's inputs and across dynamic occurrences of the same instruction in loops. We observe that the plain control-flow graph (CFG) abstraction employed in classical cache analyses is inadequate to capture the dynamic behavior of memory instructions. On top of plain CFGs, accurate analysis of the underlying program's cache behavior is impossible. Thus, our first contribution is the definition of a more expressive program abstraction coined symbolic control-flow graphs, which can be obtained from LLVM's ScalarEvolution analysis. To exploit this richer abstraction, our main contribution is the development of symbolic data cache analysis, a smooth generalization of classical LRU must analysis from plain to symbolic control-flow graphs. The experimental evaluation demonstrates that symbolic data cache analysis consistently outperforms classical LRU must analysis both in terms of accuracy and analysis runtime.
1805.02039
Josue Ortega
Josue Ortega
Integration in Social Networks
null
Physica A: Statistical Mechanics and its Applications; 2019
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose the notion of $k$-integration as a measure of equality of opportunity in social networks. A social network is $k$-integrated if there is a path of length at most $k$ between any two individuals, thus guaranteeing that everybody has the same network opportunities to find a job, a romantic partner, or valuable information. We compute the minimum number of bridges (i.e. edges between nodes belonging to different components) or central nodes (those which are endpoints to a bridge) required to ensure $k$-integration. The answer depends only linearly on the size of each component for $k=2$, and does not depend on the size of each component for $k \geq 3$. Our findings provide a simple and intuitive way to compare the equality of opportunity of real-life social networks.
[ { "created": "Sat, 5 May 2018 10:57:45 GMT", "version": "v1" }, { "created": "Fri, 24 May 2019 11:46:01 GMT", "version": "v2" } ]
2019-05-27
[ [ "Ortega", "Josue", "" ] ]
We propose the notion of $k$-integration as a measure of equality of opportunity in social networks. A social network is $k$-integrated if there is a path of length at most $k$ between any two individuals, thus guaranteeing that everybody has the same network opportunities to find a job, a romantic partner, or valuable information. We compute the minimum number of bridges (i.e. edges between nodes belonging to different components) or central nodes (those which are endpoints to a bridge) required to ensure $k$-integration. The answer depends only linearly on the size of each component for $k=2$, and does not depend on the size of each component for $k \geq 3$. Our findings provide a simple and intuitive way to compare the equality of opportunity of real-life social networks.
1807.04834
Yandong Wen
Yandong Wen, Mahmoud Al Ismail, Bhiksha Raj, Rita Singh
Optimal Strategies for Matching and Retrieval Problems by Comparing Covariates
support material for "Disjoint Mapping Network for Cross-modal Matching of Voices and Faces"
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many retrieval problems, where we must retrieve one or more entries from a gallery in response to a probe, it is common practice to learn to do so by directly comparing the probe and gallery entries to one another. In many situations the gallery and probe have common covariates -- external variables that are common to both. In principle it is possible to perform the retrieval based merely on these covariates. The process, however, is gated by our ability to recognize the covariates for the probe and gallery entries correctly. In this paper we analyze optimal strategies for retrieval based only on matching covariates, when the recognition of the covariates is itself inaccurate. We investigate multiple problems: recovering one item from a gallery of $N$ entries, matching pairs of instances, and retrieval from large collections. We validate our analytical formulae through experiments that confirm their correctness in practical settings.
[ { "created": "Thu, 12 Jul 2018 21:36:07 GMT", "version": "v1" }, { "created": "Mon, 16 Jul 2018 00:53:37 GMT", "version": "v2" } ]
2018-07-17
[ [ "Wen", "Yandong", "" ], [ "Ismail", "Mahmoud Al", "" ], [ "Raj", "Bhiksha", "" ], [ "Singh", "Rita", "" ] ]
In many retrieval problems, where we must retrieve one or more entries from a gallery in response to a probe, it is common practice to learn to do so by directly comparing the probe and gallery entries to one another. In many situations the gallery and probe have common covariates -- external variables that are common to both. In principle it is possible to perform the retrieval based merely on these covariates. The process, however, is gated by our ability to recognize the covariates for the probe and gallery entries correctly. In this paper we analyze optimal strategies for retrieval based only on matching covariates, when the recognition of the covariates is itself inaccurate. We investigate multiple problems: recovering one item from a gallery of $N$ entries, matching pairs of instances, and retrieval from large collections. We validate our analytical formulae through experiments that confirm their correctness in practical settings.
2103.15396
Ziyu Li
Ziyu Li, Yuncong Yao, Zhibin Quan, Wankou Yang, Jin Xie
SIENet: Spatial Information Enhancement Network for 3D Object Detection from Point Cloud
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR-based 3D object detection has an immense influence on autonomous vehicles. Due to the intrinsic properties of LiDAR, fewer points are collected from objects farther away from the sensor. This imbalanced density of point clouds degrades detection accuracy but is generally neglected by previous works. To address this challenge, we propose a novel two-stage 3D object detection framework, named SIENet. Specifically, we design the Spatial Information Enhancement (SIE) module to predict the spatial shapes of the foreground points within proposals, and extract the structure information to learn representative features for further box refinement. The predicted spatial shapes are complete and dense point sets, so the extracted structure information contains more semantic representation. Besides, we design the Hybrid-Paradigm Region Proposal Network (HP-RPN), which includes multiple branches to learn discriminative features and generate accurate proposals for the SIE module. Extensive experiments on the KITTI 3D object detection benchmark show that our elaborately designed SIENet outperforms the state-of-the-art methods by a large margin.
[ { "created": "Mon, 29 Mar 2021 07:45:09 GMT", "version": "v1" }, { "created": "Thu, 1 Apr 2021 02:16:52 GMT", "version": "v2" } ]
2021-04-02
[ [ "Li", "Ziyu", "" ], [ "Yao", "Yuncong", "" ], [ "Quan", "Zhibin", "" ], [ "Yang", "Wankou", "" ], [ "Xie", "Jin", "" ] ]
LiDAR-based 3D object detection has an immense influence on autonomous vehicles. Due to the intrinsic properties of LiDAR, fewer points are collected from objects farther away from the sensor. This imbalanced density of point clouds degrades detection accuracy but is generally neglected by previous works. To address this challenge, we propose a novel two-stage 3D object detection framework, named SIENet. Specifically, we design the Spatial Information Enhancement (SIE) module to predict the spatial shapes of the foreground points within proposals, and extract the structure information to learn representative features for further box refinement. The predicted spatial shapes are complete and dense point sets, so the extracted structure information contains more semantic representation. Besides, we design the Hybrid-Paradigm Region Proposal Network (HP-RPN), which includes multiple branches to learn discriminative features and generate accurate proposals for the SIE module. Extensive experiments on the KITTI 3D object detection benchmark show that our elaborately designed SIENet outperforms the state-of-the-art methods by a large margin.
1807.06107
Amita Misra
Mansurul Bhuiyan, Amita Misra, Saurabh Tripathy, Jalal Mahmud, Rama Akkiraju
Don't get Lost in Negation: An Effective Negation Handled Dialogue Acts Prediction Algorithm for Twitter Customer Service Conversations
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last several years, Twitter has been adopted by companies as an alternative platform for interacting with customers to address their concerns. With the abundance of such unconventional conversation resources, the push for developing effective virtual agents is stronger than ever. To address this challenge, a better understanding of such customer service conversations is required. Lately, several works have proposed novel taxonomies of fine-grained dialogue acts and developed algorithms for automatic detection of these acts. The outcomes of these works provide stepping stones toward the ultimate goal of building efficient and effective virtual agents. But none of these works incorporates negation handling into the proposed algorithms. In this work, we developed an SVM-based dialogue act prediction algorithm for Twitter customer service conversations in which negation handling is an integral part of the end-to-end solution. For negation handling, we propose several efficient heuristics and also adopt recent state-of-the-art third-party machine learning based solutions. Empirically, we show the model's performance gain when handling negation compared to when we do not. Our experiments show that for informal text such as tweets, the heuristic-based approach is more effective.
[ { "created": "Mon, 16 Jul 2018 21:01:52 GMT", "version": "v1" } ]
2018-07-18
[ [ "Bhuiyan", "Mansurul", "" ], [ "Misra", "Amita", "" ], [ "Tripathy", "Saurabh", "" ], [ "Mahmud", "Jalal", "" ], [ "Akkiraju", "Rama", "" ] ]
In the last several years, Twitter has been adopted by companies as an alternative platform for interacting with customers to address their concerns. With the abundance of such unconventional conversation resources, the push for developing effective virtual agents is stronger than ever. To address this challenge, a better understanding of such customer service conversations is required. Lately, several works have proposed novel taxonomies of fine-grained dialogue acts and developed algorithms for automatic detection of these acts. The outcomes of these works provide stepping stones toward the ultimate goal of building efficient and effective virtual agents. But none of these works incorporates negation handling into the proposed algorithms. In this work, we developed an SVM-based dialogue act prediction algorithm for Twitter customer service conversations in which negation handling is an integral part of the end-to-end solution. For negation handling, we propose several efficient heuristics and also adopt recent state-of-the-art third-party machine learning based solutions. Empirically, we show the model's performance gain when handling negation compared to when we do not. Our experiments show that for informal text such as tweets, the heuristic-based approach is more effective.
2010.10556
Peidong Wang
Peidong Wang, Zhuo Chen, DeLiang Wang, Jinyu Li, Yifan Gong
Speaker Separation Using Speaker Inventories and Estimated Speech
null
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose speaker separation using speaker inventories and estimated speech (SSUSIES), a framework leveraging speaker profiles and estimated speech for speaker separation. SSUSIES contains two methods, speaker separation using speaker inventories (SSUSI) and speaker separation using estimated speech (SSUES). SSUSI performs speaker separation with the help of speaker inventory. By combining the advantages of permutation invariant training (PIT) and speech extraction, SSUSI significantly outperforms conventional approaches. SSUES is a widely applicable technique that can substantially improve speaker separation performance using the output of first-pass separation. We evaluate the models on both speaker separation and speech recognition metrics.
[ { "created": "Tue, 20 Oct 2020 18:15:45 GMT", "version": "v1" } ]
2020-10-22
[ [ "Wang", "Peidong", "" ], [ "Chen", "Zhuo", "" ], [ "Wang", "DeLiang", "" ], [ "Li", "Jinyu", "" ], [ "Gong", "Yifan", "" ] ]
We propose speaker separation using speaker inventories and estimated speech (SSUSIES), a framework leveraging speaker profiles and estimated speech for speaker separation. SSUSIES contains two methods, speaker separation using speaker inventories (SSUSI) and speaker separation using estimated speech (SSUES). SSUSI performs speaker separation with the help of speaker inventory. By combining the advantages of permutation invariant training (PIT) and speech extraction, SSUSI significantly outperforms conventional approaches. SSUES is a widely applicable technique that can substantially improve speaker separation performance using the output of first-pass separation. We evaluate the models on both speaker separation and speech recognition metrics.
2206.14532
Keshigeyan Chandrasegaran
Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Yunqing Zhao, Ngai-Man Cheung
Revisiting Label Smoothing and Knowledge Distillation Compatibility: What was Missing?
ICML 2022; 27 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work investigates the compatibility between label smoothing (LS) and knowledge distillation (KD). Contemporary findings addressing this thesis statement take dichotomous standpoints: Muller et al. (2019) and Shen et al. (2021b). Critically, there is no effort to understand and resolve these contradictory findings, leaving the primal question -- to smooth or not to smooth a teacher network? -- unanswered. The main contributions of our work are the discovery, analysis and validation of systematic diffusion as the missing concept which is instrumental in understanding and resolving these contradictory findings. This systematic diffusion essentially curtails the benefits of distilling from an LS-trained teacher, thereby rendering KD at increased temperatures ineffective. Our discovery is comprehensively supported by large-scale experiments, analyses and case studies including image classification, neural machine translation and compact student distillation tasks spanning across multiple datasets and teacher-student architectures. Based on our analysis, we suggest practitioners to use an LS-trained teacher with a low-temperature transfer to achieve high performance students. Code and models are available at https://keshik6.github.io/revisiting-ls-kd-compatibility/
[ { "created": "Wed, 29 Jun 2022 11:00:44 GMT", "version": "v1" } ]
2022-06-30
[ [ "Chandrasegaran", "Keshigeyan", "" ], [ "Tran", "Ngoc-Trung", "" ], [ "Zhao", "Yunqing", "" ], [ "Cheung", "Ngai-Man", "" ] ]
This work investigates the compatibility between label smoothing (LS) and knowledge distillation (KD). Contemporary findings addressing this thesis statement take dichotomous standpoints: Muller et al. (2019) and Shen et al. (2021b). Critically, there is no effort to understand and resolve these contradictory findings, leaving the primal question -- to smooth or not to smooth a teacher network? -- unanswered. The main contributions of our work are the discovery, analysis and validation of systematic diffusion as the missing concept which is instrumental in understanding and resolving these contradictory findings. This systematic diffusion essentially curtails the benefits of distilling from an LS-trained teacher, thereby rendering KD at increased temperatures ineffective. Our discovery is comprehensively supported by large-scale experiments, analyses and case studies including image classification, neural machine translation and compact student distillation tasks spanning across multiple datasets and teacher-student architectures. Based on our analysis, we suggest practitioners to use an LS-trained teacher with a low-temperature transfer to achieve high performance students. Code and models are available at https://keshik6.github.io/revisiting-ls-kd-compatibility/
2211.15751
Renjie Xu
Renjie Xu, Saiedeh Razavi and Rong Zheng
Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques
Accepted in IEEE Communications Surveys and Tutorials, 2023
null
10.1109/COMST.2023.3232091
null
cs.NI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. The basic concepts of EVA (e.g., definition, architectures) were not fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. EVA systems and their enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
[ { "created": "Mon, 28 Nov 2022 20:11:37 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 18:21:35 GMT", "version": "v2" }, { "created": "Wed, 11 Oct 2023 15:13:02 GMT", "version": "v3" } ]
2023-10-12
[ [ "Xu", "Renjie", "" ], [ "Razavi", "Saiedeh", "" ], [ "Zheng", "Rong", "" ] ]
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. The basic concepts of EVA (e.g., definition, architectures) were not fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. EVA systems and their enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
2109.01538
Somenath Chakraborty
Somenath Chakraborty, Beddhu Murali
Investigate the Correlation of Breast Cancer Dataset using Different Clustering Technique
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The objective of this paper is to explore ways to analyze a breast cancer dataset in the context of unsupervised learning, without a previously trained model. The paper investigates different clustering techniques as well as preprocessing. This in-depth analysis builds a footprint which can further be used for designing a robust and accurate medical prognosis system. This paper also emphasizes correlations of data points with different standard benchmark techniques. Keywords: Breast cancer dataset, Clustering Technique, Hopkins Statistic, K-means Clustering, k-medoids or partitioning around medoids (PAM)
[ { "created": "Fri, 3 Sep 2021 14:02:17 GMT", "version": "v1" } ]
2021-09-06
[ [ "Chakraborty", "Somenath", "" ], [ "Murali", "Beddhu", "" ] ]
The objective of this paper is to explore ways to analyze a breast cancer dataset in the context of unsupervised learning, without a previously trained model. The paper investigates different clustering techniques as well as preprocessing. This in-depth analysis builds a footprint which can further be used for designing a robust and accurate medical prognosis system. This paper also emphasizes correlations of data points with different standard benchmark techniques. Keywords: Breast cancer dataset, Clustering Technique, Hopkins Statistic, K-means Clustering, k-medoids or partitioning around medoids (PAM)
2302.10900
Liang Qu
Liang Qu, Ningzhi Tang, Ruiqi Zheng, Quoc Viet Hung Nguyen, Zi Huang, Yuhui Shi, Hongzhi Yin
Semi-decentralized Federated Ego Graph Learning for Recommendation
null
null
null
null
cs.LG cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative filtering (CF) based recommender systems are typically trained on personal interaction data (e.g., clicks and purchases) that can be naturally represented as ego graphs. However, most existing recommendation methods collect these ego graphs from all users to compose a global graph to obtain high-order collaborative information between users and items, and these centralized CF recommendation methods inevitably lead to a high risk of user privacy leakage. Although recently proposed federated recommendation systems can mitigate the privacy problem, they either restrict on-device local training to an isolated ego graph or rely on an additional third-party server to access other ego graphs, resulting in a cumbersome pipeline that is hard to deploy in practice. In addition, existing federated recommendation systems require resource-limited devices to maintain entire embedding tables, resulting in high communication costs. In light of this, we propose a semi-decentralized federated ego graph learning framework for on-device recommendations, named SemiDFEGL. It introduces new device-to-device collaborations to improve scalability and reduce communication costs, and it innovatively utilizes predicted interacted item nodes to connect isolated ego graphs and augment local subgraphs, such that high-order user-item collaborative information can be used in a privacy-preserving manner. Furthermore, the proposed framework is model-agnostic, meaning that it can be seamlessly integrated with existing graph neural network-based recommendation methods and privacy protection techniques. To validate the effectiveness of SemiDFEGL, extensive experiments are conducted on three public datasets, and the results demonstrate its superiority over other federated recommendation methods.
[ { "created": "Fri, 10 Feb 2023 03:57:45 GMT", "version": "v1" } ]
2023-02-23
[ [ "Qu", "Liang", "" ], [ "Tang", "Ningzhi", "" ], [ "Zheng", "Ruiqi", "" ], [ "Nguyen", "Quoc Viet Hung", "" ], [ "Huang", "Zi", "" ], [ "Shi", "Yuhui", "" ], [ "Yin", "Hongzhi", "" ] ]
Collaborative filtering (CF) based recommender systems are typically trained on personal interaction data (e.g., clicks and purchases) that can be naturally represented as ego graphs. However, most existing recommendation methods collect these ego graphs from all users to compose a global graph to obtain high-order collaborative information between users and items, and these centralized CF recommendation methods inevitably lead to a high risk of user privacy leakage. Although recently proposed federated recommendation systems can mitigate the privacy problem, they either restrict on-device local training to an isolated ego graph or rely on an additional third-party server to access other ego graphs, resulting in a cumbersome pipeline that is hard to deploy in practice. In addition, existing federated recommendation systems require resource-limited devices to maintain entire embedding tables, resulting in high communication costs. In light of this, we propose a semi-decentralized federated ego graph learning framework for on-device recommendations, named SemiDFEGL. It introduces new device-to-device collaborations to improve scalability and reduce communication costs, and it innovatively utilizes predicted interacted item nodes to connect isolated ego graphs and augment local subgraphs, such that high-order user-item collaborative information can be used in a privacy-preserving manner. Furthermore, the proposed framework is model-agnostic, meaning that it can be seamlessly integrated with existing graph neural network-based recommendation methods and privacy protection techniques. To validate the effectiveness of SemiDFEGL, extensive experiments are conducted on three public datasets, and the results demonstrate its superiority over other federated recommendation methods.
2401.04820
Furkan \c{C}olhak
Furkan \c{C}olhak, Mert \.Ilhan Ecevit, Bilal Emir U\c{c}ar, Reiner Creutzburg, Hasan Da\u{g}
Phishing Website Detection through Multi-Model Analysis of HTML Content
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The way we communicate and work has changed significantly with the rise of the Internet. While it has opened up new opportunities, it has also brought about an increase in cyber threats. One common and serious threat is phishing, where cybercriminals employ deceptive methods to steal sensitive information. This study addresses the pressing issue of phishing by introducing an advanced detection model that focuses on HTML content. Our proposed approach integrates a specialized Multi-Layer Perceptron (MLP) model for structured tabular data and two pretrained Natural Language Processing (NLP) models for analyzing textual features such as page titles and content. The embeddings from these models are combined through a novel fusion process. The resulting fused embeddings are then input into a linear classifier. Recognizing the scarcity of recent datasets for comprehensive phishing research, our contribution extends to the creation of an up-to-date dataset, which we openly share with the community. The dataset is carefully curated to reflect real-life phishing conditions, ensuring relevance and applicability. The research findings highlight the effectiveness of the proposed approach, with the CANINE model demonstrating superior performance in analyzing page titles and the RoBERTa model excelling in evaluating page content. The fusion of two NLP models and one MLP model, termed MultiText-LP, achieves impressive results, yielding a 96.80 F1 score and 97.18 accuracy on our research dataset. Furthermore, our approach outperforms existing methods on the CatchPhish HTML dataset, showcasing its efficacy.
[ { "created": "Tue, 9 Jan 2024 21:08:13 GMT", "version": "v1" }, { "created": "Sun, 10 Mar 2024 11:13:32 GMT", "version": "v2" }, { "created": "Wed, 10 Jul 2024 10:47:07 GMT", "version": "v3" } ]
2024-07-11
[ [ "Çolhak", "Furkan", "" ], [ "Ecevit", "Mert İlhan", "" ], [ "Uçar", "Bilal Emir", "" ], [ "Creutzburg", "Reiner", "" ], [ "Dağ", "Hasan", "" ] ]
The way we communicate and work has changed significantly with the rise of the Internet. While it has opened up new opportunities, it has also brought about an increase in cyber threats. One common and serious threat is phishing, where cybercriminals employ deceptive methods to steal sensitive information. This study addresses the pressing issue of phishing by introducing an advanced detection model that focuses on HTML content. Our proposed approach integrates a specialized Multi-Layer Perceptron (MLP) model for structured tabular data and two pretrained Natural Language Processing (NLP) models for analyzing textual features such as page titles and content. The embeddings from these models are combined through a novel fusion process. The resulting fused embeddings are then input into a linear classifier. Recognizing the scarcity of recent datasets for comprehensive phishing research, our contribution extends to the creation of an up-to-date dataset, which we openly share with the community. The dataset is carefully curated to reflect real-life phishing conditions, ensuring relevance and applicability. The research findings highlight the effectiveness of the proposed approach, with the CANINE model demonstrating superior performance in analyzing page titles and the RoBERTa model excelling in evaluating page content. The fusion of two NLP models and one MLP model, termed MultiText-LP, achieves impressive results, yielding a 96.80 F1 score and 97.18 accuracy on our research dataset. Furthermore, our approach outperforms existing methods on the CatchPhish HTML dataset, showcasing its efficacy.
1411.2405
Zahra Sabet Sarvestani
Zahra Sabetsarvestani, Hamidreza Amindavar
Sparse Estimation with Generalized Beta Mixture and the Horseshoe Prior
null
null
null
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the use of the Generalized Beta Mixture (GBM) and Horseshoe distributions as priors in the Bayesian Compressive Sensing framework is proposed. The distributions are considered in a two-layer hierarchical model, making the corresponding inference problem amenable to Expectation Maximization (EM). We present an explicit, algebraic EM-update rule for the models, yielding two fast and experimentally validated algorithms for signal recovery. Experimental results show that our algorithms outperform state-of-the-art methods on a wide range of sparsity levels and amplitudes in terms of reconstruction accuracy, convergence rate and sparsity. The largest improvement can be observed for sparse signals with high amplitudes.
[ { "created": "Mon, 10 Nov 2014 13:01:10 GMT", "version": "v1" } ]
2014-11-11
[ [ "Sabetsarvestani", "Zahra", "" ], [ "Amindavar", "Hamidreza", "" ] ]
In this paper, the use of the Generalized Beta Mixture (GBM) and Horseshoe distributions as priors in the Bayesian Compressive Sensing framework is proposed. The distributions are considered in a two-layer hierarchical model, making the corresponding inference problem amenable to Expectation Maximization (EM). We present an explicit, algebraic EM-update rule for the models, yielding two fast and experimentally validated algorithms for signal recovery. Experimental results show that our algorithms outperform state-of-the-art methods on a wide range of sparsity levels and amplitudes in terms of reconstruction accuracy, convergence rate and sparsity. The largest improvement can be observed for sparse signals with high amplitudes.
1202.4128
Nadeem Javaid
N. Javaid, A. Bibi, Z. A. Khan, U. Khan, K. Djouani
Evaluating Wireless Proactive Routing Protocols under Scalability and Traffic Constraints
48th ICC-AHSN, 2012; Ottawa Canada
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we evaluate and analyze the impact of different network loads and varying numbers of nodes on distance vector and link state routing algorithms. We select three well-known proactive protocols: Destination Sequenced Distance Vector (DSDV) operates on distance vector routing, while Fisheye State Routing (FSR) and Optimized Link State Routing (OLSR) are based on link state routing. Further, we evaluate and compare the effects on protocol performance of changing the routing strategies of the routing algorithms. We also enhance the selected protocols to achieve high performance. We take throughput, End-to-End Delay (E2ED) and Normalized Routing Load (NRL) as performance metrics for the evaluation and comparison of the chosen protocols, both in their default and enhanced versions. Based upon extensive simulations in NS-2, we compare and discuss the performance trade-offs of the protocols, i.e., how a protocol achieves high packet delivery by paying some cost in the form of increased E2ED and/or routing overhead. Due to its scoped routing technique, FSR performs well at high data rates, while OLSR is more scalable in denser networks due to limited retransmissions through Multi-Point Relays (MPRs).
[ { "created": "Sun, 19 Feb 2012 06:43:23 GMT", "version": "v1" } ]
2012-02-21
[ [ "Javaid", "N.", "" ], [ "Bibi", "A.", "" ], [ "Khan", "Z. A.", "" ], [ "Khan", "U.", "" ], [ "Djouani", "K.", "" ] ]
In this paper, we evaluate and analyze the impact of different network loads and varying numbers of nodes on distance vector and link state routing algorithms. We select three well-known proactive protocols: Destination Sequenced Distance Vector (DSDV) operates on distance vector routing, while Fisheye State Routing (FSR) and Optimized Link State Routing (OLSR) are based on link state routing. Further, we evaluate and compare the effects on protocol performance of changing the routing strategies of the routing algorithms. We also enhance the selected protocols to achieve high performance. We take throughput, End-to-End Delay (E2ED) and Normalized Routing Load (NRL) as performance metrics for the evaluation and comparison of the chosen protocols, both in their default and enhanced versions. Based upon extensive simulations in NS-2, we compare and discuss the performance trade-offs of the protocols, i.e., how a protocol achieves high packet delivery by paying some cost in the form of increased E2ED and/or routing overhead. Due to its scoped routing technique, FSR performs well at high data rates, while OLSR is more scalable in denser networks due to limited retransmissions through Multi-Point Relays (MPRs).
2209.08827
Damien Hansen
Damien Hansen (CIRTI, GETALP), Pierre-Yves Houlmont (CIRTI)
A Snapshot into the Possibility of Video Game Machine Translation
null
The 15th Conference of the Association for Machine Translation in the Americas, AMTA, Sep 2022, Orlando (FL), United States. pp.257-269
null
null
cs.CL cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present in this article what we believe to be one of the first attempts at video game machine translation. Our study shows that models trained only with limited in-domain data surpass publicly available systems by a significant margin, and a subsequent human evaluation reveals interesting findings in the final translation. The first part of the article introduces some of the challenges of video game translation, some of the existing literature, as well as the systems and data sets used in this experiment. The last sections discuss our analysis of the resulting translation and the potential benefits of such an automated system. One such finding highlights the model's ability to learn typical rules and patterns of video game translations from English into French. Our conclusions therefore indicate that the specific case of video game machine translation could prove very useful given the encouraging results, the highly repetitive nature of the work, and the often poor working conditions that translators face in this field. As with other use cases of MT in cultural sectors, however, we believe this is heavily dependent on the proper implementation of the tool, which should be used interactively by human translators to stimulate creativity instead of raw post-editing for the sake of productivity.
[ { "created": "Mon, 19 Sep 2022 08:16:59 GMT", "version": "v1" } ]
2022-09-20
[ [ "Hansen", "Damien", "", "CIRTI, GETALP" ], [ "Houlmont", "Pierre-Yves", "", "CIRTI" ] ]
We present in this article what we believe to be one of the first attempts at video game machine translation. Our study shows that models trained only with limited in-domain data surpass publicly available systems by a significant margin, and a subsequent human evaluation reveals interesting findings in the final translation. The first part of the article introduces some of the challenges of video game translation, some of the existing literature, as well as the systems and data sets used in this experiment. The last sections discuss our analysis of the resulting translation and the potential benefits of such an automated system. One such finding highlights the model's ability to learn typical rules and patterns of video game translations from English into French. Our conclusions therefore indicate that the specific case of video game machine translation could prove very useful given the encouraging results, the highly repetitive nature of the work, and the often poor working conditions that translators face in this field. As with other use cases of MT in cultural sectors, however, we believe this is heavily dependent on the proper implementation of the tool, which should be used interactively by human translators to stimulate creativity instead of raw post-editing for the sake of productivity.
2404.09687
Leon Schiller
Johannes Lengler, Leon Schiller, Oliver Sieberling
Plus Strategies are Exponentially Slower for Planted Optima of Random Height
null
null
10.1145/3638529.3654088
null
cs.NE cs.AI math.PR
http://creativecommons.org/licenses/by/4.0/
We compare the $(1,\lambda)$-EA and the $(1 + \lambda)$-EA on the recently introduced benchmark DisOM, which is the OneMax function with randomly planted local optima. Previous work showed that if all local optima have the same relative height, then the plus strategy never loses more than a factor $O(n\log n)$ compared to the comma strategy. Here we show that even small random fluctuations in the heights of the local optima have a devastating effect for the plus strategy and lead to super-polynomial runtimes. On the other hand, due to their ability to escape local optima, comma strategies are unaffected by the height of the local optima and remain efficient. Our results hold for a broad class of possible distortions and show that the plus strategy, but not the comma strategy, is generally deceived by sparse unstructured fluctuations of a smooth landscape.
[ { "created": "Mon, 15 Apr 2024 11:37:47 GMT", "version": "v1" } ]
2024-04-16
[ [ "Lengler", "Johannes", "" ], [ "Schiller", "Leon", "" ], [ "Sieberling", "Oliver", "" ] ]
We compare the $(1,\lambda)$-EA and the $(1 + \lambda)$-EA on the recently introduced benchmark DisOM, which is the OneMax function with randomly planted local optima. Previous work showed that if all local optima have the same relative height, then the plus strategy never loses more than a factor $O(n\log n)$ compared to the comma strategy. Here we show that even small random fluctuations in the heights of the local optima have a devastating effect for the plus strategy and lead to super-polynomial runtimes. On the other hand, due to their ability to escape local optima, comma strategies are unaffected by the height of the local optima and remain efficient. Our results hold for a broad class of possible distortions and show that the plus strategy, but not the comma strategy, is generally deceived by sparse unstructured fluctuations of a smooth landscape.
2207.14381
Xing Nie
Xing Nie, Bolin Ni, Jianlong Chang, Gaomeng Meng, Chunlei Huo, Zhaoxiang Zhang, Shiming Xiang, Qi Tian, Chunhong Pan
Pro-tuning: Unified Prompt Tuning for Vision Tasks
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In computer vision, fine-tuning is the de-facto approach to leverage pre-trained vision models to perform downstream tasks. However, deploying it in practice is quite challenging, as it adopts a parameter-inefficient global update and relies heavily on high-quality downstream data. Recently, prompt-based learning, which adds a task-relevant prompt to adapt downstream tasks to pre-trained models, has drastically boosted the performance of many natural language downstream tasks. In this work, we extend this notable transfer ability afforded by prompts to vision models as an alternative to fine-tuning. To this end, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks. The key to Pro-tuning is prompt-based tuning, i.e., learning task-specific vision prompts for downstream input images with the pre-trained model frozen. By only training a few additional parameters, it can work on diverse CNN-based and Transformer-based architectures. Extensive experiments show that Pro-tuning outperforms fine-tuning in a broad range of vision tasks and scenarios, including image classification (generic objects, class imbalance, image corruption, adversarial robustness, and out-of-distribution generalization), and dense prediction tasks such as object detection and semantic segmentation.
[ { "created": "Thu, 28 Jul 2022 21:09:31 GMT", "version": "v1" }, { "created": "Sun, 14 Aug 2022 18:16:35 GMT", "version": "v2" }, { "created": "Tue, 23 Aug 2022 03:39:05 GMT", "version": "v3" } ]
2022-08-24
[ [ "Nie", "Xing", "" ], [ "Ni", "Bolin", "" ], [ "Chang", "Jianlong", "" ], [ "Meng", "Gaomeng", "" ], [ "Huo", "Chunlei", "" ], [ "Zhang", "Zhaoxiang", "" ], [ "Xiang", "Shiming", "" ], [ "Tian", "Qi", "" ], [ "Pan", "Chunhong", "" ] ]
In computer vision, fine-tuning is the de-facto approach to leverage pre-trained vision models to perform downstream tasks. However, deploying it in practice is quite challenging, as it adopts a parameter-inefficient global update and relies heavily on high-quality downstream data. Recently, prompt-based learning, which adds a task-relevant prompt to adapt downstream tasks to pre-trained models, has drastically boosted the performance of many natural language downstream tasks. In this work, we extend this notable transfer ability afforded by prompts to vision models as an alternative to fine-tuning. To this end, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks. The key to Pro-tuning is prompt-based tuning, i.e., learning task-specific vision prompts for downstream input images with the pre-trained model frozen. By only training a few additional parameters, it can work on diverse CNN-based and Transformer-based architectures. Extensive experiments show that Pro-tuning outperforms fine-tuning in a broad range of vision tasks and scenarios, including image classification (generic objects, class imbalance, image corruption, adversarial robustness, and out-of-distribution generalization), and dense prediction tasks such as object detection and semantic segmentation.
2303.15762
Shlomi Steinberg
Shlomi Steinberg, Ravi Ramamoorthi, Benedikt Bitterli, Eugene d'Eon, Ling-Qi Yan, Matt Pharr
A Generalized Ray Formulation For Wave-Optics Rendering
For additional information, see https://ssteinberg.xyz/2023/03/27/rtplt/
null
null
null
cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Under ray-optical light transport, the classical ray serves as a linear and local "point query" of light's behaviour. Linearity and locality are crucial to the formulation of sophisticated path tracing and sampling techniques that enable efficient solutions to light transport problems in complex, real-world settings and environments. However, such formulations are firmly confined to the realm of ray optics, while many applications of interest -- in computer graphics and computational optics -- demand a more precise understanding of light: as waves. We rigorously formulate the generalized ray, which enables linear and weakly-local queries of arbitrary wave-optical distributions of light. Generalized rays arise from photodetection states, and therefore allow performing backward (sensor-to-source) wave-optical light transport. Our formulations are accurate and highly general: they facilitate the application of modern path tracing techniques for wave-optical rendering, with light of any state of coherence and any spectral properties. We improve upon the state-of-the-art in terms of the generality and accuracy of the formalism, ease of application, as well as performance. As a consequence, we are able to render large, complex scenes, as in Fig. 1, and even do interactive wave-optical light transport, none of which is possible with any existing method. We numerically validate our formalism, and make a connection to partially-coherent light transport.
[ { "created": "Tue, 28 Mar 2023 06:42:52 GMT", "version": "v1" }, { "created": "Sun, 7 Jan 2024 22:05:44 GMT", "version": "v2" } ]
2024-01-09
[ [ "Steinberg", "Shlomi", "" ], [ "Ramamoorthi", "Ravi", "" ], [ "Bitterli", "Benedikt", "" ], [ "d'Eon", "Eugene", "" ], [ "Yan", "Ling-Qi", "" ], [ "Pharr", "Matt", "" ] ]
Under ray-optical light transport, the classical ray serves as a linear and local "point query" of light's behaviour. Linearity and locality are crucial to the formulation of sophisticated path tracing and sampling techniques that enable efficient solutions to light transport problems in complex, real-world settings and environments. However, such formulations are firmly confined to the realm of ray optics, while many applications of interest -- in computer graphics and computational optics -- demand a more precise understanding of light: as waves. We rigorously formulate the generalized ray, which enables linear and weakly-local queries of arbitrary wave-optical distributions of light. Generalized rays arise from photodetection states, and therefore allow performing backward (sensor-to-source) wave-optical light transport. Our formulations are accurate and highly general: they facilitate the application of modern path tracing techniques for wave-optical rendering, with light of any state of coherence and any spectral properties. We improve upon the state-of-the-art in terms of the generality and accuracy of the formalism, ease of application, as well as performance. As a consequence, we are able to render large, complex scenes, as in Fig. 1, and even do interactive wave-optical light transport, none of which is possible with any existing method. We numerically validate our formalism, and make a connection to partially-coherent light transport.
2406.09423
Yuxiao Li
Yuxiao Li, Xin Liang, Bei Wang, Yongfeng Qiu, Lin Yan, Hanqi Guo
MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors
null
null
null
null
cs.DC cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral lines alternately until convergence within finitely many iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving significant acceleration with an NVIDIA A100 GPU.
[ { "created": "Fri, 5 Apr 2024 20:32:51 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2024 04:42:10 GMT", "version": "v2" } ]
2024-07-08
[ [ "Li", "Yuxiao", "" ], [ "Liang", "Xin", "" ], [ "Wang", "Bei", "" ], [ "Qiu", "Yongfeng", "" ], [ "Yan", "Lin", "" ], [ "Guo", "Hanqi", "" ] ]
This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits during compression time; the edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we developed a workflow to fix extrema and integral lines alternately until convergence within finitely many iterations; we accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving significant acceleration with an NVIDIA A100 GPU.
2006.06217
Abhishek Gupta
Abhishek Gupta (1 and 2), Camylle Lanteigne (1 and 3), and Sara Kingsley (4) ((1) Montreal AI Ethics Institute, (2) Microsoft, (3) McGill University, (4) Carnegie Mellon University)
SECure: A Social and Environmental Certificate for AI Systems
Accepted for presentation at the Canadian Society for Ecological Economics 2020 Research Symposium, Tracing the Veins 2020, ICML 2020 Deploying and Monitoring Machine Learning Systems workshop
null
null
null
cs.CY cs.AI cs.LG econ.GN q-fin.EC
http://creativecommons.org/licenses/by/4.0/
In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate. Compute-efficient machine learning is the use of compressed network architectures that show marginal decreases in accuracy. Federated learning augments the first pillar's impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchase with their values.
[ { "created": "Thu, 11 Jun 2020 06:10:46 GMT", "version": "v1" }, { "created": "Sun, 19 Jul 2020 12:39:45 GMT", "version": "v2" } ]
2020-07-21
[ [ "Gupta", "Abhishek", "", "1 and 2" ], [ "Lanteigne", "Camylle", "", "1 and 3" ], [ "Kingsley", "Sara", "" ] ]
In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate. Compute-efficient machine learning is the use of compressed network architectures that show marginal decreases in accuracy. Federated learning augments the first pillar's impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchase with their values.
2306.11572
Hyunsoo Yang
Jia Si, Shuhan Yang, Yunuo Cen, Jiaer Chen, Zhaoyang Yao, Dong-Jun Kim, Kaiming Cai, Jerald Yoo, Xuanyao Fong, Hyunsoo Yang
Energy-efficient superparamagnetic Ising machine and its application to traveling salesman problems
5 figures
null
null
null
cs.ET cond-mat.other physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of artificial intelligence and IoT has created a significant computational load for solving non-deterministic polynomial-time (NP)-hard problems, which are difficult to solve using conventional computers. The Ising computer, based on the Ising model and annealing process, has been highly sought for finding approximate solutions to NP-hard problems by observing the convergence of dynamic spin states. However, it faces several challenges, including high power consumption due to artificial spins and randomness emulated by complex circuits, as well as low scalability caused by the rapidly growing connectivity when considering large-scale problems. Here, we present an experimental Ising annealing computer based on superparamagnetic tunnel junctions (SMTJs) with all-to-all connections, which successfully solves a 70-city travelling salesman problem (4761-node Ising problem). By taking advantage of the intrinsic randomness of SMTJs, implementing a proper global annealing scheme, and using an efficient algorithm, our SMTJ-based Ising annealer shows superior performance in terms of power consumption and energy efficiency compared to other Ising schemes. Additionally, our approach provides a promising way to solve complex problems with limited hardware resources. Moreover, we propose a crossbar array architecture for scalable integration using conventional magnetic random access memories. Our results demonstrate that the SMTJ-based Ising annealing computer with high energy efficiency, speed, and scalability is a strong candidate for future unconventional computing schemes.
[ { "created": "Tue, 20 Jun 2023 14:40:34 GMT", "version": "v1" } ]
2023-06-21
[ [ "Si", "Jia", "" ], [ "Yang", "Shuhan", "" ], [ "Cen", "Yunuo", "" ], [ "Chen", "Jiaer", "" ], [ "Yao", "Zhaoyang", "" ], [ "Kim", "Dong-Jun", "" ], [ "Cai", "Kaiming", "" ], [ "Yoo", "Jerald", "" ], [ "Fong", "Xuanyao", "" ], [ "Yang", "Hyunsoo", "" ] ]
The growth of artificial intelligence and IoT has created a significant computational load for solving non-deterministic polynomial-time (NP)-hard problems, which are difficult to solve using conventional computers. The Ising computer, based on the Ising model and annealing process, has been highly sought for finding approximate solutions to NP-hard problems by observing the convergence of dynamic spin states. However, it faces several challenges, including high power consumption due to artificial spins and randomness emulated by complex circuits, as well as low scalability caused by the rapidly growing connectivity when considering large-scale problems. Here, we present an experimental Ising annealing computer based on superparamagnetic tunnel junctions (SMTJs) with all-to-all connections, which successfully solves a 70-city travelling salesman problem (4761-node Ising problem). By taking advantage of the intrinsic randomness of SMTJs, implementing a proper global annealing scheme, and using an efficient algorithm, our SMTJ-based Ising annealer shows superior performance in terms of power consumption and energy efficiency compared to other Ising schemes. Additionally, our approach provides a promising way to solve complex problems with limited hardware resources. Moreover, we propose a crossbar array architecture for scalable integration using conventional magnetic random access memories. Our results demonstrate that the SMTJ-based Ising annealing computer with high energy efficiency, speed, and scalability is a strong candidate for future unconventional computing schemes.
2408.00920
Binchi Zhang
Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li
Towards Certified Unlearning for Deep Neural Networks
ICML 2024
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
In the field of machine unlearning, certified unlearning has been extensively studied in convex machine learning models due to its high efficiency and strong theoretical guarantees. However, its application to deep neural networks (DNNs), known for their highly nonconvex nature, still poses challenges. To bridge the gap between certified unlearning and DNNs, we propose several simple techniques to extend certified unlearning methods to nonconvex objectives. To reduce the time complexity, we develop an efficient computation method by inverse Hessian approximation without compromising certification guarantees. In addition, we extend our discussion of certification to nonconvergence training and sequential unlearning, considering that real-world users can send unlearning requests at different time points. Extensive experiments on three real-world datasets demonstrate the efficacy of our method and the advantages of certified unlearning in DNNs.
[ { "created": "Thu, 1 Aug 2024 21:22:10 GMT", "version": "v1" } ]
2024-08-05
[ [ "Zhang", "Binchi", "" ], [ "Dong", "Yushun", "" ], [ "Wang", "Tianhao", "" ], [ "Li", "Jundong", "" ] ]
In the field of machine unlearning, certified unlearning has been extensively studied in convex machine learning models due to its high efficiency and strong theoretical guarantees. However, its application to deep neural networks (DNNs), known for their highly nonconvex nature, still poses challenges. To bridge the gap between certified unlearning and DNNs, we propose several simple techniques to extend certified unlearning methods to nonconvex objectives. To reduce the time complexity, we develop an efficient computation method by inverse Hessian approximation without compromising certification guarantees. In addition, we extend our discussion of certification to nonconvergence training and sequential unlearning, considering that real-world users can send unlearning requests at different time points. Extensive experiments on three real-world datasets demonstrate the efficacy of our method and the advantages of certified unlearning in DNNs.
2202.12245
Marcos Faundez-Zanuy
Laurence Likforman-Sulem, Anna Esposito, Marcos Faundez-Zanuy, Stephan Clemen\c{c}on, Gennaro Cordasco
EMOTHAW: A novel database for emotional state recognition from handwriting
31 pages
IEEE Transactions on Human-Machine Systems, vol. 47, no. 2, pp. 273-284, April 2017
10.1109/THMS.2016.2635441
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The detection of negative emotions through daily activities such as handwriting is useful for promoting well-being. The spread of human-machine interfaces such as tablets makes the collection of handwriting samples easier. In this context, we present the first publicly available handwriting database relating emotional states to handwriting, which we call EMOTHAW. This database includes samples of 129 participants whose emotional states, namely anxiety, depression and stress, are assessed by the Depression Anxiety Stress Scales (DASS) questionnaire. Seven tasks are recorded through a digitizing tablet: pentagons and house drawing, words copied in handprint, circles and clock drawing, and one sentence copied in cursive writing. Records consist of pen positions, on-paper and in-air, time stamp, pressure, pen azimuth and altitude. We report our analysis on this database. From collected data, we first compute measurements related to timing and ductus. We compute separate measurements according to the position of the writing device: on paper or in-air. We analyse and classify this set of measurements (referred to as features) using a random forest approach. The latter is a machine learning method [2], based on an ensemble of decision trees, which includes a feature ranking process. We use this ranking process to identify the features which best reveal a targeted emotional state. We then build random forest classifiers associated with each emotional state. Our results, obtained from cross-validation experiments, show that the targeted emotional states can be identified with accuracies ranging from 60% to 71%.
[ { "created": "Wed, 23 Feb 2022 15:15:44 GMT", "version": "v1" } ]
2022-02-25
[ [ "Likforman-Sulem", "Laurence", "" ], [ "Esposito", "Anna", "" ], [ "Faundez-Zanuy", "Marcos", "" ], [ "Clemençon", "Stephan", "" ], [ "Cordasco", "Gennaro", "" ] ]
The detection of negative emotions through daily activities such as handwriting is useful for promoting well-being. The spread of human-machine interfaces such as tablets makes the collection of handwriting samples easier. In this context, we present the first publicly available handwriting database relating emotional states to handwriting, which we call EMOTHAW. This database includes samples of 129 participants whose emotional states, namely anxiety, depression and stress, are assessed by the Depression Anxiety Stress Scales (DASS) questionnaire. Seven tasks are recorded through a digitizing tablet: pentagons and house drawing, words copied in handprint, circles and clock drawing, and one sentence copied in cursive writing. Records consist of pen positions, on-paper and in-air, time stamp, pressure, pen azimuth and altitude. We report our analysis on this database. From the collected data, we first compute measurements related to timing and ductus. We compute separate measurements according to the position of the writing device: on paper or in-air. We analyse and classify this set of measurements (referred to as features) using a random forest approach. The latter is a machine learning method [2], based on an ensemble of decision trees, which includes a feature ranking process. We use this ranking process to identify the features which best reveal a targeted emotional state. We then build random forest classifiers associated with each emotional state. Our results, obtained from cross-validation experiments, show that the targeted emotional states can be identified with accuracies ranging from 60% to 71%.
2211.15272
Nico Stucki
Nico Stucki, Johannes C. Paetzold, Suprosanna Shit, Bjoern Menze, Ulrich Bauer
Topologically faithful image segmentation via induced matching of persistence barcodes
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction. In this work, we propose the first topologically and feature-wise accurate metric and loss function for supervised image segmentation, which we term Betti matching. We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute the Betti matching of images. We show that the Betti matching error is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error. Moreover, the differentiability of the Betti matching loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance. Our code is available at https://github.com/nstucki/Betti-matching.
[ { "created": "Mon, 28 Nov 2022 12:57:57 GMT", "version": "v1" } ]
2022-11-29
[ [ "Stucki", "Nico", "" ], [ "Paetzold", "Johannes C.", "" ], [ "Shit", "Suprosanna", "" ], [ "Menze", "Bjoern", "" ], [ "Bauer", "Ulrich", "" ] ]
Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction. In this work, we propose the first topologically and feature-wise accurate metric and loss function for supervised image segmentation, which we term Betti matching. We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute the Betti matching of images. We show that the Betti matching error is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error. Moreover, the differentiability of the Betti matching loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance. Our code is available at https://github.com/nstucki/Betti-matching.
2203.05140
Masashi Takeshita
Masashi Takeshita and Rafal Rzepka and Kenji Araki
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
This paper is an accepted manuscript for publication in Information Processing & Management
null
10.1016/j.ipm.2022.103050
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Various existing studies have analyzed what social biases are inherited by NLP models. These biases may directly or indirectly harm people, therefore previous studies have focused only on human attributes. However, until recently no research on social biases in NLP regarding nonhumans existed. In this paper, we analyze biases to nonhuman animals, i.e. speciesist bias, inherent in English Masked Language Models such as BERT. We analyzed speciesist bias against 46 animal names using template-based and corpus-extracted sentences containing speciesist (or non-speciesist) language. We found that pre-trained masked language models tend to associate harmful words with nonhuman animals and have a bias toward using speciesist language for some nonhuman animal names. Our code for reproducing the experiments will be made available on GitHub.
[ { "created": "Thu, 10 Mar 2022 03:32:29 GMT", "version": "v1" }, { "created": "Tue, 15 Mar 2022 07:52:18 GMT", "version": "v2" }, { "created": "Fri, 12 Aug 2022 07:05:28 GMT", "version": "v3" } ]
2022-08-15
[ [ "Takeshita", "Masashi", "" ], [ "Rzepka", "Rafal", "" ], [ "Araki", "Kenji", "" ] ]
Various existing studies have analyzed what social biases are inherited by NLP models. These biases may directly or indirectly harm people, therefore previous studies have focused only on human attributes. However, until recently no research on social biases in NLP regarding nonhumans existed. In this paper, we analyze biases to nonhuman animals, i.e. speciesist bias, inherent in English Masked Language Models such as BERT. We analyzed speciesist bias against 46 animal names using template-based and corpus-extracted sentences containing speciesist (or non-speciesist) language. We found that pre-trained masked language models tend to associate harmful words with nonhuman animals and have a bias toward using speciesist language for some nonhuman animal names. Our code for reproducing the experiments will be made available on GitHub.
2308.05725
Tu Anh Nguyen
Tu Anh Nguyen, Wei-Ning Hsu, Antony D'Avirro, Bowen Shi, Itai Gat, Maryam Fazel-Zarani, Tal Remez, Jade Copet, Gabriel Synnaeve, Michael Hassid, Felix Kreuk, Yossi Adi, Emmanuel Dupoux
EXPRESSO: A Benchmark and Analysis of Discrete Expressive Speech Resynthesis
null
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent work has shown that it is possible to resynthesize high-quality speech based, not on text, but on low bitrate discrete units that have been learned in a self-supervised fashion and can therefore capture expressive aspects of speech that are hard to transcribe (prosody, voice styles, non-verbal vocalization). The adoption of these methods is still limited by the fact that most speech synthesis datasets are read, severely limiting spontaneity and expressivity. Here, we introduce Expresso, a high-quality expressive speech dataset for textless speech synthesis that includes both read speech and improvised dialogues rendered in 26 spontaneous expressive styles. We illustrate the challenges and potentials of this dataset with an expressive resynthesis benchmark where the task is to encode the input in low-bitrate units and resynthesize it in a target voice while preserving content and style. We evaluate resynthesis quality with automatic metrics for different self-supervised discrete encoders, and explore tradeoffs between quality, bitrate and invariance to speaker and style. The dataset, evaluation metrics and baseline models are all open source.
[ { "created": "Thu, 10 Aug 2023 17:41:19 GMT", "version": "v1" } ]
2023-08-11
[ [ "Nguyen", "Tu Anh", "" ], [ "Hsu", "Wei-Ning", "" ], [ "D'Avirro", "Antony", "" ], [ "Shi", "Bowen", "" ], [ "Gat", "Itai", "" ], [ "Fazel-Zarani", "Maryam", "" ], [ "Remez", "Tal", "" ], [ "Copet", "Jade", "" ], [ "Synnaeve", "Gabriel", "" ], [ "Hassid", "Michael", "" ], [ "Kreuk", "Felix", "" ], [ "Adi", "Yossi", "" ], [ "Dupoux", "Emmanuel", "" ] ]
Recent work has shown that it is possible to resynthesize high-quality speech based, not on text, but on low bitrate discrete units that have been learned in a self-supervised fashion and can therefore capture expressive aspects of speech that are hard to transcribe (prosody, voice styles, non-verbal vocalization). The adoption of these methods is still limited by the fact that most speech synthesis datasets are read, severely limiting spontaneity and expressivity. Here, we introduce Expresso, a high-quality expressive speech dataset for textless speech synthesis that includes both read speech and improvised dialogues rendered in 26 spontaneous expressive styles. We illustrate the challenges and potentials of this dataset with an expressive resynthesis benchmark where the task is to encode the input in low-bitrate units and resynthesize it in a target voice while preserving content and style. We evaluate resynthesis quality with automatic metrics for different self-supervised discrete encoders, and explore tradeoffs between quality, bitrate and invariance to speaker and style. The dataset, evaluation metrics and baseline models are all open source.
1608.06347
Shikai Jin
Shikai Jin, Yuxuan Cui, Chunli Yu
A New Parallelization Method for K-means
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
K-means is a popular clustering method used in the data mining area. To work with large datasets, researchers proposed PKMeans, a parallel k-means on MapReduce. However, the existing k-means parallelization methods, including PKMeans, have many limitations. PKMeans cannot finish all its iterations in one MapReduce job, so it has to repeat cascading MapReduce jobs in a loop until convergence. On the most popular MapReduce platform, Hadoop, every MapReduce job introduces significant I/O overheads and extra execution time at the job start-up and shuffling stages. Even worse, it has been proved that in the worst case, k-means needs $2^{{\Omega}(n)}$ MapReduce jobs to converge, where n is the number of data instances, which means huge overheads for large datasets. Additionally, in PKMeans, at most one reducer can be assigned to and update each centroid, so PKMeans can only make use of a limited number of parallel reducers. In this paper, we propose an improved parallel method for k-means, IPKMeans, which has a parallel preprocessing stage using a k-d tree, can finish k-means in one single MapReduce job with many more reducers working in parallel and lower I/O overheads than PKMeans, and has a fast post-processing stage generating the final result. In our method, both the k-d tree and the new improved parallel k-means are implemented using MapReduce and tested on Hadoop. Our experiments show that with the same dataset and initial centroids, our method has up to 2/3 lower I/O overheads and consumes less time than PKMeans to get a very close clustering result.
[ { "created": "Tue, 23 Aug 2016 00:35:10 GMT", "version": "v1" }, { "created": "Fri, 26 Aug 2016 21:11:34 GMT", "version": "v2" } ]
2016-08-30
[ [ "Jin", "Shikai", "" ], [ "Cui", "Yuxuan", "" ], [ "Yu", "Chunli", "" ] ]
K-means is a popular clustering method used in the data mining area. To work with large datasets, researchers proposed PKMeans, a parallel k-means on MapReduce. However, the existing k-means parallelization methods, including PKMeans, have many limitations. PKMeans cannot finish all its iterations in one MapReduce job, so it has to repeat cascading MapReduce jobs in a loop until convergence. On the most popular MapReduce platform, Hadoop, every MapReduce job introduces significant I/O overheads and extra execution time at the job start-up and shuffling stages. Even worse, it has been proved that in the worst case, k-means needs $2^{{\Omega}(n)}$ MapReduce jobs to converge, where n is the number of data instances, which means huge overheads for large datasets. Additionally, in PKMeans, at most one reducer can be assigned to and update each centroid, so PKMeans can only make use of a limited number of parallel reducers. In this paper, we propose an improved parallel method for k-means, IPKMeans, which has a parallel preprocessing stage using a k-d tree, can finish k-means in one single MapReduce job with many more reducers working in parallel and lower I/O overheads than PKMeans, and has a fast post-processing stage generating the final result. In our method, both the k-d tree and the new improved parallel k-means are implemented using MapReduce and tested on Hadoop. Our experiments show that with the same dataset and initial centroids, our method has up to 2/3 lower I/O overheads and consumes less time than PKMeans to get a very close clustering result.
2401.06821
Audrey Galametz
M\'elanie Ducoffe, Guillaume Pov\'eda, Audrey Galametz, Ryma Boumazouza, Marion-C\'ecile Martin, Julien Baris, Derk Daverschot and Eugene O'Higgins
Surrogate Neural Networks Local Stability for Aircraft Predictive Maintenance
Peer-reviewed and accepted at the 29th International Conference on Formal Methods for Industrial Critical Systems (FMICS 2024) - 15 pages
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Surrogate Neural Networks are nowadays routinely used in industry as substitutes for computationally demanding engineering simulations (e.g., in structural analysis). They allow faster predictions, and thus faster analyses, in industrial applications, e.g., during product design, testing or monitoring phases. Due to their performance and time-efficiency, these surrogate models are now being developed for use in safety-critical applications. Neural network verification, and in particular the assessment of their robustness (e.g., to perturbations), is the next critical step to allow their inclusion in real-life applications and certification. We assess the applicability and scalability of empirical and formal methods in the context of aircraft predictive maintenance for surrogate neural networks designed to predict the stress sustained by an aircraft part from external loads. The case study covers a high-dimensional input and output space, and the verification process thus accommodates multi-objective constraints. We explore the complementarity of verification methods in assessing the local stability property of such surrogate models to input noise. We showcase the effectiveness of sequentially combining methods in one verification 'pipeline' and demonstrate the subsequent gain in runtime required to assess the targeted property.
[ { "created": "Thu, 11 Jan 2024 21:04:28 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 08:46:22 GMT", "version": "v2" }, { "created": "Mon, 22 Jul 2024 13:19:50 GMT", "version": "v3" }, { "created": "Wed, 24 Jul 2024 08:12:11 GMT", "version": "v4" } ]
2024-07-25
[ [ "Ducoffe", "Mélanie", "" ], [ "Povéda", "Guillaume", "" ], [ "Galametz", "Audrey", "" ], [ "Boumazouza", "Ryma", "" ], [ "Martin", "Marion-Cécile", "" ], [ "Baris", "Julien", "" ], [ "Daverschot", "Derk", "" ], [ "O'Higgins", "Eugene", "" ] ]
Surrogate Neural Networks are nowadays routinely used in industry as substitutes for computationally demanding engineering simulations (e.g., in structural analysis). They allow faster predictions, and thus faster analyses, in industrial applications, e.g., during product design, testing or monitoring phases. Due to their performance and time-efficiency, these surrogate models are now being developed for use in safety-critical applications. Neural network verification, and in particular the assessment of their robustness (e.g., to perturbations), is the next critical step to allow their inclusion in real-life applications and certification. We assess the applicability and scalability of empirical and formal methods in the context of aircraft predictive maintenance for surrogate neural networks designed to predict the stress sustained by an aircraft part from external loads. The case study covers a high-dimensional input and output space, and the verification process thus accommodates multi-objective constraints. We explore the complementarity of verification methods in assessing the local stability property of such surrogate models to input noise. We showcase the effectiveness of sequentially combining methods in one verification 'pipeline' and demonstrate the subsequent gain in runtime required to assess the targeted property.
2008.09743
Guanghao Yin
Guanghao Yin, Shouqian Sun, Dian Yu, Dejian Li and Kejun Zhang
An Efficient Multimodal Framework for Large Scale Emotion Recognition by Fusing Music and Electrodermal Activity Signals
ACM Transactions on Multimedia Computing, Communications, and Applications (Acceptance 07-Oct-2021)
null
10.1145/3490686
null
cs.SD cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Considerable attention has been paid to physiological signal-based emotion recognition in the field of affective computing. Owing to its reliability and user-friendly acquisition, Electrodermal Activity (EDA) has great advantages in practical applications. However, EDA-based emotion recognition with hundreds of subjects still lacks an effective solution. In this paper, our work attempts to fuse the subject-individual EDA features and the external evoked music features. We propose an end-to-end multimodal framework, the 1-dimensional residual temporal and channel attention network (RTCAN-1D). For EDA features, the novel convex optimization-based EDA (CvxEDA) method is applied to decompose EDA signals into phasic and tonic signals for mining the dynamic and steady features. The channel-temporal attention mechanism for EDA-based emotion recognition is introduced for the first time to improve the temporal- and channel-wise representation. For music features, we process the music signal with the open source toolkit openSMILE to obtain external feature vectors. The individual emotion features from EDA signals and external emotion benchmarks from music are fused in the classifying layers. We have conducted systematic comparisons on three multimodal datasets (PMEmo, DEAP, AMIGOS) for 2-class valence/arousal emotion recognition. Our proposed RTCAN-1D outperforms the existing state-of-the-art models, which also validates that our work provides a reliable and efficient solution for large-scale emotion recognition. Our code has been released at https://github.com/guanghaoyin/RTCAN-1D.
[ { "created": "Sat, 22 Aug 2020 03:13:20 GMT", "version": "v1" }, { "created": "Thu, 2 Dec 2021 03:04:51 GMT", "version": "v2" } ]
2022-05-16
[ [ "Yin", "Guanghao", "" ], [ "Sun", "Shouqian", "" ], [ "Yu", "Dian", "" ], [ "Li", "Dejian", "" ], [ "Zhang", "Kejun", "" ] ]
Considerable attention has been paid to physiological signal-based emotion recognition in the field of affective computing. Owing to its reliability and user-friendly acquisition, Electrodermal Activity (EDA) has great advantages in practical applications. However, EDA-based emotion recognition with hundreds of subjects still lacks an effective solution. In this paper, our work attempts to fuse the subject-individual EDA features and the external evoked music features. We propose an end-to-end multimodal framework, the 1-dimensional residual temporal and channel attention network (RTCAN-1D). For EDA features, the novel convex optimization-based EDA (CvxEDA) method is applied to decompose EDA signals into phasic and tonic signals for mining the dynamic and steady features. The channel-temporal attention mechanism for EDA-based emotion recognition is introduced for the first time to improve the temporal- and channel-wise representation. For music features, we process the music signal with the open source toolkit openSMILE to obtain external feature vectors. The individual emotion features from EDA signals and external emotion benchmarks from music are fused in the classifying layers. We have conducted systematic comparisons on three multimodal datasets (PMEmo, DEAP, AMIGOS) for 2-class valence/arousal emotion recognition. Our proposed RTCAN-1D outperforms the existing state-of-the-art models, which also validates that our work provides a reliable and efficient solution for large-scale emotion recognition. Our code has been released at https://github.com/guanghaoyin/RTCAN-1D.
1303.5903
Kaushik Sarkar
Kaushik Sarkar and Hari Sundaram
How Do We Find Early Adopters Who Will Guide a Resource Constrained Network Towards a Desired Distribution of Behaviors?
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify influential early adopters that achieve a target behavior distribution for a resource constrained social network with multiple costly behaviors. This problem is important for applications ranging from collective behavior change to corporate viral marketing campaigns. In this paper, we propose a model of diffusion of multiple behaviors when individual participants have resource constraints. Individuals adopt the set of behaviors that maximize their utility subject to available resources. We show that the problem of influence maximization for multiple behaviors is NP-complete. Thus we propose heuristics, which are based on node degree and expected immediate adoption, to select early adopters. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. We also propose heuristics to distribute the behaviors amongst the early adopters to achieve a target distribution in the population. We test our approach on synthetic and real-world topologies with excellent results. Our heuristics produce 15-51\% increase in resource utilization over the na\"ive approach.
[ { "created": "Sun, 24 Mar 2013 01:43:57 GMT", "version": "v1" } ]
2013-03-26
[ [ "Sarkar", "Kaushik", "" ], [ "Sundaram", "Hari", "" ] ]
We identify influential early adopters that achieve a target behavior distribution for a resource constrained social network with multiple costly behaviors. This problem is important for applications ranging from collective behavior change to corporate viral marketing campaigns. In this paper, we propose a model of diffusion of multiple behaviors when individual participants have resource constraints. Individuals adopt the set of behaviors that maximize their utility subject to available resources. We show that the problem of influence maximization for multiple behaviors is NP-complete. Thus we propose heuristics, which are based on node degree and expected immediate adoption, to select early adopters. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. We also propose heuristics to distribute the behaviors amongst the early adopters to achieve a target distribution in the population. We test our approach on synthetic and real-world topologies with excellent results. Our heuristics produce 15-51\% increase in resource utilization over the na\"ive approach.
2311.00152
Jacob Yim
Jordan Schwartz, Madison Bohannan, Jacob Yim, Yuerou Tang, Dana Benedicto, Charisse Liu, Armando Fox, Lisa Yan, Narges Norouzi
Developing a Tool to Automate Extensions to Support a Flexible Extension Policy
2 pages
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we present the development of an automated extension tool to assist educators and increase the success and well-being of students by implementing flexible extension policies. Flexible extension policies materialize in many ways, yet there are similarities in students' interactions with them; students tend to request multi-day long extensions repeatedly. In courses with hundreds or potentially thousands of students, providing a system to support this extension request demand is not possible given most currently available resources and limited staff. As such, a tool is necessary to help automate flexible extension processes. The development of this tool should reduce staff load while increasing individualized student support, which can be used in varying ways for different extension policies. Our research questions are: RQ1: Does the extension tool reduce barriers and stigma around asking for assistance? RQ2: Does the tool lessen the wait time between requesting and receiving an extension, and how does the tool improve students' learning experience in the course? These questions will help inform us about how an automated tool for flexible extensions helps support growing course sizes and students who may not otherwise receive the support they need for their success and well-being in the course.
[ { "created": "Tue, 31 Oct 2023 20:55:08 GMT", "version": "v1" } ]
2023-11-02
[ [ "Schwartz", "Jordan", "" ], [ "Bohannan", "Madison", "" ], [ "Yim", "Jacob", "" ], [ "Tang", "Yuerou", "" ], [ "Benedicto", "Dana", "" ], [ "Liu", "Charisse", "" ], [ "Fox", "Armando", "" ], [ "Yan", "Lisa", "" ], [ "Norouzi", "Narges", "" ] ]
In this work, we present the development of an automated extension tool to assist educators and increase the success and well-being of students by implementing flexible extension policies. Flexible extension policies materialize in many ways, yet there are similarities in students' interactions with them; students tend to request multi-day long extensions repeatedly. In courses with hundreds or potentially thousands of students, providing a system to support this extension request demand is not possible given most currently available resources and limited staff. As such, a tool is necessary to help automate flexible extension processes. The development of this tool should reduce staff load while increasing individualized student support, which can be used in varying ways for different extension policies. Our research questions are: RQ1: Does the extension tool reduce barriers and stigma around asking for assistance? RQ2: Does the tool lessen the wait time between requesting and receiving an extension, and how does the tool improve students' learning experience in the course? These questions will help inform us about how an automated tool for flexible extensions helps support growing course sizes and students who may not otherwise receive the support they need for their success and well-being in the course.
1303.2104
Xiao-Lei Zhang
Xiao-Lei Zhang, Ji Wu
Transfer Learning for Voice Activity Detection: A Denoising Deep Neural Network Perspective
This paper has been submitted to the conference "INTERSPEECH2013" in March 4, 2013 for review
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mismatch problem between the source and target noisy corpora severely hinders the practical use of machine-learning-based voice activity detection (VAD). In this paper, we try to address this problem from the transfer learning perspective. Transfer learning tries to find a common learning machine or a common feature subspace that is shared by both the source corpus and the target corpus. The denoising deep neural network is used as the learning machine. Three transfer techniques, which aim to learn common feature representations, are used for analysis. Experimental results demonstrate the effectiveness of the transfer learning schemes on the mismatch problem.
[ { "created": "Fri, 8 Mar 2013 20:46:27 GMT", "version": "v1" } ]
2013-03-11
[ [ "Zhang", "Xiao-Lei", "" ], [ "Wu", "Ji", "" ] ]
The mismatch problem between the source and target noisy corpora severely hinders the practical use of machine-learning-based voice activity detection (VAD). In this paper, we try to address this problem from the transfer learning perspective. Transfer learning tries to find a common learning machine or a common feature subspace that is shared by both the source corpus and the target corpus. The denoising deep neural network is used as the learning machine. Three transfer techniques, which aim to learn common feature representations, are used for analysis. Experimental results demonstrate the effectiveness of the transfer learning schemes on the mismatch problem.
2404.11869
Xiaorui Qi
Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan
Node-like as a Whole: Structure-aware Searching and Coarsening for Graph Classification
null
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Graph Transformers (GTs) have made remarkable achievements in graph-level tasks. However, most existing works regard graph structures as a form of guidance or bias for enhancing node representations, which focuses on node-central perspectives and lacks explicit representations of edges and structures. One natural question is, can we treat graph structures node-like as a whole to learn high-level features? Through experimental analysis, we explore the feasibility of this assumption. Based on our findings, we propose a novel multi-view graph representation learning model via structure-aware searching and coarsening (GRLsc) on the GT architecture for graph classification. Specifically, we build three unique views, original, coarsening, and conversion, to learn a thorough structural representation. We compress loops and cliques via hierarchical heuristic graph coarsening and restrict them with well-designed constraints, which builds the coarsening view to learn high-level interactions between structures. We also introduce line graphs for edge embeddings and switch to the edge-central perspective to construct the conversion view. Experiments on eight real-world datasets demonstrate the improvements of GRLsc over 28 baselines from various architectures.
[ { "created": "Thu, 18 Apr 2024 03:03:37 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2024 08:45:52 GMT", "version": "v2" }, { "created": "Thu, 25 Jul 2024 07:29:02 GMT", "version": "v3" } ]
2024-07-26
[ [ "Qi", "Xiaorui", "" ], [ "Bai", "Qijie", "" ], [ "Wen", "Yanlong", "" ], [ "Zhang", "Haiwei", "" ], [ "Yuan", "Xiaojie", "" ] ]
Graph Transformers (GTs) have made remarkable achievements in graph-level tasks. However, most existing works regard graph structures as a form of guidance or bias for enhancing node representations, which focuses on node-central perspectives and lacks explicit representations of edges and structures. One natural question is, can we treat graph structures node-like as a whole to learn high-level features? Through experimental analysis, we explore the feasibility of this assumption. Based on our findings, we propose a novel multi-view graph representation learning model via structure-aware searching and coarsening (GRLsc) on the GT architecture for graph classification. Specifically, we build three unique views, original, coarsening, and conversion, to learn a thorough structural representation. We compress loops and cliques via hierarchical heuristic graph coarsening and restrict them with well-designed constraints, which builds the coarsening view to learn high-level interactions between structures. We also introduce line graphs for edge embeddings and switch to the edge-central perspective to construct the conversion view. Experiments on eight real-world datasets demonstrate the improvements of GRLsc over 28 baselines from various architectures.
1603.07421
Ye Yuan
Ye Yuan, Mu Li, Jun Liu, Claire J. Tomlin
On the Powerball Method for Optimization
null
null
10.1109/LCSYS.2019.2913770
null
cs.SY cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new method to accelerate the convergence of optimization algorithms. This method simply adds a power coefficient $\gamma\in[0,1)$ to the gradient during optimization. We call this the Powerball method and analyze its convergence rate for strongly convex functions. While the Powerball method is theoretically guaranteed to have a linear convergence rate of the same order as the gradient method, we show that empirically it significantly outperforms gradient descent and Newton's method, especially during the initial iterations. We demonstrate that the Powerball method provides a $10$-fold speedup of the convergence of both gradient descent and L-BFGS on multiple real datasets.
[ { "created": "Thu, 24 Mar 2016 03:13:40 GMT", "version": "v1" }, { "created": "Mon, 6 Jun 2016 16:15:35 GMT", "version": "v2" }, { "created": "Tue, 18 Oct 2016 12:57:04 GMT", "version": "v3" }, { "created": "Fri, 1 Sep 2017 08:23:07 GMT", "version": "v4" } ]
2019-09-24
[ [ "Yuan", "Ye", "" ], [ "Li", "Mu", "" ], [ "Liu", "Jun", "" ], [ "Tomlin", "Claire J.", "" ] ]
We propose a new method to accelerate the convergence of optimization algorithms. This method simply adds a power coefficient $\gamma\in[0,1)$ to the gradient during optimization. We call this the Powerball method and analyze its convergence rate for strongly convex functions. While the Powerball method is theoretically guaranteed to have a linear convergence rate of the same order as the gradient method, we show that empirically it significantly outperforms gradient descent and Newton's method, especially during the initial iterations. We demonstrate that the Powerball method provides a $10$-fold speedup of the convergence of both gradient descent and L-BFGS on multiple real datasets.
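The abstract's core idea is concrete enough to sketch: apply a power coefficient to each gradient component before the update. Below is a minimal illustration assuming the transform sign(g)·|g|^γ and a simple quadratic test objective; the learning rate, step count, and test function are illustrative choices, not values from the paper.

```python
import numpy as np

def powerball_gd(grad_fn, x0, lr=0.1, gamma=0.5, steps=200):
    """Gradient descent with a Powerball-style transform: each gradient
    component g is replaced by sign(g) * |g|**gamma before the update.
    gamma = 1 recovers plain gradient descent; gamma in [0, 1)
    boosts small gradient components, which is where the empirical
    speedup in the early iterations comes from."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x - lr * np.sign(g) * np.abs(g) ** gamma
    return x

# Minimize the strongly convex quadratic f(x) = 0.5 * ||x||^2 (gradient = x).
x_star = powerball_gd(lambda x: x, x0=[4.0, -2.0], lr=0.1, gamma=0.5)
```

Note that near the optimum the transformed step no longer shrinks proportionally to the gradient, so in practice a decaying step size (or switching back to γ = 1) is needed for high-accuracy convergence.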
2306.15111
Chuanyang Jin
Chuanyang Jin
Self-Supervised Image Captioning with CLIP
NeurIPS 2023 Self-Supervised Learning Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Image captioning, a fundamental task in vision-language understanding, seeks to generate accurate natural language descriptions for provided images. Current image captioning approaches heavily rely on high-quality image-caption pairs, which can be hard to obtain for many domains. To address this, we introduce a self-supervised image captioning method. After learning an initial signal from a small labeled dataset, our method transitions to self-supervised learning on unlabeled data, leveraging the auxiliary task of enhancing the CLIP relevance between images and generated captions. Remarkably, despite utilizing less than 2% of the labeled COCO dataset, our method delivers a performance comparable to state-of-the-art models trained on the complete dataset. Human evaluations further reveal that our method produces captions with greater distinctiveness and informativeness, two attributes inherently challenging to achieve through supervised learning.
[ { "created": "Mon, 26 Jun 2023 23:29:16 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2023 17:57:54 GMT", "version": "v2" } ]
2023-11-03
[ [ "Jin", "Chuanyang", "" ] ]
Image captioning, a fundamental task in vision-language understanding, seeks to generate accurate natural language descriptions for provided images. Current image captioning approaches heavily rely on high-quality image-caption pairs, which can be hard to obtain for many domains. To address this, we introduce a self-supervised image captioning method. After learning an initial signal from a small labeled dataset, our method transitions to self-supervised learning on unlabeled data, leveraging the auxiliary task of enhancing the CLIP relevance between images and generated captions. Remarkably, despite utilizing less than 2% of the labeled COCO dataset, our method delivers a performance comparable to state-of-the-art models trained on the complete dataset. Human evaluations further reveal that our method produces captions with greater distinctiveness and informativeness, two attributes inherently challenging to achieve through supervised learning.
2207.11926
Wanming Hao
Wencai Yan, Wanming Hao, Chongwen Huang, Gangcan Sun, Osamu Muta, Haris Gacanin, Chau Yuen
Beamforming Analysis and Design for Wideband THz Reconfigurable Intelligent Surface Communications
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Reconfigurable intelligent surface (RIS)-aided terahertz (THz) communications have been regarded as a promising candidate for future 6G networks because of their ultra-wide bandwidth and ultra-low power consumption. However, there exists a beam split problem, especially when the base station (BS) or RIS employs large-scale antennas, which may lead to serious array gain loss. Therefore, in this paper, we investigate the beam split and beamforming design problems in THz RIS communications. Specifically, we first analyze the beam split effect caused by different RIS sizes, shapes and deployments. On this basis, we apply the fully connected time-delayer phase-shifter hybrid beamforming architecture at the BS and deploy distributed RISs to cooperatively mitigate the beam split effect. We aim to maximize the achievable sum rate by jointly optimizing the hybrid analog/digital beamforming and time delays at the BS and the reflection coefficients at the RISs. To solve the formulated problem, we first design the analog beamforming and time delays based on the physical directions of the different RISs, and then transform the original problem into a joint optimization of the digital beamforming and reflection coefficients. Next, we propose an alternating iterative optimization algorithm to solve it. Specifically, for given reflection coefficients, we propose an iterative algorithm based on the minimum mean square error technique to obtain the digital beamforming. Afterwards, we apply the LDR and MCQT methods to transform the original problem into a QCQP, which can be solved by the ADMM technique to obtain the reflection coefficients. Finally, the digital beamforming and reflection coefficients are obtained by repeating the above processes until convergence. Simulation results verify that the proposed scheme can effectively alleviate the beam split effect and improve the system capacity.
[ { "created": "Mon, 25 Jul 2022 06:41:05 GMT", "version": "v1" }, { "created": "Sat, 30 Jul 2022 08:04:13 GMT", "version": "v2" }, { "created": "Fri, 23 Jun 2023 12:02:30 GMT", "version": "v3" } ]
2023-06-26
[ [ "Yan", "Wencai", "" ], [ "Hao", "Wanming", "" ], [ "Huang", "Chongwen", "" ], [ "Sun", "Gangcan", "" ], [ "Muta", "Osamu", "" ], [ "Gacanin", "Haris", "" ], [ "Yuen", "Chau", "" ] ]
Reconfigurable intelligent surface (RIS)-aided terahertz (THz) communications have been regarded as a promising candidate for future 6G networks because of their ultra-wide bandwidth and ultra-low power consumption. However, there exists a beam split problem, especially when the base station (BS) or RIS employs large-scale antennas, which may lead to serious array gain loss. Therefore, in this paper, we investigate the beam split and beamforming design problems in THz RIS communications. Specifically, we first analyze the beam split effect caused by different RIS sizes, shapes and deployments. On this basis, we apply the fully connected time-delayer phase-shifter hybrid beamforming architecture at the BS and deploy distributed RISs to cooperatively mitigate the beam split effect. We aim to maximize the achievable sum rate by jointly optimizing the hybrid analog/digital beamforming and time delays at the BS and the reflection coefficients at the RISs. To solve the formulated problem, we first design the analog beamforming and time delays based on the physical directions of the different RISs, and then transform the original problem into a joint optimization of the digital beamforming and reflection coefficients. Next, we propose an alternating iterative optimization algorithm to solve it. Specifically, for given reflection coefficients, we propose an iterative algorithm based on the minimum mean square error technique to obtain the digital beamforming. Afterwards, we apply the LDR and MCQT methods to transform the original problem into a QCQP, which can be solved by the ADMM technique to obtain the reflection coefficients. Finally, the digital beamforming and reflection coefficients are obtained by repeating the above processes until convergence. Simulation results verify that the proposed scheme can effectively alleviate the beam split effect and improve the system capacity.
2009.01659
Dethie Dione
Bakary Kone, Salimata Gueye Diagne, Dethie Dione, Coumba Diallo
Analysis of an M/G/1 system for the optimization of the RTG performances in the delivery of containers in Abidjan Terminal
15 pages, 3 figures
IJAAMM (2016)
null
17BK
cs.PF math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facing the major challenge of increasing its productivity while satisfying its customers, it is important for Abidjan Terminal to establish in advance the operational performance of its RTGs. In this article, using an M/G/1 retrial queue system, we obtain the average number of parked delivery trucks as well as their waiting time. Finally, we use Matlab to represent these quantities graphically and then analyze the RTG performance as a function of the traffic rate.
[ { "created": "Thu, 27 Aug 2020 22:09:18 GMT", "version": "v1" } ]
2020-09-04
[ [ "Kone", "Bakary", "" ], [ "Diagne", "Salimata Gueye", "" ], [ "Dione", "Dethie", "" ], [ "Diallo", "Coumba", "" ] ]
Facing the major challenge of increasing its productivity while satisfying its customers, it is important for Abidjan Terminal to establish in advance the operational performance of its RTGs. In this article, using an M/G/1 retrial queue system, we obtain the average number of parked delivery trucks as well as their waiting time. Finally, we use Matlab to represent these quantities graphically and then analyze the RTG performance as a function of the traffic rate.
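The mean queue length and waiting time the abstract mentions have closed forms in the classical (non-retrial) M/G/1 queue via the Pollaczek–Khinchine formula. The paper analyzes an M/G/1 *retrial* variant, whose formulas differ; the sketch below is the standard non-retrial baseline for orientation only.

```python
def mg1_metrics(lam, mean_s, var_s):
    """Mean-value metrics for a classical M/G/1 queue via the
    Pollaczek-Khinchine formula. NOTE: the paper studies an M/G/1
    *retrial* queue; this is the standard non-retrial baseline.
    lam: Poisson arrival rate; mean_s, var_s: service-time mean/variance."""
    rho = lam * mean_s                       # traffic intensity, must be < 1
    assert rho < 1, "queue is unstable"
    cs2 = var_s / mean_s ** 2                # squared coefficient of variation
    lq = rho ** 2 * (1 + cs2) / (2 * (1 - rho))  # mean number waiting
    wq = lq / lam                            # mean waiting time (Little's law)
    return {"rho": rho, "Lq": lq, "Wq": wq,
            "L": lq + rho, "W": wq + mean_s}

# Sanity check with exponential service (var = mean^2, i.e. M/M/1):
m = mg1_metrics(lam=0.5, mean_s=1.0, var_s=1.0)
```

For the M/M/1 special case above, the formulas reduce to L = ρ/(1−ρ) = 1 and W = 1/(μ−λ) = 2, which the dictionary returned by the sketch reproduces.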
2006.08159
Yang Wang
Yang Wang
Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry and Fusion
Appearing at ACM TOMM, 26 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of data objects. Often, different modalities are complementary to each other. This fact has motivated a lot of research attention on fusing multi-modal feature spaces to comprehensively characterize data objects. Most existing state-of-the-art methods focus on how to fuse the energy or information from multi-modal spaces to deliver performance superior to their single-modal counterparts. Recently, deep neural networks have been shown to be a powerful architecture for capturing the nonlinear distribution of high-dimensional multimedia data, and naturally so for multi-modal data. Substantial empirical studies have been carried out to demonstrate the advantages of deep multi-modal methods, which can essentially deepen the fusion of deep multi-modal feature spaces. In this paper, we provide a substantial overview of the existing state of the art in the field of multi-modal data analytics, from shallow to deep spaces. Throughout this survey, we further indicate that the critical components of this field are collaboration, adversarial competition, and fusion over multi-modal spaces. Finally, we share our viewpoints on some future directions for this field.
[ { "created": "Mon, 15 Jun 2020 06:42:04 GMT", "version": "v1" } ]
2020-06-16
[ [ "Wang", "Yang", "" ] ]
With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of data objects. Often, different modalities are complementary to each other. This fact has motivated a lot of research attention on fusing multi-modal feature spaces to comprehensively characterize data objects. Most existing state-of-the-art methods focus on how to fuse the energy or information from multi-modal spaces to deliver performance superior to their single-modal counterparts. Recently, deep neural networks have been shown to be a powerful architecture for capturing the nonlinear distribution of high-dimensional multimedia data, and naturally so for multi-modal data. Substantial empirical studies have been carried out to demonstrate the advantages of deep multi-modal methods, which can essentially deepen the fusion of deep multi-modal feature spaces. In this paper, we provide a substantial overview of the existing state of the art in the field of multi-modal data analytics, from shallow to deep spaces. Throughout this survey, we further indicate that the critical components of this field are collaboration, adversarial competition, and fusion over multi-modal spaces. Finally, we share our viewpoints on some future directions for this field.
1909.08386
Cesar A. Gomez
Cesar A. Gomez, Xianbin Wang, and Abdallah Shami
Intelligent Active Queue Management Using Explicit Congestion Notification
To be presented at the IEEE Global Communications Conference -GLOBECOM- 2019
null
null
null
cs.NI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As more end devices get connected, the Internet will become more congested. Various congestion control techniques have been developed on either the transport or the network layer. Active Queue Management (AQM) is a paradigm that aims to mitigate congestion on the network layer through active buffer control to avoid overflow. However, finding the right parameters for an AQM scheme is challenging due to the complexity and dynamics of networks. On the other hand, the Explicit Congestion Notification (ECN) mechanism is a solution that makes incipient congestion on the network layer visible to the transport layer. In this work, we propose to exploit the ECN information to improve AQM algorithms by applying Machine Learning techniques. Our intelligent method uses an artificial neural network to predict congestion and an AQM parameter tuner based on reinforcement learning. The evaluation results show that our solution can enhance the performance of deployed AQM using the existing TCP congestion control mechanisms.
[ { "created": "Wed, 28 Aug 2019 00:16:48 GMT", "version": "v1" } ]
2019-09-19
[ [ "Gomez", "Cesar A.", "" ], [ "Wang", "Xianbin", "" ], [ "Shami", "Abdallah", "" ] ]
As more end devices get connected, the Internet will become more congested. Various congestion control techniques have been developed on either the transport or the network layer. Active Queue Management (AQM) is a paradigm that aims to mitigate congestion on the network layer through active buffer control to avoid overflow. However, finding the right parameters for an AQM scheme is challenging due to the complexity and dynamics of networks. On the other hand, the Explicit Congestion Notification (ECN) mechanism is a solution that makes incipient congestion on the network layer visible to the transport layer. In this work, we propose to exploit the ECN information to improve AQM algorithms by applying Machine Learning techniques. Our intelligent method uses an artificial neural network to predict congestion and an AQM parameter tuner based on reinforcement learning. The evaluation results show that our solution can enhance the performance of deployed AQM using the existing TCP congestion control mechanisms.
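To make the "AQM parameters" being tuned concrete, here is a sketch of a RED-style ECN marking rule, where the thresholds and maximum probability are exactly the kind of knobs an RL tuner would adjust. RED is an assumed example of an AQM scheme; the abstract does not name which AQM algorithm is used.

```python
def red_ecn_mark_prob(avg_q, min_th, max_th, max_p):
    """RED-style ECN marking probability as a function of the averaged
    queue length: no marking below min_th, a linear ramp up to max_p
    between min_th and max_th, and certain marking above max_th.
    (min_th, max_th, max_p) are the tunable AQM parameters."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# With thresholds 10/30 packets and max_p = 0.1, a queue of 20 packets
# sits halfway up the ramp.
p = red_ecn_mark_prob(avg_q=20, min_th=10, max_th=30, max_p=0.1)
```

An ECN-capable sender sees these marks echoed back by the receiver and reduces its rate without any packet being dropped, which is the visibility mechanism the abstract describes.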
1711.11183
Yanbing Mao
Yanbing Mao, Emrah Akyol, and Ziang Zhang
Strategic Topology Switching for Security-Part I: Consensus & Switching Times
working paper, 12 pages
null
null
null
cs.SY cs.MA math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this two-part paper, we consider strategic topology switching for the second-order multi-agent systems under a special class of stealthy attacks, namely the "zero-dynamics" attack (ZDA). The main mathematical tool proposed here is to strategically switch the network topology to detect a possible ZDA. However, it is not clear a priori that such a switching strategy still yields consensus in this switched system, in the normal (un-attacked) operation mode. In Part I, we propose a strategy on the switching times that enables the topology-switching algorithm proposed in Part II to reach the second-order consensus in the absence of a ZDA. Utilizing the theory of stable switched linear systems with unstable subsystems, we characterize sufficient conditions for the dwell time of topology-switching signal to reach consensus. Building on this characterization, we then propose a decentralized time-dependent topology-switching algorithm. The proposed algorithm, used in conjunction with a simplified control protocol, achieves consensus while providing substantial advantages over other control approaches: it relies only on the relative position measurements (without any requirement for velocity measurements); and it does not impose any constraint on the magnitudes of coupling weights. We finally demonstrate our theoretical findings via the numerical simulation results.
[ { "created": "Thu, 30 Nov 2017 01:49:46 GMT", "version": "v1" }, { "created": "Fri, 6 Apr 2018 16:04:48 GMT", "version": "v2" }, { "created": "Mon, 25 Jun 2018 22:56:18 GMT", "version": "v3" }, { "created": "Mon, 4 Mar 2019 16:39:03 GMT", "version": "v4" } ]
2019-03-05
[ [ "Mao", "Yanbing", "" ], [ "Akyol", "Emrah", "" ], [ "Zhang", "Ziang", "" ] ]
In this two-part paper, we consider strategic topology switching for the second-order multi-agent systems under a special class of stealthy attacks, namely the "zero-dynamics" attack (ZDA). The main mathematical tool proposed here is to strategically switch the network topology to detect a possible ZDA. However, it is not clear a priori that such a switching strategy still yields consensus in this switched system, in the normal (un-attacked) operation mode. In Part I, we propose a strategy on the switching times that enables the topology-switching algorithm proposed in Part II to reach the second-order consensus in the absence of a ZDA. Utilizing the theory of stable switched linear systems with unstable subsystems, we characterize sufficient conditions for the dwell time of topology-switching signal to reach consensus. Building on this characterization, we then propose a decentralized time-dependent topology-switching algorithm. The proposed algorithm, used in conjunction with a simplified control protocol, achieves consensus while providing substantial advantages over other control approaches: it relies only on the relative position measurements (without any requirement for velocity measurements); and it does not impose any constraint on the magnitudes of coupling weights. We finally demonstrate our theoretical findings via the numerical simulation results.
2405.07500
Yuzhang Xie
Yuzhang Xie, Jiaying Lu, Joyce Ho, Fadi Nahab, Xiao Hu, Carl Yang
PromptLink: Leveraging Large Language Models for Cross-Source Biomedical Concept Linking
null
Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (Short-Paper Track), 2024
10.1145/3626772.3657904
null
cs.IR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to the discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and can hardly generalize beyond the limited amounts of rules, thesauri, or training samples. Recently, large language models (LLMs) have exhibited impressive results in diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that can fit in the LLM context windows. Then it utilizes an LLM to link concepts through two-stage prompts, where the first-stage prompt aims to elicit the biomedical prior knowledge from the LLM for the concept linking task and the second-stage prompt forces the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two EHR datasets and an external biomedical KG demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework without reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. The source code is available at https://github.com/constantjxyz/PromptLink.
[ { "created": "Mon, 13 May 2024 06:36:30 GMT", "version": "v1" } ]
2024-05-14
[ [ "Xie", "Yuzhang", "" ], [ "Lu", "Jiaying", "" ], [ "Ho", "Joyce", "" ], [ "Nahab", "Fadi", "" ], [ "Hu", "Xiao", "" ], [ "Yang", "Carl", "" ] ]
Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to the discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and can hardly generalize beyond the limited amounts of rules, thesauri, or training samples. Recently, large language models (LLMs) have exhibited impressive results in diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that can fit in the LLM context windows. Then it utilizes an LLM to link concepts through two-stage prompts, where the first-stage prompt aims to elicit the biomedical prior knowledge from the LLM for the concept linking task and the second-stage prompt forces the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two EHR datasets and an external biomedical KG demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework without reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. The source code is available at https://github.com/constantjxyz/PromptLink.
2007.11821
Elad Yom-Tov
Elad Yom-Tov, Vasileios Lampos, Ingemar J. Cox, Michael Edelstein
Providing early indication of regional anomalies in COVID19 case counts in England using search engine queries
null
null
null
null
cs.IR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID19 was first reported in England at the end of January 2020, and by mid-June over 150,000 cases were reported. We assume that, similarly to influenza-like illnesses, people who suffer from COVID19 may query for their symptoms prior to accessing the medical system (or in lieu of it). Therefore, we analyzed searches to Bing from users in England, identifying cases where unexpected rises in relevant symptom searches occurred at specific areas of the country. Our analysis shows that searches for "fever" and "cough" were the most correlated with future case counts, with searches preceding case counts by 16-17 days. Unexpected rises in search patterns were predictive of future case counts multiplying by 2.5 or more within a week, reaching an Area Under Curve (AUC) of 0.64. Similar rises in mortality were predicted with an AUC of approximately 0.61 at a lead time of 3 weeks. Thus, our metric provided Public Health England with an indication which could be used to plan the response to COVID19 and could possibly be utilized to detect regional anomalies of other pathogens.
[ { "created": "Thu, 23 Jul 2020 06:47:17 GMT", "version": "v1" } ]
2020-07-24
[ [ "Yom-Tov", "Elad", "" ], [ "Lampos", "Vasileios", "" ], [ "Cox", "Ingemar J.", "" ], [ "Edelstein", "Michael", "" ] ]
COVID19 was first reported in England at the end of January 2020, and by mid-June over 150,000 cases were reported. We assume that, similarly to influenza-like illnesses, people who suffer from COVID19 may query for their symptoms prior to accessing the medical system (or in lieu of it). Therefore, we analyzed searches to Bing from users in England, identifying cases where unexpected rises in relevant symptom searches occurred at specific areas of the country. Our analysis shows that searches for "fever" and "cough" were the most correlated with future case counts, with searches preceding case counts by 16-17 days. Unexpected rises in search patterns were predictive of future case counts multiplying by 2.5 or more within a week, reaching an Area Under Curve (AUC) of 0.64. Similar rises in mortality were predicted with an AUC of approximately 0.61 at a lead time of 3 weeks. Thus, our metric provided Public Health England with an indication which could be used to plan the response to COVID19 and could possibly be utilized to detect regional anomalies of other pathogens.
1510.05477
Mehmet Basbug
Mehmet Emin Basbug, Koray Ozcan and Senem Velipasalar
Accelerometer based Activity Classification with Variational Inference on Sticky HDP-SLDS
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
As part of the daily monitoring of human activities, wearable sensors and devices are becoming increasingly popular sources of data. With the advent of smartphones equipped with an accelerometer, gyroscope and camera, it is now possible to develop activity classification platforms that everyone can use conveniently. In this paper, we propose a fast inference method for an unsupervised non-parametric time series model, namely variational inference for the sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical System). We show that the proposed algorithm can differentiate various indoor activities such as sitting, walking, turning, going up/down the stairs and taking the elevator using only the accelerometer of an Android smartphone, the Samsung Galaxy S4. We used the front camera of the smartphone to annotate activity types precisely. We compared the proposed method with Hidden Markov Models with Gaussian emission probabilities on a dataset of 10 subjects, and showed the efficacy of the stickiness property. We further compared variational inference to the Gibbs sampler on the same model and show that variational inference is faster by one order of magnitude.
[ { "created": "Mon, 19 Oct 2015 13:58:37 GMT", "version": "v1" } ]
2015-10-20
[ [ "Basbug", "Mehmet Emin", "" ], [ "Ozcan", "Koray", "" ], [ "Velipasalar", "Senem", "" ] ]
As part of the daily monitoring of human activities, wearable sensors and devices are becoming increasingly popular sources of data. With the advent of smartphones equipped with an accelerometer, gyroscope and camera, it is now possible to develop activity classification platforms that everyone can use conveniently. In this paper, we propose a fast inference method for an unsupervised non-parametric time series model, namely variational inference for the sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical System). We show that the proposed algorithm can differentiate various indoor activities such as sitting, walking, turning, going up/down the stairs and taking the elevator using only the accelerometer of an Android smartphone, the Samsung Galaxy S4. We used the front camera of the smartphone to annotate activity types precisely. We compared the proposed method with Hidden Markov Models with Gaussian emission probabilities on a dataset of 10 subjects, and showed the efficacy of the stickiness property. We further compared variational inference to the Gibbs sampler on the same model and show that variational inference is faster by one order of magnitude.
1411.7973
Marcos R Vieira
Matthias Kormaksson, Luciano Barbosa, Marcos R. Vieira, Bianca Zadrozny
Bus Travel Time Predictions Using Additive Models
11 pages, this is the technical report supporting the IEEE 2014 International Conference on Data Mining (ICDM) submission with the same title
null
10.1109/ICDM.2014.107
null
cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many factors can affect the predictability of public bus services such as traffic, weather and local events. Other aspects, such as day of week or hour of day, may influence bus travel times as well, either directly or in conjunction with other variables. However, the exact nature of such relationships between travel times and predictor variables is, in most situations, not known. In this paper we develop a framework that allows for flexible modeling of bus travel times through the use of Additive Models. In particular, we model travel times as a sum of linear as well as nonlinear terms that are modeled as smooth functions of predictor variables. The proposed class of models provides a principled statistical framework that is highly flexible in terms of model building. The experimental results demonstrate uniformly superior performance of our best model as compared to previous prediction methods when applied to a very large GPS data set obtained from buses operating in the city of Rio de Janeiro.
[ { "created": "Fri, 28 Nov 2014 18:45:31 GMT", "version": "v1" } ]
2016-11-15
[ [ "Kormaksson", "Matthias", "" ], [ "Barbosa", "Luciano", "" ], [ "Vieira", "Marcos R.", "" ], [ "Zadrozny", "Bianca", "" ] ]
Many factors can affect the predictability of public bus services such as traffic, weather and local events. Other aspects, such as day of week or hour of day, may influence bus travel times as well, either directly or in conjunction with other variables. However, the exact nature of such relationships between travel times and predictor variables is, in most situations, not known. In this paper we develop a framework that allows for flexible modeling of bus travel times through the use of Additive Models. In particular, we model travel times as a sum of linear as well as nonlinear terms that are modeled as smooth functions of predictor variables. The proposed class of models provides a principled statistical framework that is highly flexible in terms of model building. The experimental results demonstrate uniformly superior performance of our best model as compared to previous prediction methods when applied to a very large GPS data set obtained from buses operating in the city of Rio de Janeiro.
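The additive structure described above, travel time as a sum of smooth per-feature terms, is typically fit by backfitting. The sketch below uses centered polynomial fits as stand-in smoothers on synthetic data; the paper's actual smoothers, features, and GPS data are not reproduced here.

```python
import numpy as np

def backfit_additive(X, y, degree=3, iters=20):
    """Backfitting for an additive model y ~ alpha + sum_j f_j(x_j).
    Each f_j is a centered polynomial fit (a simple stand-in for the
    smooth terms used in generalized additive models)."""
    n, p = X.shape
    alpha = y.mean()
    fitted = np.zeros((n, p))              # current estimate of each f_j(x_j)
    coefs = [None] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual: remove every component except feature j.
            r = y - alpha - fitted.sum(axis=1) + fitted[:, j]
            coefs[j] = np.polyfit(X[:, j], r, degree)
            fj = np.polyval(coefs[j], X[:, j])
            fitted[:, j] = fj - fj.mean()  # center to keep alpha identifiable
    return alpha, coefs, alpha + fitted.sum(axis=1)

# Recover an additive ground truth from synthetic, noise-free data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2
alpha, coefs, yhat = backfit_additive(X, y)
```

The per-feature functions recovered this way are directly interpretable (e.g. travel time versus hour of day), which is the flexibility-plus-interpretability trade-off the abstract emphasizes.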
2312.08878
Yuntian Chen
Qinglong Cao, Zhengqin Xu, Yuntian Chen, Chao Ma, Xiaokang Yang
Domain Prompt Learning with Quaternion Networks
null
null
null
null
cs.CV cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prompt learning has emerged as an effective and data-efficient technique in large Vision-Language Models (VLMs). However, when adapting VLMs to specialized domains such as remote sensing and medical imaging, domain prompt learning remains underexplored. While large-scale domain-specific foundation models can help tackle this challenge, their concentration on a single vision level makes it challenging to prompt both vision and language modalities. To overcome this, we propose to leverage domain-specific knowledge from domain-specific foundation models to transfer the robust recognition ability of VLMs from generalized to specialized domains, using quaternion networks. Specifically, the proposed method involves using domain-specific vision features from domain-specific foundation models to guide the transformation of generalized contextual embeddings from the language branch into a specialized space within the quaternion networks. Moreover, we present a hierarchical approach that generates vision prompt features by analyzing intermodal relationships between hierarchical language prompt features and domain-specific vision features. In this way, quaternion networks can effectively mine the intermodal relationships in the specific domain, facilitating domain-specific vision-language contrastive learning. Extensive experiments on domain-specific datasets show that our proposed method achieves new state-of-the-art results in prompt learning.
[ { "created": "Tue, 12 Dec 2023 08:49:39 GMT", "version": "v1" } ]
2023-12-15
[ [ "Cao", "Qinglong", "" ], [ "Xu", "Zhengqin", "" ], [ "Chen", "Yuntian", "" ], [ "Ma", "Chao", "" ], [ "Yang", "Xiaokang", "" ] ]
Prompt learning has emerged as an effective and data-efficient technique in large Vision-Language Models (VLMs). However, when adapting VLMs to specialized domains such as remote sensing and medical imaging, domain prompt learning remains underexplored. While large-scale domain-specific foundation models can help tackle this challenge, their concentration on a single vision level makes it challenging to prompt both vision and language modalities. To overcome this, we propose to leverage domain-specific knowledge from domain-specific foundation models to transfer the robust recognition ability of VLMs from generalized to specialized domains, using quaternion networks. Specifically, the proposed method involves using domain-specific vision features from domain-specific foundation models to guide the transformation of generalized contextual embeddings from the language branch into a specialized space within the quaternion networks. Moreover, we present a hierarchical approach that generates vision prompt features by analyzing intermodal relationships between hierarchical language prompt features and domain-specific vision features. In this way, quaternion networks can effectively mine the intermodal relationships in the specific domain, facilitating domain-specific vision-language contrastive learning. Extensive experiments on domain-specific datasets show that our proposed method achieves new state-of-the-art results in prompt learning.
1810.01520
Hamed Zamani
Hamed Zamani, Markus Schedl, Paul Lamere, Ching-Wei Chen
An Analysis of Approaches Taken in the ACM RecSys Challenge 2018 for Automatic Music Playlist Continuation
null
null
null
null
cs.IR cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ACM Recommender Systems Challenge 2018 focused on the task of automatic music playlist continuation, which is a form of the more general task of sequential recommendation. Given a playlist of arbitrary length with some additional meta-data, the task was to recommend up to 500 tracks that fit the target characteristics of the original playlist. For the RecSys Challenge, Spotify released a dataset of one million user-generated playlists. Participants could compete in two tracks, i.e., main and creative tracks. Participants in the main track were only allowed to use the provided training set, however, in the creative track, the use of external public sources was permitted. In total, 113 teams submitted 1,228 runs to the main track; 33 teams submitted 239 runs to the creative track. The highest performing team in the main track achieved an R-precision of 0.2241, an NDCG of 0.3946, and an average number of recommended songs clicks of 1.784. In the creative track, an R-precision of 0.2233, an NDCG of 0.3939, and a click rate of 1.785 was obtained by the best team. This article provides an overview of the challenge, including motivation, task definition, dataset description, and evaluation. We further report and analyze the results obtained by the top performing teams in each track and explore the approaches taken by the winners. We finally summarize our key findings, discuss generalizability of approaches and results to domains other than music, and list the open avenues and possible future directions in the area of automatic playlist continuation.
[ { "created": "Tue, 2 Oct 2018 21:19:19 GMT", "version": "v1" }, { "created": "Sat, 31 Aug 2019 22:13:33 GMT", "version": "v2" } ]
2019-09-04
[ [ "Zamani", "Hamed", "" ], [ "Schedl", "Markus", "" ], [ "Lamere", "Paul", "" ], [ "Chen", "Ching-Wei", "" ] ]
The ACM Recommender Systems Challenge 2018 focused on the task of automatic music playlist continuation, which is a form of the more general task of sequential recommendation. Given a playlist of arbitrary length with some additional meta-data, the task was to recommend up to 500 tracks that fit the target characteristics of the original playlist. For the RecSys Challenge, Spotify released a dataset of one million user-generated playlists. Participants could compete in two tracks, i.e., main and creative tracks. Participants in the main track were only allowed to use the provided training set; however, in the creative track, the use of external public sources was permitted. In total, 113 teams submitted 1,228 runs to the main track; 33 teams submitted 239 runs to the creative track. The highest performing team in the main track achieved an R-precision of 0.2241, an NDCG of 0.3946, and an average number of recommended-song clicks of 1.784. In the creative track, an R-precision of 0.2233, an NDCG of 0.3939, and a click rate of 1.785 was obtained by the best team. This article provides an overview of the challenge, including motivation, task definition, dataset description, and evaluation. We further report and analyze the results obtained by the top performing teams in each track and explore the approaches taken by the winners. We finally summarize our key findings, discuss generalizability of approaches and results to domains other than music, and list the open avenues and possible future directions in the area of automatic playlist continuation.
2201.12790
Bin Sheng
Bin Sheng, Gregory Gutin
Solving Routing Problems via Important Cuts
There is an error in the proof of Lemma 4
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We introduce a novel approach of using important cuts which allowed us to design significantly faster fixed-parameter tractable (FPT) algorithms for the following routing problems: the Mixed Chinese Postman Problem parameterized by the number of directed edges (Gutin et al., JCSS 2017), the Minimum Shared Edges problem (MSE) parameterized by the number p of paths between two specified vertices (Fluschnik et al., JCSS 2019), and the Weighted Min Cut Prevention problem (Gruttemeier et al., WG 2021). The Minimum Vulnerability problem (MV) is a generalization of MSE (Assadi et al., Algorithmica 2014). The only known FPT algorithm for MV parameterized by p (the same parameter as for MSE) was for chordal graphs (Aoki et al., JCO 2018). We design an FPT algorithm for MV on all undirected graphs.
[ { "created": "Sun, 30 Jan 2022 11:17:11 GMT", "version": "v1" }, { "created": "Wed, 8 Jun 2022 10:02:51 GMT", "version": "v2" } ]
2022-06-09
[ [ "Sheng", "Bin", "" ], [ "Gutin", "Gregory", "" ] ]
We introduce a novel approach of using important cuts which allowed us to design significantly faster fixed-parameter tractable (FPT) algorithms for the following routing problems: the Mixed Chinese Postman Problem parameterized by the number of directed edges (Gutin et al., JCSS 2017), the Minimum Shared Edges problem (MSE) parameterized by the number p of paths between two specified vertices (Fluschnik et al., JCSS 2019), and the Weighted Min Cut Prevention problem (Gruttemeier et al., WG 2021). The Minimum Vulnerability problem (MV) is a generalization of MSE (Assadi et al., Algorithmica 2014). The only known FPT algorithm for MV parameterized by p (the same parameter as for MSE) was for chordal graphs (Aoki et al., JCO 2018). We design an FPT algorithm for MV on all undirected graphs.
2101.11956
Pere-Llu\'is Huguet Cabot
Pere-Llu\'is Huguet-Cabot and David Abadi and Agneta Fischer and Ekaterina Shutova
Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions
Camera-ready version in EACL 2021
null
10.18653/v1/2021.eacl-main.165
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational modelling of political discourse tasks has become an increasingly important area of research in natural language processing. Populist rhetoric has risen across the political sphere in recent years; however, computational approaches to it have been scarce due to its complex nature. In this paper, we present the new $\textit{Us vs. Them}$ dataset, consisting of 6861 Reddit comments annotated for populist attitudes and the first large-scale computational models of this phenomenon. We investigate the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with these. We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage and demonstrate the importance of emotion and group identification as auxiliary tasks.
[ { "created": "Thu, 28 Jan 2021 12:18:19 GMT", "version": "v1" }, { "created": "Wed, 10 Feb 2021 21:53:40 GMT", "version": "v2" }, { "created": "Sun, 14 Feb 2021 17:42:12 GMT", "version": "v3" } ]
2022-06-16
[ [ "Huguet-Cabot", "Pere-Lluís", "" ], [ "Abadi", "David", "" ], [ "Fischer", "Agneta", "" ], [ "Shutova", "Ekaterina", "" ] ]
Computational modelling of political discourse tasks has become an increasingly important area of research in natural language processing. Populist rhetoric has risen across the political sphere in recent years; however, computational approaches to it have been scarce due to its complex nature. In this paper, we present the new $\textit{Us vs. Them}$ dataset, consisting of 6861 Reddit comments annotated for populist attitudes and the first large-scale computational models of this phenomenon. We investigate the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with these. We set a baseline for two tasks related to populist attitudes and present a set of multi-task learning models that leverage and demonstrate the importance of emotion and group identification as auxiliary tasks.
2302.03201
Kaiwen Wang
Kaiwen Wang and Nathan Kallus and Wen Sun
Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR
Accepted at ICML 2023
null
null
null
cs.LG math.OC math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$. Starting with multi-arm bandits (MABs), we show the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes, and that it is achieved by an Upper Confidence Bound algorithm with a novel Bernstein bonus. For online RL in tabular Markov Decision Processes (MDPs), we show a minimax regret lower bound of $\Omega(\sqrt{\tau^{-1}SAK})$ (with normalized cumulative rewards), where $S$ is the number of states, and we propose a novel bonus-driven Value Iteration procedure. We show that our algorithm achieves the optimal regret of $\widetilde O(\sqrt{\tau^{-1}SAK})$ under a continuity assumption and in general attains a near-optimal regret of $\widetilde O(\tau^{-1}\sqrt{SAK})$, which is minimax-optimal for constant $\tau$. This improves on the best available bounds. By discretizing rewards appropriately, our algorithms are computationally efficient.
[ { "created": "Tue, 7 Feb 2023 02:22:31 GMT", "version": "v1" }, { "created": "Wed, 24 May 2023 21:47:10 GMT", "version": "v2" } ]
2023-05-26
[ [ "Wang", "Kaiwen", "" ], [ "Kallus", "Nathan", "" ], [ "Sun", "Wen", "" ] ]
In this paper, we study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$. Starting with multi-arm bandits (MABs), we show the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes, and that it is achieved by an Upper Confidence Bound algorithm with a novel Bernstein bonus. For online RL in tabular Markov Decision Processes (MDPs), we show a minimax regret lower bound of $\Omega(\sqrt{\tau^{-1}SAK})$ (with normalized cumulative rewards), where $S$ is the number of states, and we propose a novel bonus-driven Value Iteration procedure. We show that our algorithm achieves the optimal regret of $\widetilde O(\sqrt{\tau^{-1}SAK})$ under a continuity assumption and in general attains a near-optimal regret of $\widetilde O(\tau^{-1}\sqrt{SAK})$, which is minimax-optimal for constant $\tau$. This improves on the best available bounds. By discretizing rewards appropriately, our algorithms are computationally efficient.
2005.08076
Richard Jiang
Fraser Young, L Zhang, Richard Jiang, Han Liu and Conor Wall
A Deep Learning based Wearable Healthcare IoT Device for AI-enabled Hearing Assistance Automation
null
The 2020 International Conference on Machine Learning and Cybernetics
null
null
cs.HC cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
With the recent booming of artificial intelligence (AI), particularly deep learning techniques, digital healthcare is one of the prevalent areas that could gain benefits from AI-enabled functionality. This research presents a novel AI-enabled Internet of Things (IoT) device operating from the ESP-8266 platform capable of assisting those who suffer from impairment of hearing or deafness to communicate with others in conversations. In the proposed solution, a server application is created that leverages Google's online speech recognition service to convert the received conversations into texts, then deployed to a micro-display attached to the glasses to display the conversation contents to deaf people, to enable and assist conversation as normal with the general population. Furthermore, in order to raise alert of traffic or dangerous scenarios, an 'urban-emergency' classifier is developed using a deep learning model, Inception-v4, with transfer learning to detect/recognize alerting/alarming sounds, such as a horn sound or a fire alarm, with texts generated to alert the prospective user. The training of Inception-v4 was carried out on a consumer desktop PC and then implemented into the AI based IoT application. The empirical results indicate that the developed prototype system achieves an accuracy rate of 92% for sound recognition and classification with real-time performance.
[ { "created": "Sat, 16 May 2020 19:42:16 GMT", "version": "v1" } ]
2020-05-19
[ [ "Young", "Fraser", "" ], [ "Zhang", "L", "" ], [ "Jiang", "Richard", "" ], [ "Liu", "Han", "" ], [ "Wall", "Conor", "" ] ]
With the recent booming of artificial intelligence (AI), particularly deep learning techniques, digital healthcare is one of the prevalent areas that could gain benefits from AI-enabled functionality. This research presents a novel AI-enabled Internet of Things (IoT) device operating from the ESP-8266 platform capable of assisting those who suffer from impairment of hearing or deafness to communicate with others in conversations. In the proposed solution, a server application is created that leverages Google's online speech recognition service to convert the received conversations into texts, then deployed to a micro-display attached to the glasses to display the conversation contents to deaf people, to enable and assist conversation as normal with the general population. Furthermore, in order to raise alert of traffic or dangerous scenarios, an 'urban-emergency' classifier is developed using a deep learning model, Inception-v4, with transfer learning to detect/recognize alerting/alarming sounds, such as a horn sound or a fire alarm, with texts generated to alert the prospective user. The training of Inception-v4 was carried out on a consumer desktop PC and then implemented into the AI based IoT application. The empirical results indicate that the developed prototype system achieves an accuracy rate of 92% for sound recognition and classification with real-time performance.
2009.01468
Nicholas Wilkins
Nicholas Wilkins, Beck Cordes Galbraith, Ifeoma Nwogu
Modeling Global Body Configurations in American Sign Language
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
American Sign Language (ASL) is the fourth most commonly used language in the United States and is the language most commonly used by Deaf people in the United States and the English-speaking regions of Canada. Unfortunately, until recently, ASL received little research. This is due, in part, to its delayed recognition as a language until William C. Stokoe's publication in 1960. Limited data has been a long-standing obstacle to ASL research and computational modeling. The lack of large-scale datasets has prohibited many modern machine-learning techniques, such as Neural Machine Translation, from being applied to ASL. In addition, the modality required to capture sign language (i.e. video) is complex in natural settings (as one must deal with background noise, motion blur, and the curse of dimensionality). Finally, when compared with spoken languages, such as English, there has been limited research conducted into the linguistics of ASL. We realize a simplified version of Liddell and Johnson's Movement-Hold (MH) Model using a Probabilistic Graphical Model (PGM). We trained our model on ASLing, a dataset collected from three fluent ASL signers. We evaluate our PGM against other models to determine its ability to model ASL. Finally, we interpret various aspects of the PGM and draw conclusions about ASL phonetics. The main contributions of this paper are
[ { "created": "Thu, 3 Sep 2020 06:20:10 GMT", "version": "v1" } ]
2020-09-04
[ [ "Wilkins", "Nicholas", "" ], [ "Galbraith", "Beck Cordes", "" ], [ "Nwogu", "Ifeoma", "" ] ]
American Sign Language (ASL) is the fourth most commonly used language in the United States and is the language most commonly used by Deaf people in the United States and the English-speaking regions of Canada. Unfortunately, until recently, ASL received little research. This is due, in part, to its delayed recognition as a language until William C. Stokoe's publication in 1960. Limited data has been a long-standing obstacle to ASL research and computational modeling. The lack of large-scale datasets has prohibited many modern machine-learning techniques, such as Neural Machine Translation, from being applied to ASL. In addition, the modality required to capture sign language (i.e. video) is complex in natural settings (as one must deal with background noise, motion blur, and the curse of dimensionality). Finally, when compared with spoken languages, such as English, there has been limited research conducted into the linguistics of ASL. We realize a simplified version of Liddell and Johnson's Movement-Hold (MH) Model using a Probabilistic Graphical Model (PGM). We trained our model on ASLing, a dataset collected from three fluent ASL signers. We evaluate our PGM against other models to determine its ability to model ASL. Finally, we interpret various aspects of the PGM and draw conclusions about ASL phonetics. The main contributions of this paper are
1010.3988
Pei Wu
Pei Wu, Chi Ho Yeung, Weiping Liu, Cihang Jin, Yi-Cheng Zhang
Time-aware Collaborative Filtering with the Piecewise Decay Function
4 pages. 1 figure
null
null
null
cs.IR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we determine the appropriate decay function for item-based collaborative filtering (CF). Instead of intuitive deduction, we introduce the Similarity-Signal-to-Noise-Ratio (SSNR) to quantify the impacts of rated items on current recommendations. By measuring the variation of SSNR over time, drift in user interest is well visualized and quantified. Based on the trend changes of SSNR, the piecewise decay function is thus devised and incorporated to build our time-aware CF algorithm. Experiments show that the proposed algorithm strongly outperforms the conventional item-based CF algorithm and other time-aware algorithms with various decay functions.
[ { "created": "Tue, 19 Oct 2010 17:32:26 GMT", "version": "v1" } ]
2010-10-20
[ [ "Wu", "Pei", "" ], [ "Yeung", "Chi Ho", "" ], [ "Liu", "Weiping", "" ], [ "Jin", "Cihang", "" ], [ "Zhang", "Yi-Cheng", "" ] ]
In this paper, we determine the appropriate decay function for item-based collaborative filtering (CF). Instead of intuitive deduction, we introduce the Similarity-Signal-to-Noise-Ratio (SSNR) to quantify the impacts of rated items on current recommendations. By measuring the variation of SSNR over time, drift in user interest is well visualized and quantified. Based on the trend changes of SSNR, the piecewise decay function is thus devised and incorporated to build our time-aware CF algorithm. Experiments show that the proposed algorithm strongly outperforms the conventional item-based CF algorithm and other time-aware algorithms with various decay functions.
2104.06443
Julia Mendelsohn
Julia Mendelsohn, Ceren Budak, David Jurgens
Modeling Framing in Immigration Discourse on Social Media
Accepted at NAACL 2021 (camera-ready), Annotation codebook, data, models, and code available at https://github.com/juliamendelsohn/framing
null
10.18653/v1/2021.naacl-main.179
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
The framing of political issues can influence policy and public opinion. Even though the public plays a key role in creating and spreading frames, little is known about how ordinary people on social media frame political issues. By creating a new dataset of immigration-related tweets labeled for multiple framing typologies from political communication theory, we develop supervised models to detect frames. We demonstrate how users' ideology and region impact framing choices, and how a message's framing influences audience responses. We find that the more commonly-used issue-generic frames obscure important ideological and regional patterns that are only revealed by immigration-specific frames. Furthermore, frames oriented towards human interests, culture, and politics are associated with higher user engagement. This large-scale analysis of a complex social and linguistic phenomenon contributes to both NLP and social science research.
[ { "created": "Tue, 13 Apr 2021 18:35:44 GMT", "version": "v1" } ]
2021-09-09
[ [ "Mendelsohn", "Julia", "" ], [ "Budak", "Ceren", "" ], [ "Jurgens", "David", "" ] ]
The framing of political issues can influence policy and public opinion. Even though the public plays a key role in creating and spreading frames, little is known about how ordinary people on social media frame political issues. By creating a new dataset of immigration-related tweets labeled for multiple framing typologies from political communication theory, we develop supervised models to detect frames. We demonstrate how users' ideology and region impact framing choices, and how a message's framing influences audience responses. We find that the more commonly-used issue-generic frames obscure important ideological and regional patterns that are only revealed by immigration-specific frames. Furthermore, frames oriented towards human interests, culture, and politics are associated with higher user engagement. This large-scale analysis of a complex social and linguistic phenomenon contributes to both NLP and social science research.
2204.12084
Khay Boon Hong
Khay Boon Hong
U-Net with ResNet Backbone for Garment Landmarking Purpose
A draft for purpose of archive, not intended for official academic uses
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
We build a heatmap-based landmark detection model to locate important landmarks on 2D RGB garment images. The main goal is to detect edges, corners and suitable interior region of the garments. This let us re-create 3D garments in modern 3D editing software by incorporate landmark detection model and texture unwrapping. We use a U-net architecture with ResNet backbone to build the model. With an appropriate loss function, we are able to train a moderately robust model.
[ { "created": "Tue, 26 Apr 2022 05:47:27 GMT", "version": "v1" } ]
2022-04-27
[ [ "Hong", "Khay Boon", "" ] ]
We build a heatmap-based landmark detection model to locate important landmarks on 2D RGB garment images. The main goal is to detect edges, corners and suitable interior regions of the garments. This lets us re-create 3D garments in modern 3D editing software by incorporating the landmark detection model and texture unwrapping. We use a U-net architecture with a ResNet backbone to build the model. With an appropriate loss function, we are able to train a moderately robust model.
2404.05231
Xiaofan Li
Xiaofan Li, Zhizhong Zhang, Xin Tan, Chengwei Chen, Yanyun Qu, Yuan Xie, Lizhuang Ma
PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
Accepted by CVPR2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vision-language model has brought great improvement to few-shot industrial anomaly detection, which usually needs to design of hundreds of prompts through prompt engineering. For automated scenarios, we first use conventional prompt learning with many-class paradigm as the baseline to automatically learn prompts but found that it can not work well in one-class anomaly detection. To address the above problem, this paper proposes a one-class prompt learning method for few-shot anomaly detection, termed PromptAD. First, we propose semantic concatenation which can transpose normal prompts into anomaly prompts by concatenating normal prompts with anomaly suffixes, thus constructing a large number of negative samples used to guide prompt learning in one-class setting. Furthermore, to mitigate the training challenge caused by the absence of anomaly images, we introduce the concept of explicit anomaly margin, which is used to explicitly control the margin between normal prompt features and anomaly prompt features through a hyper-parameter. For image-level/pixel-level anomaly detection, PromptAD achieves first place in 11/12 few-shot settings on MVTec and VisA.
[ { "created": "Mon, 8 Apr 2024 06:53:30 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 08:02:46 GMT", "version": "v2" } ]
2024-07-17
[ [ "Li", "Xiaofan", "" ], [ "Zhang", "Zhizhong", "" ], [ "Tan", "Xin", "" ], [ "Chen", "Chengwei", "" ], [ "Qu", "Yanyun", "" ], [ "Xie", "Yuan", "" ], [ "Ma", "Lizhuang", "" ] ]
The vision-language model has brought great improvement to few-shot industrial anomaly detection, which usually requires designing hundreds of prompts through prompt engineering. For automated scenarios, we first use conventional prompt learning with the many-class paradigm as the baseline to automatically learn prompts, but found that it cannot work well in one-class anomaly detection. To address the above problem, this paper proposes a one-class prompt learning method for few-shot anomaly detection, termed PromptAD. First, we propose semantic concatenation, which can transpose normal prompts into anomaly prompts by concatenating normal prompts with anomaly suffixes, thus constructing a large number of negative samples used to guide prompt learning in the one-class setting. Furthermore, to mitigate the training challenge caused by the absence of anomaly images, we introduce the concept of explicit anomaly margin, which is used to explicitly control the margin between normal prompt features and anomaly prompt features through a hyper-parameter. For image-level/pixel-level anomaly detection, PromptAD achieves first place in 11/12 few-shot settings on MVTec and VisA.
2105.13697
Mingfu Xue
Mingfu Xue, Zhiyu Wu, Jian Wang, Yushu Zhang, Weiqiang Liu
AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption
null
IEEE Transactions on Emerging Topics in Computing, 2022
10.1109/TETC.2022.3231012
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A well-trained DNN model can be regarded as an intellectual property (IP) of the model owner. To date, many DNN IP protection methods have been proposed, but most of them are watermarking based verification methods where model owners can only verify their ownership passively after the copyright of DNN models has been infringed. In this paper, we propose an effective framework to actively protect the DNN IP from infringement. Specifically, we encrypt the DNN model's parameters by perturbing them with well-crafted adversarial perturbations. With the encrypted parameters, the accuracy of the DNN model drops significantly, which can prevent malicious infringers from using the model. After the encryption, the positions of encrypted parameters and the values of the added adversarial perturbations form a secret key. Authorized user can use the secret key to decrypt the model. Compared with the watermarking methods which only passively verify the ownership after the infringement occurs, the proposed method can prevent infringement in advance. Moreover, compared with most of the existing active DNN IP protection methods, the proposed method does not require additional training process of the model, which introduces low computational overhead. Experimental results show that, after the encryption, the test accuracy of the model drops by 80.65%, 81.16%, and 87.91% on Fashion-MNIST, CIFAR-10, and GTSRB, respectively. Moreover, the proposed method only needs to encrypt an extremely low number of parameters, and the proportion of the encrypted parameters of all the model's parameters is as low as 0.000205%. The experimental results also indicate that, the proposed method is robust against model fine-tuning attack and model pruning attack. Moreover, for the adaptive attack where attackers know the detailed steps of the proposed method, the proposed method is also demonstrated to be robust.
[ { "created": "Fri, 28 May 2021 09:42:35 GMT", "version": "v1" } ]
2023-05-26
[ [ "Xue", "Mingfu", "" ], [ "Wu", "Zhiyu", "" ], [ "Wang", "Jian", "" ], [ "Zhang", "Yushu", "" ], [ "Liu", "Weiqiang", "" ] ]
A well-trained DNN model can be regarded as an intellectual property (IP) of the model owner. To date, many DNN IP protection methods have been proposed, but most of them are watermarking-based verification methods where model owners can only verify their ownership passively after the copyright of DNN models has been infringed. In this paper, we propose an effective framework to actively protect the DNN IP from infringement. Specifically, we encrypt the DNN model's parameters by perturbing them with well-crafted adversarial perturbations. With the encrypted parameters, the accuracy of the DNN model drops significantly, which can prevent malicious infringers from using the model. After the encryption, the positions of the encrypted parameters and the values of the added adversarial perturbations form a secret key. Authorized users can use the secret key to decrypt the model. Compared with the watermarking methods, which only passively verify the ownership after the infringement occurs, the proposed method can prevent infringement in advance. Moreover, compared with most of the existing active DNN IP protection methods, the proposed method does not require an additional training process of the model, which introduces low computational overhead. Experimental results show that, after the encryption, the test accuracy of the model drops by 80.65%, 81.16%, and 87.91% on Fashion-MNIST, CIFAR-10, and GTSRB, respectively. Moreover, the proposed method only needs to encrypt an extremely low number of parameters, and the proportion of encrypted parameters among all the model's parameters is as low as 0.000205%. The experimental results also indicate that the proposed method is robust against the model fine-tuning attack and the model pruning attack. Moreover, for the adaptive attack where attackers know the detailed steps of the proposed method, the proposed method is also demonstrated to be robust.
2011.08122
Yong Deng
Yong Deng and Min Dong
Memory-Rate Tradeoff for Caching with Uncoded Placement under Nonuniform File Popularity
To appear in the Asilomar Conference on Signals, Systems, and Computers, 2020
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For caching with nonuniform file popularity, we aim to characterize the memory-rate tradeoff under uncoded cache placement. We consider the recently proposed Modified Coded Caching Scheme (MCCS) with the optimized cache placement based on the popularity-first approach to minimize the average delivery rate. We introduce two information-theoretic lower bounds on the average rate for caching under uncoded placement. For $K = 2$ users, we show that the optimized MCCS attains the lower bound and is optimal for caching with uncoded placement. For general $K$ users with distinct file requests, the optimized MCCS attains the popularity-first-based lower bound. When there are redundant file requests among $K$ users, we show a possible gap between the optimized MCCS and the lower bounds, which is attributed to zero-padding commonly used for coded delivery. We analyze the impact of zero-padding and its limitation. Simulation study shows that the loss is very small in general and only exists in some limited cases.
[ { "created": "Mon, 16 Nov 2020 17:35:27 GMT", "version": "v1" }, { "created": "Sat, 16 Jan 2021 02:23:09 GMT", "version": "v2" } ]
2021-01-19
[ [ "Deng", "Yong", "" ], [ "Dong", "Min", "" ] ]
For caching with nonuniform file popularity, we aim to characterize the memory-rate tradeoff under uncoded cache placement. We consider the recently proposed Modified Coded Caching Scheme (MCCS) with the optimized cache placement based on the popularity-first approach to minimize the average delivery rate. We introduce two information-theoretic lower bounds on the average rate for caching under uncoded placement. For $K = 2$ users, we show that the optimized MCCS attains the lower bound and is optimal for caching with uncoded placement. For general $K$ users with distinct file requests, the optimized MCCS attains the popularity-first-based lower bound. When there are redundant file requests among the $K$ users, we show a possible gap between the optimized MCCS and the lower bounds, which is attributed to the zero-padding commonly used for coded delivery. We analyze the impact of zero-padding and its limitation. A simulation study shows that the loss is very small in general and only exists in some limited cases.
1802.09012
Min Chen
Min Chen, Kelly Gaither, Nigel W. John, and Brian McCann
Cost-benefit Analysis of Visualization in Virtual Environments
Submitted to SciVis 2017 on 31 March 2017 and was not accepted. Authors' feedback about the SciVis 2017 reviews can be found in the cover letter accompanying the EuroVis submission. Revised with a major extension and submitted to EuroVis 2018 on 13 December 2017, but was not accepted
IEEE Transactions on Visualization and Computer Graphics, 25(1), 2019
10.1109/TVCG.2018.2865025
null
cs.HC cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visualization and virtual environments (VEs) have been two interconnected parallel strands in visual computing for decades. Some VEs have been purposely developed for visualization applications, while many visualization applications are exemplary showcases in general-purpose VEs. Because of the development and operation costs of VEs, the majority of visualization applications in practice are yet to benefit from the capacity of VEs. In this paper, we examine this perplexity from an information-theoretic perspective. Our objectives are to conduct cost-benefit analysis on typical VE systems (including augmented and mixed reality, theatre-based systems, and large powerwalls), to explain why some visualization applications benefit more from VEs than others, and to sketch out pathways for the future development of visualization applications in VEs. We support our theoretical propositions and analysis using theories and discoveries in the literature of cognitive sciences and the practical evidence reported in the literatures of visualization and VEs.
[ { "created": "Sun, 25 Feb 2018 14:14:42 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2018 09:30:32 GMT", "version": "v2" } ]
2018-12-04
[ [ "Chen", "Min", "" ], [ "Gaither", "Kelly", "" ], [ "John", "Nigel W.", "" ], [ "McCann", "Brian", "" ] ]
Visualization and virtual environments (VEs) have been two interconnected parallel strands in visual computing for decades. Some VEs have been purposely developed for visualization applications, while many visualization applications are exemplary showcases in general-purpose VEs. Because of the development and operation costs of VEs, the majority of visualization applications in practice are yet to benefit from the capacity of VEs. In this paper, we examine this perplexity from an information-theoretic perspective. Our objectives are to conduct cost-benefit analysis on typical VE systems (including augmented and mixed reality, theatre-based systems, and large powerwalls), to explain why some visualization applications benefit more from VEs than others, and to sketch out pathways for the future development of visualization applications in VEs. We support our theoretical propositions and analysis using theories and discoveries in the literature of cognitive sciences and the practical evidence reported in the literatures of visualization and VEs.
2304.14238
Michael Sejr Schlichtkrull
Michael Schlichtkrull, Nedjma Ousidhoum, Andreas Vlachos
The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who
Accepted to the Findings of EMNLP 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers, and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.
[ { "created": "Thu, 27 Apr 2023 14:55:23 GMT", "version": "v1" }, { "created": "Wed, 8 Nov 2023 12:10:49 GMT", "version": "v2" } ]
2023-11-09
[ [ "Schlichtkrull", "Michael", "" ], [ "Ousidhoum", "Nedjma", "" ], [ "Vlachos", "Andreas", "" ] ]
Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers, and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.
2306.14878
Yilun Xu
Yilun Xu, Mingyang Deng, Xiang Cheng, Yonglong Tian, Ziming Liu, Tommi Jaakkola
Restart Sampling for Improving Generative Processes
Code is available at https://github.com/Newbeeer/diffusion_restart_sampling
null
null
null
cs.LG cs.CV stat.CO stat.ML
http://creativecommons.org/licenses/by/4.0/
Generative processes that involve solving differential equations, such as diffusion models, frequently necessitate balancing speed and quality. ODE-based samplers are fast but plateau in performance while SDE-based samplers deliver higher sample quality at the cost of increased sampling time. We attribute this difference to sampling errors: ODE-samplers involve smaller discretization errors while stochasticity in SDE contracts accumulated errors. Based on these findings, we propose a novel sampling algorithm called Restart in order to better balance discretization errors and contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. Restart not only outperforms the previous best SDE results, but also accelerates the sampling speed by 10-fold / 2-fold on CIFAR-10 / ImageNet $64 \times 64$. In addition, it attains significantly better sample quality than ODE samplers within comparable sampling times. Moreover, Restart better balances text-image alignment/visual quality versus diversity than previous samplers in the large-scale text-to-image Stable Diffusion model pre-trained on LAION $512 \times 512$. Code is available at https://github.com/Newbeeer/diffusion_restart_sampling
[ { "created": "Mon, 26 Jun 2023 17:48:25 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2023 04:17:43 GMT", "version": "v2" } ]
2023-11-02
[ [ "Xu", "Yilun", "" ], [ "Deng", "Mingyang", "" ], [ "Cheng", "Xiang", "" ], [ "Tian", "Yonglong", "" ], [ "Liu", "Ziming", "" ], [ "Jaakkola", "Tommi", "" ] ]
Generative processes that involve solving differential equations, such as diffusion models, frequently necessitate balancing speed and quality. ODE-based samplers are fast but plateau in performance while SDE-based samplers deliver higher sample quality at the cost of increased sampling time. We attribute this difference to sampling errors: ODE-samplers involve smaller discretization errors while stochasticity in SDE contracts accumulated errors. Based on these findings, we propose a novel sampling algorithm called Restart in order to better balance discretization errors and contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy. Restart not only outperforms the previous best SDE results, but also accelerates the sampling speed by 10-fold / 2-fold on CIFAR-10 / ImageNet $64 \times 64$. In addition, it attains significantly better sample quality than ODE samplers within comparable sampling times. Moreover, Restart better balances text-image alignment/visual quality versus diversity than previous samplers in the large-scale text-to-image Stable Diffusion model pre-trained on LAION $512 \times 512$. Code is available at https://github.com/Newbeeer/diffusion_restart_sampling
1807.01866
Harold Soh
Harold Soh, Yaqi Xie, Min Chen, David Hsu
Multi-Task Trust Transfer for Human-Robot Interaction
IJRR and RSS conference
null
10.1177/0278364919866905
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trust is essential in shaping human interactions with one another and with robots. This paper discusses how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
[ { "created": "Thu, 5 Jul 2018 06:58:13 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2019 03:37:35 GMT", "version": "v2" }, { "created": "Thu, 26 Aug 2021 09:51:33 GMT", "version": "v3" } ]
2021-08-27
[ [ "Soh", "Harold", "" ], [ "Xie", "Yaqi", "" ], [ "Chen", "Min", "" ], [ "Hsu", "David", "" ] ]
Trust is essential in shaping human interactions with one another and with robots. This paper discusses how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
1808.05832
Olivier Sigaud
Alo\"is Pourchot, Nicolas Perrin, Olivier Sigaud
Importance mixing: Improving sample reuse in evolutionary policy search methods
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neuroevolution, that is, evolutionary policy search based on deep neural networks, has recently emerged as a competitor to deep reinforcement learning algorithms due to its better parallelization capabilities. However, these methods still suffer from a far worse sample efficiency. In this paper we investigate whether a mechanism known as "importance mixing" can significantly improve their sample efficiency. We provide a didactic presentation of importance mixing and explain how it can be extended to reuse more samples. Then, from an empirical comparison based on a simple benchmark, we show that, although it does provide better sample efficiency, it remains far from the sample efficiency of deep reinforcement learning, though it is more stable.
[ { "created": "Fri, 17 Aug 2018 11:25:19 GMT", "version": "v1" } ]
2018-08-20
[ [ "Pourchot", "Aloïs", "" ], [ "Perrin", "Nicolas", "" ], [ "Sigaud", "Olivier", "" ] ]
Deep neuroevolution, that is, evolutionary policy search based on deep neural networks, has recently emerged as a competitor to deep reinforcement learning algorithms due to its better parallelization capabilities. However, these methods still suffer from a far worse sample efficiency. In this paper we investigate whether a mechanism known as "importance mixing" can significantly improve their sample efficiency. We provide a didactic presentation of importance mixing and explain how it can be extended to reuse more samples. Then, from an empirical comparison based on a simple benchmark, we show that, although it does provide better sample efficiency, it remains far from the sample efficiency of deep reinforcement learning, though it is more stable.
1512.07438
Steffen Wendzel
Steffen Wendzel, Wojciech Mazurczyk, Sebastian Zander
Unified Description for Network Information Hiding Methods
24 pages, 7 figures, 1 table; currently under review
Journal of Universal Computer Science (J.UCS), vol. 22, no. 11 (2016), 1456-1486
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Until now hiding methods in network steganography have been described in arbitrary ways, making them difficult to compare. For instance, some publications describe classical channel characteristics, such as robustness and bandwidth, while others describe the embedding of hidden information. We introduce the first unified description of hiding methods in network steganography. Our description method is based on a comprehensive analysis of the existing publications in the domain. When our description method is applied by the research community, future publications will be easier to categorize, compare and extend. Our method can also serve as a basis to evaluate the novelty of hiding methods proposed in the future.
[ { "created": "Wed, 23 Dec 2015 11:32:41 GMT", "version": "v1" }, { "created": "Mon, 9 Jan 2017 11:44:07 GMT", "version": "v2" } ]
2017-01-10
[ [ "Wendzel", "Steffen", "" ], [ "Mazurczyk", "Wojciech", "" ], [ "Zander", "Sebastian", "" ] ]
Until now hiding methods in network steganography have been described in arbitrary ways, making them difficult to compare. For instance, some publications describe classical channel characteristics, such as robustness and bandwidth, while others describe the embedding of hidden information. We introduce the first unified description of hiding methods in network steganography. Our description method is based on a comprehensive analysis of the existing publications in the domain. When our description method is applied by the research community, future publications will be easier to categorize, compare and extend. Our method can also serve as a basis to evaluate the novelty of hiding methods proposed in the future.
2103.13864
Febin Sebastian Elayanithottathil
Febin Sebastian Elayanithottathil and Janis Keuper
A Retail Product Categorisation Dataset
null
null
null
null
cs.LG cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
Most eCommerce applications, like web-shops, have millions of products. In this context, the identification of similar products is a common sub-task, which can be utilized in the implementation of recommendation systems, product search engines, and internal supply logistics. By providing this data set, our goal is to boost the evaluation of machine learning methods for predicting the category of retail products from tuples of images and descriptions.
[ { "created": "Thu, 25 Mar 2021 14:23:48 GMT", "version": "v1" }, { "created": "Sat, 3 Apr 2021 09:31:56 GMT", "version": "v2" } ]
2021-04-06
[ [ "Elayanithottathil", "Febin Sebastian", "" ], [ "Keuper", "Janis", "" ] ]
Most eCommerce applications, like web-shops, have millions of products. In this context, the identification of similar products is a common sub-task, which can be utilized in the implementation of recommendation systems, product search engines, and internal supply logistics. By providing this data set, our goal is to boost the evaluation of machine learning methods for predicting the category of retail products from tuples of images and descriptions.
1812.07932
Alexander Kampmann
Alexander Kampmann, Andreas Zeller
Carving Parameterized Unit Tests
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to automatically extract ("carve") parameterized unit tests from system executions. The unit tests execute the same functions as the system tests they are carved from, but can do so much faster as they call functions directly; furthermore, being parameterized, they can execute the functions with a large variety of randomly selected input values. If a unit-level test fails, we lift it to the system level to ensure the failure can be reproduced there. Our method thus allows to focus testing efforts on selected modules while still avoiding false alarms: In our experiments, running parameterized unit tests for individual functions was, on average, 30 times faster than running the system tests they were carved from.
[ { "created": "Wed, 19 Dec 2018 13:21:53 GMT", "version": "v1" } ]
2018-12-20
[ [ "Kampmann", "Alexander", "" ], [ "Zeller", "Andreas", "" ] ]
We present a method to automatically extract ("carve") parameterized unit tests from system executions. The unit tests execute the same functions as the system tests they are carved from, but can do so much faster as they call functions directly; furthermore, being parameterized, they can execute the functions with a large variety of randomly selected input values. If a unit-level test fails, we lift it to the system level to ensure the failure can be reproduced there. Our method thus allows to focus testing efforts on selected modules while still avoiding false alarms: In our experiments, running parameterized unit tests for individual functions was, on average, 30 times faster than running the system tests they were carved from.
1804.07068
Masashi Yoshikawa
Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji and Daisuke Bekki
Consistent CCG Parsing over Multiple Sentences for Improved Logical Reasoning
6 pages. short paper accepted to NAACL2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses to obtain their logical formulas. Here, it is important that the parser processes the sentences consistently; failing to recognize a similar syntactic structure results in inconsistent predicate-argument structures among them, in which case the succeeding theorem proving is doomed to failure. In this work, we present a simple method to extend an existing CCG parser to parse a set of sentences consistently, which is achieved through inter-sentence modeling with Markov Random Fields (MRF). When combined with existing logic-based systems, our method always shows improvement in the RTE experiments on English and Japanese.
[ { "created": "Thu, 19 Apr 2018 10:17:47 GMT", "version": "v1" } ]
2018-04-20
[ [ "Yoshikawa", "Masashi", "" ], [ "Mineshima", "Koji", "" ], [ "Noji", "Hiroshi", "" ], [ "Bekki", "Daisuke", "" ] ]
In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses to obtain their logical formulas. Here, it is important that the parser processes the sentences consistently; failing to recognize a similar syntactic structure results in inconsistent predicate-argument structures among them, in which case the succeeding theorem proving is doomed to failure. In this work, we present a simple method to extend an existing CCG parser to parse a set of sentences consistently, which is achieved through inter-sentence modeling with Markov Random Fields (MRF). When combined with existing logic-based systems, our method always shows improvement in the RTE experiments on English and Japanese.
2002.05410
Tchuitcheu Willy Carlos
Willy Carlos Tchuitcheu, Christophe Bobda, and Md Jubaer Hossain Pantho
Internet of Smart-Cameras for Traffic Lights Optimization in Smart Cities
12 pages
Internet of Things(2020)
10.1016/j.iot.2020.100207
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart and decentralized control systems have recently been proposed to handle the growing traffic congestion in urban cities. Proposed smart traffic light solutions based on Wireless Sensor Networks and Vehicular Ad-hoc NETworks are either unreliable and inflexible or complex and costly. Furthermore, the handling of special vehicles such as emergency vehicles is still not viable, especially during busy hours. Inspired by the emergence of distributed smart cameras, we present a novel approach to traffic control at intersections. Our approach uses smart cameras at intersections along with image understanding for real-time traffic monitoring and assessment. Besides understanding the traffic flow, the cameras can detect and track special vehicles and help prioritize emergency cases. Traffic violations can be identified as well and traffic statistics collected. In this paper, we introduce a flexible, adaptive, and distributed control algorithm that uses the information provided by distributed smart cameras to efficiently control traffic signals. Experimental results show that our collision-free approach outperforms the state of the art in terms of the average user's waiting time in the queue and improves the routing of emergency vehicles in a congested intersection area.
[ { "created": "Thu, 13 Feb 2020 09:51:34 GMT", "version": "v1" } ]
2020-05-14
[ [ "Tchuitcheu", "Willy Carlos", "" ], [ "Bobda", "Christophe", "" ], [ "Pantho", "Md Jubaer Hossain", "" ] ]
Smart and decentralized control systems have recently been proposed to handle the growing traffic congestion in urban cities. Proposed smart traffic light solutions based on Wireless Sensor Networks and Vehicular Ad-hoc NETworks are either unreliable and inflexible or complex and costly. Furthermore, the handling of special vehicles such as emergency vehicles is still not viable, especially during busy hours. Inspired by the emergence of distributed smart cameras, we present a novel approach to traffic control at intersections. Our approach uses smart cameras at intersections along with image understanding for real-time traffic monitoring and assessment. Besides understanding the traffic flow, the cameras can detect and track special vehicles and help prioritize emergency cases. Traffic violations can be identified as well and traffic statistics collected. In this paper, we introduce a flexible, adaptive, and distributed control algorithm that uses the information provided by distributed smart cameras to efficiently control traffic signals. Experimental results show that our collision-free approach outperforms the state of the art in terms of the average user's waiting time in the queue and improves the routing of emergency vehicles in a congested intersection area.
1304.5107
Pablo Dorta-Gonzalez
Pablo Dorta-Gonzalez and Maria Isabel Dorta-Gonzalez
Comparing journals from different fields of Science and Social Science through a JCR Subject Categories Normalized Impact Factor
28 pages, 4 tables and 5 figures. arXiv admin note: text overlap with arXiv:1007.4749 by other authors
Scientometrics 95(2), 645-672 (2013)
10.1007/s11192-012-0929-9
null
cs.DL physics.soc-ph stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The journal Impact Factor (IF) is not comparable among fields of Science and Social Science because of systematic differences in publication and citation behaviour across disciplines. In this work, a decomposition of the field aggregate impact factor into five normally distributed variables is presented. Considering these factors, a Principal Component Analysis is employed to find the sources of the variance in the JCR subject categories of Science and Social Science. Although publication and citation behaviour differs largely across disciplines, the principal components explain more than 78% of the total variance, and the average number of references per paper is not the primary factor explaining the variance in impact factors across categories. The Categories Normalized Impact Factor (CNIF), based on the JCR subject category list, is proposed and compared with the IF. This normalization is achieved by considering all the indexing categories of each journal. An empirical application, with one hundred journals in two or more subject categories of economics and business, shows that the gap between rankings is reduced by around 32% in the journals analyzed. This gap is obtained as the maximum distance among the ranking percentiles from all categories where each journal is included.
[ { "created": "Thu, 18 Apr 2013 12:36:49 GMT", "version": "v1" } ]
2013-04-19
[ [ "Dorta-Gonzalez", "Pablo", "" ], [ "Dorta-Gonzalez", "Maria Isabel", "" ] ]
The journal Impact Factor (IF) is not comparable among fields of Science and Social Science because of systematic differences in publication and citation behaviour across disciplines. In this work, a decomposition of the field aggregate impact factor into five normally distributed variables is presented. Considering these factors, a Principal Component Analysis is employed to find the sources of the variance in the JCR subject categories of Science and Social Science. Although publication and citation behaviour differs largely across disciplines, the principal components explain more than 78% of the total variance, and the average number of references per paper is not the primary factor explaining the variance in impact factors across categories. The Categories Normalized Impact Factor (CNIF), based on the JCR subject category list, is proposed and compared with the IF. This normalization is achieved by considering all the indexing categories of each journal. An empirical application, with one hundred journals in two or more subject categories of economics and business, shows that the gap between rankings is reduced by around 32% in the journals analyzed. This gap is obtained as the maximum distance among the ranking percentiles from all categories where each journal is included.
1912.03458
Yinpeng Chen
Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, Zicheng Liu
Dynamic Convolution: Attention over Convolution Kernels
CVPR 2020 (Oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Light-weight convolutional neural networks (CNNs) suffer performance degradation as their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability. To address this issue, we present Dynamic Convolution, a new design that increases model complexity without increasing the network depth or width. Instead of using a single convolution kernel per layer, dynamic convolution aggregates multiple parallel convolution kernels dynamically based upon their attentions, which are input dependent. Assembling multiple kernels is not only computationally efficient due to the small kernel size, but also has more representation power since these kernels are aggregated in a non-linear way via attention. By simply using dynamic convolution for the state-of-the-art architecture MobileNetV3-Small, the top-1 accuracy of ImageNet classification is boosted by 2.9% with only 4% additional FLOPs and 2.9 AP gain is achieved on COCO keypoint detection.
[ { "created": "Sat, 7 Dec 2019 07:51:35 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 21:56:49 GMT", "version": "v2" } ]
2020-04-02
[ [ "Chen", "Yinpeng", "" ], [ "Dai", "Xiyang", "" ], [ "Liu", "Mengchen", "" ], [ "Chen", "Dongdong", "" ], [ "Yuan", "Lu", "" ], [ "Liu", "Zicheng", "" ] ]
Light-weight convolutional neural networks (CNNs) suffer performance degradation as their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability. To address this issue, we present Dynamic Convolution, a new design that increases model complexity without increasing the network depth or width. Instead of using a single convolution kernel per layer, dynamic convolution aggregates multiple parallel convolution kernels dynamically based upon their attentions, which are input dependent. Assembling multiple kernels is not only computationally efficient due to the small kernel size, but also has more representation power since these kernels are aggregated in a non-linear way via attention. By simply using dynamic convolution for the state-of-the-art architecture MobileNetV3-Small, the top-1 accuracy of ImageNet classification is boosted by 2.9% with only 4% additional FLOPs and 2.9 AP gain is achieved on COCO keypoint detection.
2203.04751
Kshitij Tiwari
Kshitij Tiwari, Basak Sakcak, Prasanna Routray, Manivannan M., and Steven M. LaValle
Visibility-Inspired Models of Touch Sensors for Navigation
Accepted at IEEE IROS 2022
null
10.1109/IROS47612.2022.9981084
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper introduces mathematical models of touch sensors for mobile robots based on visibility. Serving a purpose similar to the pinhole camera model for computer vision, the introduced models are expected to provide a useful, idealized characterization of task-relevant information that can be inferred from their outputs or observations. Possible tasks include navigation, localization, and mapping when a mobile robot is deployed in an unknown environment. These models allow direct comparisons to be made between traditional depth sensors, highlighting cases in which touch sensing may be interchangeable with time-of-flight or vision sensors, and characterizing unique advantages provided by touch sensing. The models include contact detection, compression, load bearing, and deflection. The results could serve as a basic building block for innovative touch sensor designs for mobile robot sensor fusion systems.
[ { "created": "Fri, 4 Mar 2022 08:23:01 GMT", "version": "v1" }, { "created": "Thu, 28 Jul 2022 07:42:32 GMT", "version": "v2" } ]
2023-01-24
[ [ "Tiwari", "Kshitij", "" ], [ "Sakcak", "Basak", "" ], [ "Routray", "Prasanna", "" ], [ "M.", "Manivannan", "" ], [ "LaValle", "Steven M.", "" ] ]
This paper introduces mathematical models of touch sensors for mobile robots based on visibility. Serving a purpose similar to the pinhole camera model for computer vision, the introduced models are expected to provide a useful, idealized characterization of task-relevant information that can be inferred from their outputs or observations. Possible tasks include navigation, localization and mapping when a mobile robot is deployed in an unknown environment. These models allow direct comparisons to be made between traditional depth sensors, highlighting cases in which touch sensing may be interchangeable with time of flight or vision sensors, and characterizing unique advantages provided by touch sensing. The models include contact detection, compression, load bearing, and deflection. The results could serve as a basic building block for innovative touch sensor designs for mobile robot sensor fusion systems.
2309.04082
Sungjun Cho
Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning
19 pages, 7 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space. While there exist graph neural networks that leverage hyperbolic or spherical spaces to learn representations that embed such structures more accurately, these methods are confined under the message-passing paradigm, making the models vulnerable against side-effects such as oversmoothing and oversquashing. More recent work have proposed global attention-based graph Transformers that can easily model long-range interactions, but their extensions towards non-Euclidean geometry are yet unexplored. To bridge this gap, we propose Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without the need of additional tuning on different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.
[ { "created": "Fri, 8 Sep 2023 02:44:37 GMT", "version": "v1" } ]
2023-09-11
[ [ "Cho", "Sungjun", "" ], [ "Cho", "Seunghyuk", "" ], [ "Park", "Sungwoo", "" ], [ "Lee", "Hankook", "" ], [ "Lee", "Honglak", "" ], [ "Lee", "Moontae", "" ] ]
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space. While there exist graph neural networks that leverage hyperbolic or spherical spaces to learn representations that embed such structures more accurately, these methods are confined under the message-passing paradigm, making the models vulnerable against side-effects such as oversmoothing and oversquashing. More recent work has proposed global attention-based graph Transformers that can easily model long-range interactions, but their extensions towards non-Euclidean geometry remain unexplored. To bridge this gap, we propose Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without the need of additional tuning on different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.
1003.1795
Rdv Ijcsis
Vidhya. K. A, G. Aghila
A Survey of Na\"ive Bayes Machine Learning approach in Text Document Classification
Pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS, Vol. 7 No. 2, February 2010, USA. ISSN 1947 5500, http://sites.google.com/site/ijcsis/
null
null
null
cs.LG cs.IR
http://creativecommons.org/licenses/by-nc-sa/3.0/
Text Document classification aims in associating one or more predefined categories based on the likelihood suggested by the training set of labeled documents. Many machine learning algorithms play a vital role in training the system with predefined categories among which Na\"ive Bayes has some intriguing facts that it is simple, easy to implement and draws better accuracy in large datasets in spite of the na\"ive dependence. The importance of Na\"ive Bayes Machine learning approach has felt hence the study has been taken up for text document classification and the statistical event models available. This survey the various feature selection methods has been discussed and compared along with the metrics related to text document classification.
[ { "created": "Tue, 9 Mar 2010 06:41:49 GMT", "version": "v1" } ]
2010-03-10
[ [ "A", "Vidhya. K.", "" ], [ "Aghila", "G.", "" ] ]
Text document classification aims at associating one or more predefined categories with a document, based on the likelihood suggested by a training set of labeled documents. Many machine learning algorithms play a vital role in training the system with predefined categories, among which Na\"ive Bayes has some intriguing properties: it is simple, easy to implement, and achieves good accuracy on large datasets in spite of its na\"ive independence assumption. Given the importance of the Na\"ive Bayes machine learning approach, this study surveys its use for text document classification and the statistical event models available. The survey also discusses and compares the various feature selection methods, along with the metrics related to text document classification.
1304.3963
Sundeep Rangan
Mustafa Riza Akdeniz, Yuanpeng Liu, Sundeep Rangan and Elza Erkip
Millimeter Wave Picocellular System Evaluation for Urban Deployments
This paper is replaced by arXiv:1312.4921
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the severe spectrum shortage in conventional cellular bands, millimeter wave (mmW) frequencies between 30 and 300 GHz have been attracting growing attention as a possible candidate for next-generation micro- and picocellular wireless networks. The mmW bands offer orders of magnitude greater spectrum than current cellular allocations and enable very high-dimensional antenna arrays for further gains via spatial multiplexing. However, the propagation of mmW signals in outdoor non line-of-sight (NLOS) links remains challenging and the feasibility of wide-area mmW cellular networks is far from clear. This paper uses recent real-world measurements at 28 GHz in New York City to provide a realistic assessment of mmW picocellular networks in a dense urban deployment. It is found that, even under conservative propagation assumptions, mmW systems with cell radii of 100m can offer an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks with similar cell density. However, it is also shown that such mmW networks may operate in a largely power-limited regime where the full spatial and bandwidth degrees of freedom are not fully utilized. This power-limited regime contrasts significantly with current bandwidth-limited cellular systems, requiring alternate technologies for mmW systems that may unlock further gains that mmW frequency bands offer.
[ { "created": "Mon, 15 Apr 2013 01:15:48 GMT", "version": "v1" }, { "created": "Fri, 25 Apr 2014 20:50:47 GMT", "version": "v2" } ]
2014-04-29
[ [ "Akdeniz", "Mustafa Riza", "" ], [ "Liu", "Yuanpeng", "" ], [ "Rangan", "Sundeep", "" ], [ "Erkip", "Elza", "" ] ]
With the severe spectrum shortage in conventional cellular bands, millimeter wave (mmW) frequencies between 30 and 300 GHz have been attracting growing attention as a possible candidate for next-generation micro- and picocellular wireless networks. The mmW bands offer orders of magnitude greater spectrum than current cellular allocations and enable very high-dimensional antenna arrays for further gains via spatial multiplexing. However, the propagation of mmW signals in outdoor non line-of-sight (NLOS) links remains challenging and the feasibility of wide-area mmW cellular networks is far from clear. This paper uses recent real-world measurements at 28 GHz in New York City to provide a realistic assessment of mmW picocellular networks in a dense urban deployment. It is found that, even under conservative propagation assumptions, mmW systems with cell radii of 100m can offer an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks with similar cell density. However, it is also shown that such mmW networks may operate in a largely power-limited regime where the full spatial and bandwidth degrees of freedom are not fully utilized. This power-limited regime contrasts significantly with current bandwidth-limited cellular systems, requiring alternate technologies for mmW systems that may unlock further gains that mmW frequency bands offer.
1402.0391
Hui Gao
Hui Gao, Tiejun Lv, Di Fang, Shaoshi Yang, Chau Yuen
Limited Feedback-Based Interference Alignment for Interfering Multi-Access Channels
4 pages, 4 figures, to appear in IEEE Communications Letters
IEEE Communications Letters, Vol. 18, No. 4, pp. 540 - 543, April 2014
10.1109/LCOMM.2014.021214.132762
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A limited feedback-based interference alignment (IA) scheme is proposed for the interfering multi-access channel (IMAC). By employing a novel performance-oriented quantization strategy, the proposed scheme is able to achieve the minimum overall residual inter-cell interference (ICI) with the optimized transceivers under limited feedback. Consequently, the scheme outperforms the existing counterparts in terms of system throughput. In addition, the proposed scheme can be implemented with flexible antenna configurations.
[ { "created": "Mon, 3 Feb 2014 14:51:22 GMT", "version": "v1" }, { "created": "Tue, 4 Feb 2014 02:57:39 GMT", "version": "v2" } ]
2015-03-04
[ [ "Gao", "Hui", "" ], [ "Lv", "Tiejun", "" ], [ "Fang", "Di", "" ], [ "Yang", "Shaoshi", "" ], [ "Yuen", "Chau", "" ] ]
A limited feedback-based interference alignment (IA) scheme is proposed for the interfering multi-access channel (IMAC). By employing a novel performance-oriented quantization strategy, the proposed scheme is able to achieve the minimum overall residual inter-cell interference (ICI) with the optimized transceivers under limited feedback. Consequently, the scheme outperforms the existing counterparts in terms of system throughput. In addition, the proposed scheme can be implemented with flexible antenna configurations.
2401.12496
Kang-Won Lee
Kang-Won Lee, Yuzhe Qin, Xiaolong Wang and Soo-Chul Lim
DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity
Project page: https://lee-kangwon.github.io/dextouch/
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
The sense of touch is an essential ability for skillfully performing a variety of tasks, providing the capacity to search and manipulate objects without relying on visual information. Extensive research has been conducted over time to apply these human tactile abilities to robots. In this paper, we introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch without relying on visual information. Randomly located target objects are searched using tactile sensors, and the objects are manipulated for tasks that mimic daily-life. The objective of the study is to endow robots with human-like tactile capabilities. To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap. Training the policy through reinforcement learning in simulation and transferring the trained policy to the real environment, we demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information. In addition, an ablation study was conducted to analyze the effect of tactile information on manipulative tasks. Our project page is available at https://lee-kangwon.github.io/dextouch/
[ { "created": "Tue, 23 Jan 2024 05:37:32 GMT", "version": "v1" } ]
2024-01-24
[ [ "Lee", "Kang-Won", "" ], [ "Qin", "Yuzhe", "" ], [ "Wang", "Xiaolong", "" ], [ "Lim", "Soo-Chul", "" ] ]
The sense of touch is an essential ability for skillfully performing a variety of tasks, providing the capacity to search for and manipulate objects without relying on visual information. Extensive research has been conducted over time to apply these human tactile abilities to robots. In this paper, we introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch without relying on visual information. Randomly located target objects are searched using tactile sensors, and the objects are manipulated for tasks that mimic daily life. The objective of the study is to endow robots with human-like tactile capabilities. To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap. Training the policy through reinforcement learning in simulation and transferring the trained policy to the real environment, we demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information. In addition, an ablation study was conducted to analyze the effect of tactile information on manipulative tasks. Our project page is available at https://lee-kangwon.github.io/dextouch/
1305.1060
Gian Luca Pozzato
Laura Giordano and Valentina Gliozzi and Nicola Olivetti and Gian Luca Pozzato
On Rational Closure in Description Logics of Typicality
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define the notion of rational closure in the context of Description Logics extended with a tipicality operator. We start from ALC+T, an extension of ALC with a typicality operator T: intuitively allowing to express concepts of the form T(C), meant to select the "most normal" instances of a concept C. The semantics we consider is based on rational model. But we further restrict the semantics to minimal models, that is to say, to models that minimise the rank of domain elements. We show that this semantics captures exactly a notion of rational closure which is a natural extension to Description Logics of Lehmann and Magidor's original one. We also extend the notion of rational closure to the Abox component. We provide an ExpTime algorithm for computing the rational closure of an Abox and we show that it is sound and complete with respect to the minimal model semantics.
[ { "created": "Sun, 5 May 2013 22:32:16 GMT", "version": "v1" } ]
2013-05-07
[ [ "Giordano", "Laura", "" ], [ "Gliozzi", "Valentina", "" ], [ "Olivetti", "Nicola", "" ], [ "Pozzato", "Gian Luca", "" ] ]
We define the notion of rational closure in the context of Description Logics extended with a typicality operator. We start from ALC+T, an extension of ALC with a typicality operator T, which intuitively allows expressing concepts of the form T(C), meant to select the "most normal" instances of a concept C. The semantics we consider is based on rational models. We further restrict the semantics to minimal models, that is to say, to models that minimise the rank of domain elements. We show that this semantics captures exactly a notion of rational closure which is a natural extension to Description Logics of Lehmann and Magidor's original one. We also extend the notion of rational closure to the ABox component. We provide an ExpTime algorithm for computing the rational closure of an ABox and we show that it is sound and complete with respect to the minimal model semantics.
cs/0502069
Sandor P. Fekete
Alexander Kroeller, Sandor P. Fekete, Carsten Buschmann, Stefan Fischer and Dennis Pfisterer
Koordinatenfreies Lokationsbewusstsein (Localization without Coordinates)
German, 15 pages, 6 figures, Latex, to appear in Information Technology
null
null
null
cs.DC
null
Localization is one of the fundamental issues in sensor networks. It is almost always assumed that it must be solved by assigning coordinates to the nodes. This article discusses positioning algorithms from a theoretical, practical and simulative point of view, and identifies difficulties and limitations. Ideas for more abstract means of location awareness are presented and the resulting possible improvements for applications are shown. Nodes with certain topological or environmental properties are clustered, and the neighborhood structure of the clusters is modeled as a graph. Eines der fundamentalen Probleme in Sensornetzwerken besteht darin, ein Bewusstsein fuer die Position eines Knotens im Netz zu entwickeln. Dabei wird fast immer davon ausgegangen, dass dies durch die Zuweisung von Koordinaten zu erfolgen hat. In diesem Artikel wird auf theoretischer, praktischer und simulativer Ebene ein kritischer Blick auf entsprechende Verfahren geworfen, und es werden Grenzen aufgezeigt. Es wird ein Ansatz vorgestellt, mit dem in der Zukunft eine abstrakte Form von Lokationsbewusstsein etabliert werden kann, und es wird gezeigt, wie Anwendungen dadurch verbessert werden koennen. Er basiert auf einer graphenbasierten Modellierung des Netzes: Knoten mit bestimmten topologischen oder Umwelteigenschaften werden zu Clustern zusammengefasst, und Clusternachbarschaften dann als Graphen modelliert.
[ { "created": "Tue, 15 Feb 2005 16:35:13 GMT", "version": "v1" } ]
2009-09-29
[ [ "Kroeller", "Alexander", "" ], [ "Fekete", "Sandor P.", "" ], [ "Buschmann", "Carsten", "" ], [ "Fischer", "Stefan", "" ], [ "Pfisterer", "Dennis", "" ] ]
Localization is one of the fundamental issues in sensor networks. It is almost always assumed that it must be solved by assigning coordinates to the nodes. This article discusses positioning algorithms from a theoretical, practical and simulative point of view, and identifies difficulties and limitations. Ideas for more abstract means of location awareness are presented and the resulting possible improvements for applications are shown. Nodes with certain topological or environmental properties are clustered, and the neighborhood structure of the clusters is modeled as a graph. Eines der fundamentalen Probleme in Sensornetzwerken besteht darin, ein Bewusstsein fuer die Position eines Knotens im Netz zu entwickeln. Dabei wird fast immer davon ausgegangen, dass dies durch die Zuweisung von Koordinaten zu erfolgen hat. In diesem Artikel wird auf theoretischer, praktischer und simulativer Ebene ein kritischer Blick auf entsprechende Verfahren geworfen, und es werden Grenzen aufgezeigt. Es wird ein Ansatz vorgestellt, mit dem in der Zukunft eine abstrakte Form von Lokationsbewusstsein etabliert werden kann, und es wird gezeigt, wie Anwendungen dadurch verbessert werden koennen. Er basiert auf einer graphenbasierten Modellierung des Netzes: Knoten mit bestimmten topologischen oder Umwelteigenschaften werden zu Clustern zusammengefasst, und Clusternachbarschaften dann als Graphen modelliert.
1210.7506
Cong Ling
Kezhi Li, Lu Gan, Cong Ling
Convolutional Compressed Sensing Using Deterministic Sequences
A major overhaul of the withdrawn paper Orthogonal symmetric Toeplitz matrices for compressed sensing: Statistical isometry property
null
10.1109/TSP.2012.2229994
null
cs.IT cs.MM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a new class of circulant matrices built from deterministic sequences is proposed for convolution-based compressed sensing (CS). In contrast to random convolution, the coefficients of the underlying filter are given by the discrete Fourier transform of a deterministic sequence with good autocorrelation. Both uniform recovery and non-uniform recovery of sparse signals are investigated, based on the coherence parameter of the proposed sensing matrices. Many examples of the sequences are investigated, particularly the Frank-Zadoff-Chu (FZC) sequence, the \textit{m}-sequence and the Golay sequence. A salient feature of the proposed sensing matrices is that they can not only handle sparse signals in the time domain, but also those in the frequency and/or or discrete-cosine transform (DCT) domain.
[ { "created": "Sun, 28 Oct 2012 20:34:06 GMT", "version": "v1" } ]
2015-06-11
[ [ "Li", "Kezhi", "" ], [ "Gan", "Lu", "" ], [ "Ling", "Cong", "" ] ]
In this paper, a new class of circulant matrices built from deterministic sequences is proposed for convolution-based compressed sensing (CS). In contrast to random convolution, the coefficients of the underlying filter are given by the discrete Fourier transform of a deterministic sequence with good autocorrelation. Both uniform recovery and non-uniform recovery of sparse signals are investigated, based on the coherence parameter of the proposed sensing matrices. Many examples of the sequences are investigated, particularly the Frank-Zadoff-Chu (FZC) sequence, the \textit{m}-sequence and the Golay sequence. A salient feature of the proposed sensing matrices is that they can not only handle sparse signals in the time domain, but also those in the frequency and/or discrete-cosine transform (DCT) domain.
2102.06429
Yiping Jin
Yiping Jin, Vishakha Kadam, Dittaya Wanvarie
Bootstrapping Large-Scale Fine-Grained Contextual Advertising Classifier from Wikipedia
Accepted to TextGraphs-15 Workshop @ NAACL 2021
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Contextual advertising provides advertisers with the opportunity to target the context which is most relevant to their ads. However, its power cannot be fully utilized unless we can target the page content using fine-grained categories, e.g., "coupe" vs. "hatchback" instead of "automotive" vs. "sport". The widely used advertising content taxonomy (IAB taxonomy) consists of 23 coarse-grained categories and 355 fine-grained categories. With the large number of categories, it becomes very challenging either to collect training documents to build a supervised classification model, or to compose expert-written rules in a rule-based classification system. Besides, in fine-grained classification, different categories often overlap or co-occur, making it harder to classify accurately. In this work, we propose wiki2cat, a method to tackle the problem of large-scaled fine-grained text classification by tapping on Wikipedia category graph. The categories in IAB taxonomy are first mapped to category nodes in the graph. Then the label is propagated across the graph to obtain a list of labeled Wikipedia documents to induce text classifiers. The method is ideal for large-scale classification problems since it does not require any manually-labeled document or hand-curated rules or keywords. The proposed method is benchmarked with various learning-based and keyword-based baselines and yields competitive performance on both publicly available datasets and a new dataset containing more than 300 fine-grained categories.
[ { "created": "Fri, 12 Feb 2021 10:18:25 GMT", "version": "v1" }, { "created": "Tue, 20 Apr 2021 03:28:19 GMT", "version": "v2" } ]
2021-04-21
[ [ "Jin", "Yiping", "" ], [ "Kadam", "Vishakha", "" ], [ "Wanvarie", "Dittaya", "" ] ]
Contextual advertising provides advertisers with the opportunity to target the context which is most relevant to their ads. However, its power cannot be fully utilized unless we can target the page content using fine-grained categories, e.g., "coupe" vs. "hatchback" instead of "automotive" vs. "sport". The widely used advertising content taxonomy (IAB taxonomy) consists of 23 coarse-grained categories and 355 fine-grained categories. With the large number of categories, it becomes very challenging either to collect training documents to build a supervised classification model, or to compose expert-written rules in a rule-based classification system. Besides, in fine-grained classification, different categories often overlap or co-occur, making it harder to classify accurately. In this work, we propose wiki2cat, a method to tackle the problem of large-scale fine-grained text classification by tapping into the Wikipedia category graph. The categories in the IAB taxonomy are first mapped to category nodes in the graph. Then the label is propagated across the graph to obtain a list of labeled Wikipedia documents to induce text classifiers. The method is ideal for large-scale classification problems since it does not require any manually-labeled document or hand-curated rules or keywords. The proposed method is benchmarked with various learning-based and keyword-based baselines and yields competitive performance on both publicly available datasets and a new dataset containing more than 300 fine-grained categories.
2306.09468
Xiaotian Han
Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, Xia Hu
FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods
ICLR2024
null
null
null
cs.LG cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the Fair Fairness Benchmark (\textsf{FFB}), a benchmarking framework for in-processing group fairness methods. Ensuring fairness in machine learning is important for ethical compliance. However, there exist challenges in comparing and developing fairness methods due to inconsistencies in experimental settings, lack of accessible algorithmic implementations, and limited extensibility of current fairness packages and tools. To address these issues, we introduce an open-source standardized benchmark for evaluating in-processing group fairness methods and provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness. This work offers the following key contributions: the provision of flexible, extensible, minimalistic, and research-oriented open-source code; the establishment of unified fairness method benchmarking pipelines; and extensive benchmarking, which yields key insights from $\mathbf{45,079}$ experiments, $\mathbf{14,428}$ GPU hours. We believe that our work will significantly facilitate the growth and development of the fairness research community.
[ { "created": "Thu, 15 Jun 2023 19:51:28 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 03:10:10 GMT", "version": "v2" } ]
2024-06-12
[ [ "Han", "Xiaotian", "" ], [ "Chi", "Jianfeng", "" ], [ "Chen", "Yu", "" ], [ "Wang", "Qifan", "" ], [ "Zhao", "Han", "" ], [ "Zou", "Na", "" ], [ "Hu", "Xia", "" ] ]
This paper introduces the Fair Fairness Benchmark (\textsf{FFB}), a benchmarking framework for in-processing group fairness methods. Ensuring fairness in machine learning is important for ethical compliance. However, there exist challenges in comparing and developing fairness methods due to inconsistencies in experimental settings, lack of accessible algorithmic implementations, and limited extensibility of current fairness packages and tools. To address these issues, we introduce an open-source standardized benchmark for evaluating in-processing group fairness methods and provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness. This work offers the following key contributions: the provision of flexible, extensible, minimalistic, and research-oriented open-source code; the establishment of unified fairness method benchmarking pipelines; and extensive benchmarking, which yields key insights from $\mathbf{45,079}$ experiments and $\mathbf{14,428}$ GPU hours. We believe that our work will significantly facilitate the growth and development of the fairness research community.
2210.00038
Zhiqi Bu
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
Differentially Private Optimization on Large Model at Small Cost
null
null
null
null
cs.LG cs.CL cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differentially private (DP) optimization is the standard paradigm to learn large neural networks that are accurate and privacy-preserving. The computational cost for DP deep learning, however, is notoriously heavy due to the per-sample gradient clipping. Existing DP implementations are 2-1000X more costly in time and space complexity than the standard (non-private) training. In this work, we develop a novel Book-Keeping (BK) technique that implements existing DP optimizers (thus achieving the same accuracy), with a substantial improvement on the computational cost. Specifically, BK enables DP training on large models and high dimensional data to be roughly as fast and memory-saving as the standard training, whereas previous DP algorithms can be inefficient or incapable of training due to memory error. The computational advantage of BK is supported by the complexity analysis as well as extensive experiments on vision and language tasks. Our implementation achieves state-of-the-art (SOTA) accuracy with very small extra cost: on GPT2 and at almost the same memory cost (<1% overhead), BK has 1.03X the time complexity of the standard training (0.83X training speed in practice), and 0.61X the time complexity of the most efficient DP implementation (1.36X training speed in practice). We open-source the codebase for the BK algorithm at the FastDP library (https://github.com/awslabs/fast-differential-privacy).
[ { "created": "Fri, 30 Sep 2022 18:38:53 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 02:14:06 GMT", "version": "v2" } ]
2023-09-20
[ [ "Bu", "Zhiqi", "" ], [ "Wang", "Yu-Xiang", "" ], [ "Zha", "Sheng", "" ], [ "Karypis", "George", "" ] ]
Differentially private (DP) optimization is the standard paradigm to learn large neural networks that are accurate and privacy-preserving. The computational cost for DP deep learning, however, is notoriously heavy due to the per-sample gradient clipping. Existing DP implementations are 2-1000X more costly in time and space complexity than the standard (non-private) training. In this work, we develop a novel Book-Keeping (BK) technique that implements existing DP optimizers (thus achieving the same accuracy), with a substantial improvement on the computational cost. Specifically, BK enables DP training on large models and high dimensional data to be roughly as fast and memory-saving as the standard training, whereas previous DP algorithms can be inefficient or incapable of training due to memory error. The computational advantage of BK is supported by the complexity analysis as well as extensive experiments on vision and language tasks. Our implementation achieves state-of-the-art (SOTA) accuracy with very small extra cost: on GPT2 and at almost the same memory cost (<1% overhead), BK has 1.03X the time complexity of the standard training (0.83X training speed in practice), and 0.61X the time complexity of the most efficient DP implementation (1.36X training speed in practice). We open-source the codebase for the BK algorithm at the FastDP library (https://github.com/awslabs/fast-differential-privacy).
2112.01989
Cedric M\"oller
Cedric M\"oller, Jens Lehmann, Ricardo Usbeck
Survey on English Entity Linking on Wikidata
Disclaimer: Cedric M\"oller, Jens Lehmann, Ricardo Usbeck, 2021. The definitive, peer reviewed and edited version of this article is published in the Semantic Web Journal, Special issue: Latest Advancements in Linguistic 3 Linked Data, 2021
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for Entity Linking, which is evident by the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata Entity Linking datasets exist, how widely used are they and how are they constructed? (2) Do the characteristics of Wikidata matter for the design of Entity Linking datasets and if so, how? (3) How do current Entity Linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited by existing Entity Linking approaches? This survey reveals that current Wikidata-specific Entity Linking datasets do not differ in their annotation scheme from schemes for other knowledge graphs like DBpedia. Thus, the potential for multilingual and time-dependent datasets, naturally suited for Wikidata, is not lifted. Furthermore, we show that most Entity Linking approaches use Wikidata in the same way as any other knowledge graph missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches employ specific properties like labels and sometimes descriptions but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example, by including hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combinable with Wikidata and provides valuable textual information, which Wikidata lacks.
[ { "created": "Fri, 3 Dec 2021 16:02:42 GMT", "version": "v1" } ]
2021-12-06
[ [ "Möller", "Cedric", "" ], [ "Lehmann", "Jens", "" ], [ "Usbeck", "Ricardo", "" ] ]
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph. Hence, Wikidata is an attractive basis for Entity Linking, which is evident by the recent increase in published papers. This survey focuses on four subjects: (1) Which Wikidata Entity Linking datasets exist, how widely used are they and how are they constructed? (2) Do the characteristics of Wikidata matter for the design of Entity Linking datasets and if so, how? (3) How do current Entity Linking approaches exploit the specific characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited by existing Entity Linking approaches? This survey reveals that current Wikidata-specific Entity Linking datasets do not differ in their annotation scheme from schemes for other knowledge graphs like DBpedia. Thus, the potential for multilingual and time-dependent datasets, naturally suited for Wikidata, is not lifted. Furthermore, we show that most Entity Linking approaches use Wikidata in the same way as any other knowledge graph missing the chance to leverage Wikidata-specific characteristics to increase quality. Almost all approaches employ specific properties like labels and sometimes descriptions but ignore characteristics such as the hyper-relational structure. Hence, there is still room for improvement, for example, by including hyper-relational graph embeddings or type information. Many approaches also include information from Wikipedia, which is easily combinable with Wikidata and provides valuable textual information, which Wikidata lacks.
2102.12119
Yuhua Sun
Chun-e Zhao, Wenping Ma, Tongjiang Yan, Yuhua Sun
A new upper bound and optimal constructions of equi-difference conflict-avoiding codes on constant weight
10
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conflict-avoiding codes (CACs) have been used in multiple-access collision channel without feedback. The size of a CAC is the number of potential users that can be supported in the system. A code with maximum size is called optimal. The use of an optimal CAC enables the largest possible number of asynchronous users to transmit information efficiently and reliably. In this paper, a new upper bound on the maximum size of arbitrary equi-difference CAC is presented. Furthermore, three optimal constructions of equi-difference CACs are also given. One is a generalized construction for prime length $L=p$ and the other two are for two-prime length $L=pq$.
[ { "created": "Wed, 24 Feb 2021 08:27:27 GMT", "version": "v1" } ]
2021-02-25
[ [ "Zhao", "Chun-e", "" ], [ "Ma", "Wenping", "" ], [ "Yan", "Tongjiang", "" ], [ "Sun", "Yuhua", "" ] ]
Conflict-avoiding codes (CACs) have been used in multiple-access collision channel without feedback. The size of a CAC is the number of potential users that can be supported in the system. A code with maximum size is called optimal. The use of an optimal CAC enables the largest possible number of asynchronous users to transmit information efficiently and reliably. In this paper, a new upper bound on the maximum size of arbitrary equi-difference CAC is presented. Furthermore, three optimal constructions of equi-difference CACs are also given. One is a generalized construction for prime length $L=p$ and the other two are for two-prime length $L=pq$.
1806.07084
Kong Hyeok
Hyeok Kong, Dokjun An, Jihyang Ri
Itemsets of interest for negative association rules
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
So far, most of association rule minings have considered about positive association rules based on frequent itemsets in databases[2,5-7], but they have not considered the problem of mining negative association rules correlated with frequent and infrequent itemsets. Negative association rule mining is much more difficult than positive association rule mining because it needs infrequent itemsets, and only the rare association rule mining which is a kind of negative association rule minings has been studied. This paper presents a mathematical model to mine positive and negative association rules precisely, for which in a point of view that negation of a frequent itemset is an infrequent itemset, we make clear the importance of the problem of mining negative association rules based on certain infrequent itemsets and study on what conditions infrequent itemsets of interest should satisfy for negative association rules.
[ { "created": "Tue, 19 Jun 2018 07:50:48 GMT", "version": "v1" } ]
2018-06-20
[ [ "Kong", "Hyeok", "" ], [ "An", "Dokjun", "" ], [ "Ri", "Jihyang", "" ] ]
So far, most of association rule minings have considered about positive association rules based on frequent itemsets in databases[2,5-7], but they have not considered the problem of mining negative association rules correlated with frequent and infrequent itemsets. Negative association rule mining is much more difficult than positive association rule mining because it needs infrequent itemsets, and only the rare association rule mining which is a kind of negative association rule minings has been studied. This paper presents a mathematical model to mine positive and negative association rules precisely, for which in a point of view that negation of a frequent itemset is an infrequent itemset, we make clear the importance of the problem of mining negative association rules based on certain infrequent itemsets and study on what conditions infrequent itemsets of interest should satisfy for negative association rules.
2203.01611
Mohsen Annabestani
Mohsen Annabestani, Majid Shabani, Samuel Videira Magalhaes, Alessio Mondini, and Barbara Mazzolai
A Plant-Inspired Multifunctional, Two Way, and Fiberless Soft Gripper with Sensorized Kinaesthesia
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a new fiberless soft pneumatic actuator that can work multifunctional and bidirectional, and its embedded sensors give it a self-proprioception ability. This actuator works based on the idea of employing helical pressure channels. Applying the controlled input pressures into these two channels causes a variety of deformations and actuation. In particular, single pressure, imbalanced pressures, and balanced pressures applied in the channels cause bidirectional coilings, opposite bendings, and elongation, respectively, in a single unit actuator. Also, two U-shaped microchannels are created, and by injecting a gel-based conductive material, the actuator is equipped with resistive sensors which are responsive to a vast dynamic range from a small oscillation to a large elongation. This actuator has so many promising features as a multifunctional soft gripper, and its embedded soft sensors enable it to have better controllability in real problems. The multifunctionality of this actuator has been validated with several experimental tests, and also we have shown it has excellent potential in gripping a variety of objects. Finally, the embedded sensors can discriminate the main functions of actuators, and also they can play the role of independent sensors as well like a stretch, pressure, or bending sensors.
[ { "created": "Thu, 3 Mar 2022 10:10:24 GMT", "version": "v1" } ]
2022-03-04
[ [ "Annabestani", "Mohsen", "" ], [ "Shabani", "Majid", "" ], [ "Magalhaes", "Samuel Videira", "" ], [ "Mondini", "Alessio", "" ], [ "Mazzolai", "Barbara", "" ] ]
This work presents a new fiberless soft pneumatic actuator that can work multifunctional and bidirectional, and its embedded sensors give it a self-proprioception ability. This actuator works based on the idea of employing helical pressure channels. Applying the controlled input pressures into these two channels causes a variety of deformations and actuation. In particular, single pressure, imbalanced pressures, and balanced pressures applied in the channels cause bidirectional coilings, opposite bendings, and elongation, respectively, in a single unit actuator. Also, two U-shaped microchannels are created, and by injecting a gel-based conductive material, the actuator is equipped with resistive sensors which are responsive to a vast dynamic range from a small oscillation to a large elongation. This actuator has so many promising features as a multifunctional soft gripper, and its embedded soft sensors enable it to have better controllability in real problems. The multifunctionality of this actuator has been validated with several experimental tests, and also we have shown it has excellent potential in gripping a variety of objects. Finally, the embedded sensors can discriminate the main functions of actuators, and also they can play the role of independent sensors as well like a stretch, pressure, or bending sensors.
1704.09004
Adam Hey
Adam J. Hey and Michael A. Heroux
Kanban + X: Leveraging Kanban for Focused Improvements
7 pages, 6 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agile Development is used for many problems, often with different priorities and challenges. However, generalized engineering methodologies often overlook the particularities of a project. To solve this problem, we have looked at ways engineers have modified development methodologies for a particular focus, and created a generalized framework for leveraging Kanban towards focused improvements. The result is a parallel iterative board that tracks and visualizes progress towards a focus, which we have applied to security, sustainability, and high performance as examples. Through use of this system, software projects can be more focused and directed towards their goals.
[ { "created": "Fri, 28 Apr 2017 16:47:57 GMT", "version": "v1" } ]
2017-05-01
[ [ "Hey", "Adam J.", "" ], [ "Heroux", "Michael A.", "" ] ]
Agile Development is used for many problems, often with different priorities and challenges. However, generalized engineering methodologies often overlook the particularities of a project. To solve this problem, we have looked at ways engineers have modified development methodologies for a particular focus, and created a generalized framework for leveraging Kanban towards focused improvements. The result is a parallel iterative board that tracks and visualizes progress towards a focus, which we have applied to security, sustainability, and high performance as examples. Through use of this system, software projects can be more focused and directed towards their goals.
2403.14124
Yong He
Yong He, Hongshan Yu, Muhammad Ibrahim, Xiaoyan Liu, Tongjia Chen, Anwaar Ulhaq, Ajmal Mian
Soft Masked Transformer for Point Cloud Processing with Skip Attention-Based Upsampling
14 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud processing methods leverage local and global point features to cater to downstream tasks, yet they often overlook the task-level context inherent in point clouds during the encoding stage. We argue that integrating task-level information into the encoding stage significantly enhances performance. To that end, we propose SMTransformer which incorporates task-level information into a vector-based transformer by utilizing a soft mask generated from task-level queries and keys to learn the attention weights. Additionally, to facilitate effective communication between features from the encoding and decoding layers in high-level tasks such as segmentation, we introduce a skip-attention-based up-sampling block. This block dynamically fuses features from various resolution points across the encoding and decoding layers. To mitigate the increase in network parameters and training time resulting from the complexity of the aforementioned blocks, we propose a novel shared position encoding strategy. This strategy allows various transformer blocks to share the same position information over the same resolution points, thereby reducing network parameters and training time without compromising accuracy. Experimental comparisons with existing methods on multiple datasets demonstrate the efficacy of SMTransformer and skip-attention-based up-sampling for point cloud processing tasks, including semantic segmentation and classification. In particular, we achieve state-of-the-art semantic segmentation results of 73.4% mIoU on S3DIS Area 5 and 62.4% mIoU on the SWAN dataset.
[ { "created": "Thu, 21 Mar 2024 04:34:24 GMT", "version": "v1" } ]
2024-03-22
[ [ "He", "Yong", "" ], [ "Yu", "Hongshan", "" ], [ "Ibrahim", "Muhammad", "" ], [ "Liu", "Xiaoyan", "" ], [ "Chen", "Tongjia", "" ], [ "Ulhaq", "Anwaar", "" ], [ "Mian", "Ajmal", "" ] ]
Point cloud processing methods leverage local and global point features to cater to downstream tasks, yet they often overlook the task-level context inherent in point clouds during the encoding stage. We argue that integrating task-level information into the encoding stage significantly enhances performance. To that end, we propose SMTransformer which incorporates task-level information into a vector-based transformer by utilizing a soft mask generated from task-level queries and keys to learn the attention weights. Additionally, to facilitate effective communication between features from the encoding and decoding layers in high-level tasks such as segmentation, we introduce a skip-attention-based up-sampling block. This block dynamically fuses features from various resolution points across the encoding and decoding layers. To mitigate the increase in network parameters and training time resulting from the complexity of the aforementioned blocks, we propose a novel shared position encoding strategy. This strategy allows various transformer blocks to share the same position information over the same resolution points, thereby reducing network parameters and training time without compromising accuracy. Experimental comparisons with existing methods on multiple datasets demonstrate the efficacy of SMTransformer and skip-attention-based up-sampling for point cloud processing tasks, including semantic segmentation and classification. In particular, we achieve state-of-the-art semantic segmentation results of 73.4% mIoU on S3DIS Area 5 and 62.4% mIoU on the SWAN dataset.
2004.12362
Xiaojun Quan
Kai Wang and Weizhou Shen and Yunyi Yang and Xiaojun Quan and Rui Wang
Relational Graph Attention Network for Aspect-based Sentiment Analysis
To appear at ACL 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections. In this paper, we address this problem by means of effective encoding of syntax information. Firstly, we define a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree. Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction. Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence.
[ { "created": "Sun, 26 Apr 2020 12:21:04 GMT", "version": "v1" } ]
2020-04-28
[ [ "Wang", "Kai", "" ], [ "Shen", "Weizhou", "" ], [ "Yang", "Yunyi", "" ], [ "Quan", "Xiaojun", "" ], [ "Wang", "Rui", "" ] ]
Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections. In this paper, we address this problem by means of effective encoding of syntax information. Firstly, we define a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree. Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction. Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence.
2007.06767
Silvano Herculano da Luz Junior
Silvano Herculano da Luz J\'unior, Francisco \'Icaro Cipriano Silva, Gustavo Sousa Galisa Albuquerque, Francisco Petr\^onio Alencar de Medeiros and Heremita Brasileiro Lira
Enterprise Architecture in Healthcare Systems: A systematic literature review
22 pages
null
10.17632/44bygxg8w3.1
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enterprise architecture (EA) has been present in scientific literature since the 1980s and has branched out into several research fields. EA delivers value by presenting business and ICT leaders with recommendations for adjusting policies and projects to achieve business goals. Although there are many works on the EA application in healthcare systems, the literature lacks studies that provide a systematic approach to this topic specifically. This work presents a deep and broad Systematic Literature Review (SLR) to select studies demonstrating current EA practices in healthcare systems. The researchers established an SLR protocol returning 280 primary studies after the first step of the Data Selection and a consolidated inclusion of 46 articles after the second step. They assessed the level of disagreement during the team's evaluations using Cohen's Kappa. This SLR revealed essential aspects of state-of-the-art EA application in healthcare systems, such as the most used methodologies and tools, best practices, and criteria considered for their choice. It also analyzed the main positive impacts, challenges, and critical success factors described by the studies' authors based on empirical approaches. Besides, this work brings the main publication channels and the most influential authors on the topic of EA in Healthcare systems.
[ { "created": "Tue, 14 Jul 2020 02:01:25 GMT", "version": "v1" }, { "created": "Fri, 17 Jul 2020 11:18:38 GMT", "version": "v2" } ]
2020-07-20
[ [ "Júnior", "Silvano Herculano da Luz", "" ], [ "Silva", "Francisco Ícaro Cipriano", "" ], [ "Albuquerque", "Gustavo Sousa Galisa", "" ], [ "de Medeiros", "Francisco Petrônio Alencar", "" ], [ "Lira", "Heremita Brasileiro", "" ] ]
Enterprise architecture (EA) has been present in scientific literature since the 1980s and has branched out into several research fields. EA delivers value by presenting business and ICT leaders with recommendations for adjusting policies and projects to achieve business goals. Although there are many works on the EA application in healthcare systems, the literature lacks studies that provide a systematic approach to this topic specifically. This work presents a deep and broad Systematic Literature Review (SLR) to select studies demonstrating current EA practices in healthcare systems. The researchers established an SLR protocol returning 280 primary studies after the first step of the Data Selection and a consolidated inclusion of 46 articles after the second step. They assessed the level of disagreement during the team's evaluations using Cohen's Kappa. This SLR revealed essential aspects of state-of-the-art EA application in healthcare systems, such as the most used methodologies and tools, best practices, and criteria considered for their choice. It also analyzed the main positive impacts, challenges, and critical success factors described by the studies' authors based on empirical approaches. Besides, this work brings the main publication channels and the most influential authors on the topic of EA in Healthcare systems.
1811.10971
James Thorne
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal
The Fact Extraction and VERification (FEVER) Shared Task
Revised from published version in the proceedings of the FEVER workshop at EMNLP 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the results of the first Fact Extraction and VERification (FEVER) Shared Task. The task challenged participants to classify whether human-written factoid claims could be Supported or Refuted using evidence retrieved from Wikipedia. We received entries from 23 competing teams, 19 of which scored higher than the previously published baseline. The best performing system achieved a FEVER score of 64.21%. In this paper, we present the results of the shared task and a summary of the systems, highlighting commonalities and innovations among participating systems.
[ { "created": "Tue, 27 Nov 2018 13:32:26 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2018 09:26:31 GMT", "version": "v2" } ]
2018-12-03
[ [ "Thorne", "James", "" ], [ "Vlachos", "Andreas", "" ], [ "Cocarascu", "Oana", "" ], [ "Christodoulopoulos", "Christos", "" ], [ "Mittal", "Arpit", "" ] ]
We present the results of the first Fact Extraction and VERification (FEVER) Shared Task. The task challenged participants to classify whether human-written factoid claims could be Supported or Refuted using evidence retrieved from Wikipedia. We received entries from 23 competing teams, 19 of which scored higher than the previously published baseline. The best performing system achieved a FEVER score of 64.21%. In this paper, we present the results of the shared task and a summary of the systems, highlighting commonalities and innovations among participating systems.
2108.02425
Tao Kong
Yiming Li, Tao Kong, Ruihang Chu, Yifeng Li, Peng Wang and Lei Li
Simultaneous Semantic and Collision Learning for 6-DoF Grasp Pose Estimation
International Conference on Intelligent Robots and Systems (IROS) 2021
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Grasping in cluttered scenes has always been a great challenge for robots, due to the requirement of the ability to well understand the scene and object information. Previous works usually assume that the geometry information of the objects is available, or utilize a step-wise, multi-stage strategy to predict the feasible 6-DoF grasp poses. In this work, we propose to formalize the 6-DoF grasp pose estimation as a simultaneous multi-task learning problem. In a unified framework, we jointly predict the feasible 6-DoF grasp poses, instance semantic segmentation, and collision information. The whole framework is jointly optimized and end-to-end differentiable. Our model is evaluated on large-scale benchmarks as well as the real robot system. On the public dataset, our method outperforms prior state-of-the-art methods by a large margin (+4.08 AP). We also demonstrate the implementation of our model on a real robotic platform and show that the robot can accurately grasp target objects in cluttered scenarios with a high success rate. Project link: https://openbyterobotics.github.io/sscl
[ { "created": "Thu, 5 Aug 2021 07:46:48 GMT", "version": "v1" }, { "created": "Sun, 26 Sep 2021 08:26:38 GMT", "version": "v2" } ]
2021-09-28
[ [ "Li", "Yiming", "" ], [ "Kong", "Tao", "" ], [ "Chu", "Ruihang", "" ], [ "Li", "Yifeng", "" ], [ "Wang", "Peng", "" ], [ "Li", "Lei", "" ] ]
Grasping in cluttered scenes has always been a great challenge for robots, due to the requirement of the ability to well understand the scene and object information. Previous works usually assume that the geometry information of the objects is available, or utilize a step-wise, multi-stage strategy to predict the feasible 6-DoF grasp poses. In this work, we propose to formalize the 6-DoF grasp pose estimation as a simultaneous multi-task learning problem. In a unified framework, we jointly predict the feasible 6-DoF grasp poses, instance semantic segmentation, and collision information. The whole framework is jointly optimized and end-to-end differentiable. Our model is evaluated on large-scale benchmarks as well as the real robot system. On the public dataset, our method outperforms prior state-of-the-art methods by a large margin (+4.08 AP). We also demonstrate the implementation of our model on a real robotic platform and show that the robot can accurately grasp target objects in cluttered scenarios with a high success rate. Project link: https://openbyterobotics.github.io/sscl
0901.4934
Carl Hewitt
Carl Hewitt
A historical perspective on developing foundations iInfo(TM) information systems: iConsult(TM) and iEntertain(TM) apps using iDescribers(TM) information integration for iOrgs(TM) information systems
updated title and abstract
null
null
null
cs.DC cs.DB cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Technology now at hand can integrate all kinds of digital information for individuals, groups, and organizations so their information usefully links together. iInfo(TM) information integration works by making connections including examples like the following: - A statistical connection between "being in a traffic jam" and "driving in downtown Trenton between 5PM and 6PM on a weekday." - A terminological connection between "MSR" and "Microsoft Research." - A causal connection between "joining a group" and "being a member of the group." - A syntactic connection between "a pin dropped" and "a dropped pin." - A biological connection between "a dolphin" and "a mammal". - A demographic connection between "undocumented residents of California" and "7% of the population of California." - A geographical connection between "Leeds" and "England." - A temporal connection between "turning on a computer" and "joining an on-line discussion." By making these connections, iInfo offers tremendous value for individuals, families, groups, and organizations in making more effective use of information technology. In practice, integrated information is invariably pervasively inconsistent. Therefore iInfo must be able to make connections even in the face of inconsistency. The business of iInfo is not to make difficult decisions like deciding the ultimate truth or probability of propositions. Instead it provides means for processing information and carefully recording its provenance including arguments (including arguments about arguments) for and against propositions that is used by iConsult(TM) and iEntertain(TM) apps in iOrgs(TM) Information Systems. A historical perspective on the above questions is highly pertinent to the current quest to develop foundations for privacy-friendly client-cloud computing.
[ { "created": "Fri, 30 Jan 2009 17:33:14 GMT", "version": "v1" }, { "created": "Tue, 17 Nov 2009 12:54:00 GMT", "version": "v10" }, { "created": "Tue, 24 Nov 2009 15:36:18 GMT", "version": "v11" }, { "created": "Tue, 29 Dec 2009 22:54:57 GMT", "version": "v12" }, { "created": "Thu, 7 Jan 2010 20:11:54 GMT", "version": "v13" }, { "created": "Tue, 12 Jan 2010 19:19:22 GMT", "version": "v14" }, { "created": "Thu, 21 Jan 2010 21:50:00 GMT", "version": "v15" }, { "created": "Thu, 8 Apr 2010 21:48:15 GMT", "version": "v16" }, { "created": "Mon, 19 Apr 2010 04:13:36 GMT", "version": "v17" }, { "created": "Tue, 20 Apr 2010 12:27:24 GMT", "version": "v18" }, { "created": "Mon, 26 Apr 2010 15:18:50 GMT", "version": "v19" }, { "created": "Tue, 24 Feb 2009 01:37:44 GMT", "version": "v2" }, { "created": "Mon, 4 Oct 2010 21:16:49 GMT", "version": "v20" }, { "created": "Tue, 14 Apr 2009 11:54:47 GMT", "version": "v3" }, { "created": "Mon, 1 Jun 2009 23:05:51 GMT", "version": "v4" }, { "created": "Mon, 20 Jul 2009 00:53:03 GMT", "version": "v5" }, { "created": "Sun, 2 Aug 2009 21:06:55 GMT", "version": "v6" }, { "created": "Tue, 29 Sep 2009 18:19:02 GMT", "version": "v7" }, { "created": "Fri, 2 Oct 2009 14:42:21 GMT", "version": "v8" }, { "created": "Thu, 29 Oct 2009 21:14:14 GMT", "version": "v9" } ]
2010-10-06
[ [ "Hewitt", "Carl", "" ] ]
Technology now at hand can integrate all kinds of digital information for individuals, groups, and organizations so their information usefully links together. iInfo(TM) information integration works by making connections including examples like the following: - A statistical connection between "being in a traffic jam" and "driving in downtown Trenton between 5PM and 6PM on a weekday." - A terminological connection between "MSR" and "Microsoft Research." - A causal connection between "joining a group" and "being a member of the group." - A syntactic connection between "a pin dropped" and "a dropped pin." - A biological connection between "a dolphin" and "a mammal". - A demographic connection between "undocumented residents of California" and "7% of the population of California." - A geographical connection between "Leeds" and "England." - A temporal connection between "turning on a computer" and "joining an on-line discussion." By making these connections, iInfo offers tremendous value for individuals, families, groups, and organizations in making more effective use of information technology. In practice, integrated information is invariably pervasively inconsistent. Therefore iInfo must be able to make connections even in the face of inconsistency. The business of iInfo is not to make difficult decisions like deciding the ultimate truth or probability of propositions. Instead it provides means for processing information and carefully recording its provenance including arguments (including arguments about arguments) for and against propositions that is used by iConsult(TM) and iEntertain(TM) apps in iOrgs(TM) Information Systems. A historical perspective on the above questions is highly pertinent to the current quest to develop foundations for privacy-friendly client-cloud computing.
2408.07644
Jianye Xu
Jianye Xu, Pan Hu, Bassam Alrifaee
SigmaRL: A Sample-Efficient and Generalizable Multi-Agent Reinforcement Learning Framework for Motion Planning
8 pages, 5 figures, accepted for presentation at the IEEE International Conference on Intelligent Transportation Systems (ITSC) 2024
null
10.13140/RG.2.2.24505.17769
null
cs.RO cs.LG cs.MA cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces an open-source, decentralized framework named SigmaRL, designed to enhance both sample efficiency and generalization of multi-agent Reinforcement Learning (RL) for motion planning of connected and automated vehicles. Most RL agents exhibit a limited capacity to generalize, often focusing narrowly on specific scenarios, and are usually evaluated in similar or even the same scenarios seen during training. Various methods have been proposed to address these challenges, including experience replay and regularization. However, how observation design in RL affects sample efficiency and generalization remains an under-explored area. We address this gap by proposing five strategies to design information-dense observations, focusing on general features that are applicable to most traffic scenarios. We train our RL agents using these strategies on an intersection and evaluate their generalization through numerical experiments across completely unseen traffic scenarios, including a new intersection, an on-ramp, and a roundabout. Incorporating these information-dense observations reduces training times to under one hour on a single CPU, and the evaluation results reveal that our RL agents can effectively zero-shot generalize. Code: github.com/cas-lab-munich/SigmaRL
[ { "created": "Wed, 14 Aug 2024 16:16:51 GMT", "version": "v1" } ]
2024-08-15
[ [ "Xu", "Jianye", "" ], [ "Hu", "Pan", "" ], [ "Alrifaee", "Bassam", "" ] ]
This paper introduces an open-source, decentralized framework named SigmaRL, designed to enhance both sample efficiency and generalization of multi-agent Reinforcement Learning (RL) for motion planning of connected and automated vehicles. Most RL agents exhibit a limited capacity to generalize, often focusing narrowly on specific scenarios, and are usually evaluated in similar or even the same scenarios seen during training. Various methods have been proposed to address these challenges, including experience replay and regularization. However, how observation design in RL affects sample efficiency and generalization remains an under-explored area. We address this gap by proposing five strategies to design information-dense observations, focusing on general features that are applicable to most traffic scenarios. We train our RL agents using these strategies on an intersection and evaluate their generalization through numerical experiments across completely unseen traffic scenarios, including a new intersection, an on-ramp, and a roundabout. Incorporating these information-dense observations reduces training times to under one hour on a single CPU, and the evaluation results reveal that our RL agents can effectively zero-shot generalize. Code: github.com/cas-lab-munich/SigmaRL
2402.05749
Yunhao Tang
Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, R\'emi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo \'Avila Pires, Bilal Piot
Generalized Preference Optimization: A Unified Approach to Offline Alignment
Accepted at ICML 2024 main conference
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Offline preference optimization allows fine-tuning large models directly from offline data, and has proved effective in recent alignment practices. We propose generalized preference optimization (GPO), a family of offline losses parameterized by a general class of convex functions. GPO enables a unified view over preference optimization, encompassing existing algorithms such as DPO, IPO and SLiC as special cases, while naturally introducing new variants. The GPO framework also sheds light on how offline algorithms enforce regularization, through the design of the convex function that defines the loss. Our analysis and experiments reveal the connections and subtle differences between the offline regularization and the KL divergence regularization intended by the canonical RLHF formulation. In a controlled setting akin to Gao et al 2023, we also show that different GPO variants achieve similar trade-offs between regularization and performance, though the optimal values of hyper-parameter might differ as predicted by theory. In all, our results present new algorithmic toolkits and empirical insights to alignment practitioners.
[ { "created": "Thu, 8 Feb 2024 15:33:09 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 23:25:15 GMT", "version": "v2" } ]
2024-05-30
[ [ "Tang", "Yunhao", "" ], [ "Guo", "Zhaohan Daniel", "" ], [ "Zheng", "Zeyu", "" ], [ "Calandriello", "Daniele", "" ], [ "Munos", "Rémi", "" ], [ "Rowland", "Mark", "" ], [ "Richemond", "Pierre Harvey", "" ], [ "Valko", "Michal", "" ], [ "Pires", "Bernardo Ávila", "" ], [ "Piot", "Bilal", "" ] ]
Offline preference optimization allows fine-tuning large models directly from offline data, and has proved effective in recent alignment practices. We propose generalized preference optimization (GPO), a family of offline losses parameterized by a general class of convex functions. GPO enables a unified view over preference optimization, encompassing existing algorithms such as DPO, IPO and SLiC as special cases, while naturally introducing new variants. The GPO framework also sheds light on how offline algorithms enforce regularization, through the design of the convex function that defines the loss. Our analysis and experiments reveal the connections and subtle differences between the offline regularization and the KL divergence regularization intended by the canonical RLHF formulation. In a controlled setting akin to Gao et al 2023, we also show that different GPO variants achieve similar trade-offs between regularization and performance, though the optimal values of hyper-parameter might differ as predicted by theory. In all, our results present new algorithmic toolkits and empirical insights to alignment practitioners.
1604.08123
Adrian Garcia-Rodriguez
Adrian Garcia-Rodriguez, Vijay Venkateswaran, Pawel Rulikowski and Christos Masouros
Hybrid Analog-Digital Precoding Revisited under Realistic RF Modeling
12 pages, 5 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we revisit hybrid analog-digital precoding systems with emphasis on their modelling and radio-frequency (RF) losses, to realistically evaluate their benefits in 5G system implementations. For this, we decompose the analog beamforming networks (ABFN) as a bank of commonly used RF components and formulate realistic model constraints based on their S-parameters. Specifically, we concentrate on fully-connected ABFN (FC-ABFN) and Butler networks for implementing the discrete Fourier transform (DFT) in the RF domain. The results presented in this paper reveal that the performance and energy efficiency of hybrid precoding systems are severely affected, once practical factors are considered in the overall design. In this context, we also show that Butler RF networks are capable of providing better performance than FC-ABFN for systems with a large number of RF chains.
[ { "created": "Wed, 27 Apr 2016 16:09:08 GMT", "version": "v1" } ]
2016-04-28
[ [ "Garcia-Rodriguez", "Adrian", "" ], [ "Venkateswaran", "Vijay", "" ], [ "Rulikowski", "Pawel", "" ], [ "Masouros", "Christos", "" ] ]
In this paper we revisit hybrid analog-digital precoding systems with emphasis on their modelling and radio-frequency (RF) losses, to realistically evaluate their benefits in 5G system implementations. For this, we decompose the analog beamforming networks (ABFN) as a bank of commonly used RF components and formulate realistic model constraints based on their S-parameters. Specifically, we concentrate on fully-connected ABFN (FC-ABFN) and Butler networks for implementing the discrete Fourier transform (DFT) in the RF domain. The results presented in this paper reveal that the performance and energy efficiency of hybrid precoding systems are severely affected, once practical factors are considered in the overall design. In this context, we also show that Butler RF networks are capable of providing better performance than FC-ABFN for systems with a large number of RF chains.
2312.08985
Han Liang
Han Liang, Jiacheng Bao, Ruichi Zhang, Sihan Ren, Yuecheng Xu, Sibei Yang, Xin Chen, Jingyi Yu, Lan Xu
OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
accepted by CVPR 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We have recently seen tremendous progress in realistic text-to-motion generation. Yet, the existing methods often fail or produce implausible motions with unseen text inputs, which limits the applications. In this paper, we present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the pretrain-then-finetune paradigm into the text-to-motion generation. At the pre-training stage, our model improves the generation ability by learning the rich out-of-domain inherent motion traits. To this end, we scale a large unconditional diffusion model up to 1B parameters, so as to utilize the massive unlabeled motion data up to over 20M motion instances. At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information, through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block. The MoC block adaptively recognizes various ranges of the sub-motions with a cross-attention mechanism and processes them separately with the text-token-specific experts. Such a design effectively aligns the CLIP token embeddings of text prompts to various ranges of compact and expressive motion features. Extensive experiments demonstrate that our OMG achieves significant improvements over the state-of-the-art methods on zero-shot text-to-motion generation. Project page: https://tr3e.github.io/omg-page.
[ { "created": "Thu, 14 Dec 2023 14:31:40 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2023 05:05:32 GMT", "version": "v2" }, { "created": "Tue, 19 Mar 2024 06:50:17 GMT", "version": "v3" } ]
2024-03-20
[ [ "Liang", "Han", "" ], [ "Bao", "Jiacheng", "" ], [ "Zhang", "Ruichi", "" ], [ "Ren", "Sihan", "" ], [ "Xu", "Yuecheng", "" ], [ "Yang", "Sibei", "" ], [ "Chen", "Xin", "" ], [ "Yu", "Jingyi", "" ], [ "Xu", "Lan", "" ] ]
We have recently seen tremendous progress in realistic text-to-motion generation. Yet, the existing methods often fail or produce implausible motions with unseen text inputs, which limits the applications. In this paper, we present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the pretrain-then-finetune paradigm into the text-to-motion generation. At the pre-training stage, our model improves the generation ability by learning the rich out-of-domain inherent motion traits. To this end, we scale a large unconditional diffusion model up to 1B parameters, so as to utilize the massive unlabeled motion data up to over 20M motion instances. At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information, through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block. The MoC block adaptively recognizes various ranges of the sub-motions with a cross-attention mechanism and processes them separately with the text-token-specific experts. Such a design effectively aligns the CLIP token embeddings of text prompts to various ranges of compact and expressive motion features. Extensive experiments demonstrate that our OMG achieves significant improvements over the state-of-the-art methods on zero-shot text-to-motion generation. Project page: https://tr3e.github.io/omg-page.
2305.06142
Jiaqi Sun
Jiaqi Sun, Lin Zhang, Guangyi Chen, Kun Zhang, Peng XU, Yujiu Yang
Feature Expansion for Graph Neural Networks
Accepted by ICML'23
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, providing the convenience of studying the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with comparable inference time to the baseline, and demonstrate its efficient convergence capability.
[ { "created": "Wed, 10 May 2023 13:45:57 GMT", "version": "v1" }, { "created": "Sat, 27 May 2023 17:26:04 GMT", "version": "v2" } ]
2023-05-30
[ [ "Sun", "Jiaqi", "" ], [ "Zhang", "Lin", "" ], [ "Chen", "Guangyi", "" ], [ "Zhang", "Kun", "" ], [ "XU", "Peng", "" ], [ "Yang", "Yujiu", "" ] ]
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, providing the convenience of studying the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with comparable inference time to the baseline, and demonstrate its efficient convergence capability.
2003.05240
Christian Meske
Christian Meske, Konstantin Wilms, Stefan Stieglitz
Enterprise Social Networks as Digital Infrastructures -- Understanding the Utilitarian Value of Social Media at the Workplace
null
Information Systems Management, 36:4, pp. 350-367 (2019)
null
null
cs.SI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this study, we first show that while both the perceived usefulness and perceived enjoyment of enterprise social networks impact employees' intentions for continuous participation, the utilitarian value significantly outpaces its hedonic value. Second, we prove that the network's utilitarian value is constituted by its digital infrastructure characteristics: versatility, adaptability, interconnectedness and invisibility-in-use. The study is set within a software engineering company and is based on quantitative survey research, applying partial least squares structural equation modeling.
[ { "created": "Wed, 11 Mar 2020 11:56:27 GMT", "version": "v1" } ]
2020-03-12
[ [ "Meske", "Christian", "" ], [ "Wilms", "Konstantin", "" ], [ "Stieglitz", "Stefan", "" ] ]
In this study, we first show that while both the perceived usefulness and perceived enjoyment of enterprise social networks impact employees' intentions for continuous participation, the utilitarian value significantly outpaces its hedonic value. Second, we prove that the network's utilitarian value is constituted by its digital infrastructure characteristics: versatility, adaptability, interconnectedness and invisibility-in-use. The study is set within a software engineering company and is based on quantitative survey research, applying partial least squares structural equation modeling.
2404.15378
Khai Nguyen
Khai Nguyen and Nhat Ho
Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions
28 pages, 11 figures, 4 tables
null
null
null
cs.CV cs.AI cs.GR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sliced Wasserstein (SW) and Generalized Sliced Wasserstein (GSW) have been widely used in applications due to their computational and statistical scalability. However, the SW and the GSW are only defined between distributions supported on a homogeneous domain. This limitation prevents their usage in applications with heterogeneous joint distributions with marginal distributions supported on multiple different domains. Using SW and GSW directly on the joint domains cannot make a meaningful comparison since their homogeneous slicing operators, i.e., the Radon Transform (RT) and the Generalized Radon Transform (GRT), are not expressive enough to capture the structure of the joint supports set. To address the issue, we propose two new slicing operators, i.e., the Partial Generalized Radon Transform (PGRT) and the Hierarchical Hybrid Radon Transform (HHRT). In greater detail, PGRT is the generalization of the Partial Radon Transform (PRT), which transforms a subset of function arguments non-linearly, while HHRT is the composition of PRT and multiple domain-specific PGRT on marginal domain arguments. By using HHRT, we extend the SW into the Hierarchical Hybrid Sliced Wasserstein (H2SW) distance, which is designed specifically for comparing heterogeneous joint distributions. We then discuss the topological, statistical, and computational properties of H2SW. Finally, we demonstrate the favorable performance of H2SW in 3D mesh deformation, deep 3D mesh autoencoders, and datasets comparison.
[ { "created": "Tue, 23 Apr 2024 03:04:22 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 20:52:08 GMT", "version": "v2" } ]
2024-05-02
[ [ "Nguyen", "Khai", "" ], [ "Ho", "Nhat", "" ] ]
Sliced Wasserstein (SW) and Generalized Sliced Wasserstein (GSW) have been widely used in applications due to their computational and statistical scalability. However, the SW and the GSW are only defined between distributions supported on a homogeneous domain. This limitation prevents their usage in applications with heterogeneous joint distributions with marginal distributions supported on multiple different domains. Using SW and GSW directly on the joint domains cannot make a meaningful comparison since their homogeneous slicing operators, i.e., the Radon Transform (RT) and the Generalized Radon Transform (GRT), are not expressive enough to capture the structure of the joint supports set. To address the issue, we propose two new slicing operators, i.e., the Partial Generalized Radon Transform (PGRT) and the Hierarchical Hybrid Radon Transform (HHRT). In greater detail, PGRT is the generalization of the Partial Radon Transform (PRT), which transforms a subset of function arguments non-linearly, while HHRT is the composition of PRT and multiple domain-specific PGRT on marginal domain arguments. By using HHRT, we extend the SW into the Hierarchical Hybrid Sliced Wasserstein (H2SW) distance, which is designed specifically for comparing heterogeneous joint distributions. We then discuss the topological, statistical, and computational properties of H2SW. Finally, we demonstrate the favorable performance of H2SW in 3D mesh deformation, deep 3D mesh autoencoders, and datasets comparison.
1705.05231
Qiang Lu
Qiang Lu and Kyoung-Dae Kim
Autonomous and Connected Intersection Crossing Traffic Management using Discrete-Time Occupancies Trajectory
34 pages, 11 figures
null
null
null
cs.SY cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address a problem of safe and efficient intersection crossing traffic management of autonomous and connected ground traffic. Toward this objective, we propose an algorithm that is called the Discrete-time occupancies trajectory based Intersection traffic Coordination Algorithm (DICA). We first prove that the basic DICA is deadlock free and also starvation free. Then, we show that the basic DICA has a computational complexity of $\mathcal{O}(n^2 L_m^3)$ where $n$ is the number of vehicles granted to cross an intersection and $L_m$ is the maximum length of intersection crossing routes. To improve the overall computational efficiency of the algorithm, the basic DICA is enhanced by several computational approaches that are proposed in this paper. The enhanced algorithm has the computational complexity of $\mathcal{O}(n^2 L_m \log_2 L_m)$. The improved computational efficiency of the enhanced algorithm is validated through simulation using an open source traffic simulator, called the Simulation of Urban MObility (SUMO). The overall throughput as well as the computational efficiency of the enhanced algorithm are also compared with those of an optimized traffic light control.
[ { "created": "Fri, 12 May 2017 13:40:54 GMT", "version": "v1" } ]
2017-05-16
[ [ "Lu", "Qiang", "" ], [ "Kim", "Kyoung-Dae", "" ] ]
In this paper, we address a problem of safe and efficient intersection crossing traffic management of autonomous and connected ground traffic. Toward this objective, we propose an algorithm that is called the Discrete-time occupancies trajectory based Intersection traffic Coordination Algorithm (DICA). We first prove that the basic DICA is deadlock free and also starvation free. Then, we show that the basic DICA has a computational complexity of $\mathcal{O}(n^2 L_m^3)$ where $n$ is the number of vehicles granted to cross an intersection and $L_m$ is the maximum length of intersection crossing routes. To improve the overall computational efficiency of the algorithm, the basic DICA is enhanced by several computational approaches that are proposed in this paper. The enhanced algorithm has the computational complexity of $\mathcal{O}(n^2 L_m \log_2 L_m)$. The improved computational efficiency of the enhanced algorithm is validated through simulation using an open source traffic simulator, called the Simulation of Urban MObility (SUMO). The overall throughput as well as the computational efficiency of the enhanced algorithm are also compared with those of an optimized traffic light control.
2106.14126
GuangMeng Zhou
Guangmeng Zhou, Ke Xu, Qi Li, Yang Liu, Yi Zhao
AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive Pruning
null
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multi-party collaborative learning, the parameter server sends a global model to each data holder for local training and then aggregates committed models globally to achieve privacy protection. However, both the straggler issue of synchronous collaborative learning and the staleness issue of asynchronous collaborative learning make collaborative learning inefficient in real-world heterogeneous environments. We propose a novel and efficient collaborative learning framework named AdaptCL, which generates an adaptive sub-model dynamically from the global base model for each data holder, without any prior information about worker capability. All workers (data holders) achieve approximately identical update time as the fastest worker by equipping them with capability-adapted pruned models. Thus the training process can be dramatically accelerated. Besides, we tailor the efficient pruned rate learning algorithm and pruning approach for AdaptCL. Meanwhile, AdaptCL provides a mechanism for handling the trade-off between accuracy and time overhead and can be combined with other techniques to accelerate training further. Empirical results show that AdaptCL introduces little computing and communication overhead. AdaptCL achieves time savings of more than 41\% on average and improves accuracy in a low heterogeneous environment. In a highly heterogeneous environment, AdaptCL achieves a training speedup of 6.2x with a slight loss of accuracy.
[ { "created": "Sun, 27 Jun 2021 02:41:19 GMT", "version": "v1" } ]
2021-06-29
[ [ "Zhou", "Guangmeng", "" ], [ "Xu", "Ke", "" ], [ "Li", "Qi", "" ], [ "Liu", "Yang", "" ], [ "Zhao", "Yi", "" ] ]
In multi-party collaborative learning, the parameter server sends a global model to each data holder for local training and then aggregates committed models globally to achieve privacy protection. However, both the straggler issue of synchronous collaborative learning and the staleness issue of asynchronous collaborative learning make collaborative learning inefficient in real-world heterogeneous environments. We propose a novel and efficient collaborative learning framework named AdaptCL, which generates an adaptive sub-model dynamically from the global base model for each data holder, without any prior information about worker capability. All workers (data holders) achieve approximately identical update time as the fastest worker by equipping them with capability-adapted pruned models. Thus the training process can be dramatically accelerated. Besides, we tailor the efficient pruned rate learning algorithm and pruning approach for AdaptCL. Meanwhile, AdaptCL provides a mechanism for handling the trade-off between accuracy and time overhead and can be combined with other techniques to accelerate training further. Empirical results show that AdaptCL introduces little computing and communication overhead. AdaptCL achieves time savings of more than 41\% on average and improves accuracy in a low heterogeneous environment. In a highly heterogeneous environment, AdaptCL achieves a training speedup of 6.2x with a slight loss of accuracy.