Dataset schema (field: type, observed range):

- id: string, length 9 to 10
- submitter: string, length 1 to 64
- authors: string, length 4 to 20.7k
- title: string, length 4 to 246
- comments: string, length 1 to 523
- journal-ref: string, length 4 to 404
- doi: string, length 11 to 153
- report-no: string, length 2 to 254
- categories: string, length 5 to 98
- license: string, 9 distinct values
- orig_abstract: string, length 14 to 3.35k
- versions: list, 1 to 60 items
- update_date: string, length 10
- authors_parsed: list, 1 to 1.35k items
- abstract: string, length 11 to 3.34k
2312.05966
Rui Ye
Rui Ye, Yaxin Du, Zhenyang Ni, Siheng Chen, Yanfeng Wang
Fake It Till Make It: Federated Learning with Consensus-Oriented Generation
27 pages
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In federated learning (FL), data heterogeneity is one key bottleneck that causes model divergence and limits performance. To address this, existing methods often regard data heterogeneity as an inherent property and propose to mitigate its adverse effects by correcting models. In this paper, we seek to break this inherent property by generating data to complement the original dataset and thereby fundamentally reduce the level of heterogeneity. As a novel attempt from the data perspective, we propose federated learning with consensus-oriented generation (FedCOG). FedCOG consists of two key components on the client side: complementary data generation, which generates data extracted from the shared global model to complement the original dataset, and knowledge-distillation-based model training, which distills knowledge from the global model to the local model using the generated data to mitigate over-fitting to the original heterogeneous dataset. FedCOG has two critical advantages: 1) it can serve as a plug-and-play module to further improve the performance of most existing FL methods, and 2) it is naturally compatible with standard FL protocols such as Secure Aggregation since it makes no modification to the communication process. Extensive experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
[ { "created": "Sun, 10 Dec 2023 18:49:59 GMT", "version": "v1" } ]
2023-12-12
[ [ "Ye", "Rui", "" ], [ "Du", "Yaxin", "" ], [ "Ni", "Zhenyang", "" ], [ "Chen", "Siheng", "" ], [ "Wang", "Yanfeng", "" ] ]
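The knowledge-distillation step described in the FedCOG abstract above, where the local model is trained to match the global model's predictions on generated data, commonly follows a temperature-scaled KL formulation. A minimal NumPy sketch (illustrative only; the function name, temperature, and logits are assumptions, not the authors' exact loss):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, averaged over samples,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits / T)  # teacher (global model) soft targets
    q = softmax(student_logits / T)  # student (local model) predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

logits_global = np.array([[2.0, 0.5, -1.0]])
logits_local = np.array([[0.1, 0.2, 0.3]])
print(kd_loss(logits_global, logits_global))  # 0.0 (identical distributions)
print(kd_loss(logits_local, logits_global) > 0)  # True
```

Minimising this term on generated samples pulls the local model toward the global consensus, which is the over-fitting mitigation the abstract describes.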
2307.07928
Xiaohang Ren
Xiaohang Ren, Xingyu Chen, Pengfei Yao, Heung-Yeung Shum, Baoyuan Wang
Reinforced Disentanglement for Face Swapping without Skip Connection
Accepted by ICCV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
State-of-the-art (SOTA) face swap models still suffer from the problem of either the target identity (i.e., shape) being leaked or the target non-identity attributes (e.g., background, hair) failing to be fully preserved in the final results. We show that this insufficient disentanglement is caused by two flawed designs commonly adopted in prior models: (1) relying on a single compressed encoder to represent both the semantic-level non-identity facial attributes (e.g., pose) and the pixel-level non-facial region details, two requirements that are contradictory to satisfy at the same time; (2) relying heavily on long skip connections between the encoder and the final generator, which leak a certain amount of target face identity into the result. To fix them, we introduce a new face swap framework called 'WSC-swap' that gets rid of skip connections and uses two target encoders to respectively capture the pixel-level non-facial region attributes and the semantic non-identity attributes in the face region. To further reinforce the disentanglement learning for the target encoder, we employ both an identity removal loss via adversarial training (i.e., GAN) and a non-identity preservation loss via prior 3DMM models like [11]. Extensive experiments on both FaceForensics++ and CelebA-HQ show that our results significantly outperform previous works on a rich set of metrics, including a novel metric for measuring identity consistency that was completely neglected before.
[ { "created": "Sun, 16 Jul 2023 02:44:19 GMT", "version": "v1" }, { "created": "Wed, 19 Jul 2023 01:43:59 GMT", "version": "v2" }, { "created": "Wed, 26 Jul 2023 01:59:06 GMT", "version": "v3" }, { "created": "Thu, 3 Aug 2023 06:05:02 GMT", "version": "v4" } ]
2023-08-04
[ [ "Ren", "Xiaohang", "" ], [ "Chen", "Xingyu", "" ], [ "Yao", "Pengfei", "" ], [ "Shum", "Heung-Yeung", "" ], [ "Wang", "Baoyuan", "" ] ]
1909.04117
Sandro Rama Fiorini
Sandro Rama Fiorini, Wallas Sousa dos Santos, Rodrigo Costa Mesquita, Guilherme Ferreira Lima, Marcio F. Moreno
General Fragment Model for Information Artifacts
null
null
null
null
cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of semantic descriptions in data-intensive domains requires a systematic model for linking semantic descriptions with their manifestations in fragments of heterogeneous information and data objects. Such information heterogeneity requires a fragment model that is general enough to support the specification of anchors from conceptual models to multiple types of information artifacts. While diverse proposals of anchoring models exist in the literature, they are usually focused on audiovisual information. We propose a generalized fragment model that can be instantiated for different kinds of information artifacts. Our objective is to systematize the way in which fragments and anchors can be described in conceptual models, without committing to a specific vocabulary.
[ { "created": "Mon, 9 Sep 2019 19:29:17 GMT", "version": "v1" } ]
2019-09-11
[ [ "Fiorini", "Sandro Rama", "" ], [ "Santos", "Wallas Sousa dos", "" ], [ "Mesquita", "Rodrigo Costa", "" ], [ "Lima", "Guilherme Ferreira", "" ], [ "Moreno", "Marcio F.", "" ] ]
2306.10558
Marco B. Caminati
Marco B. Caminati
Isabelle Formalisation of Original Representation Theorems
accepted by CICM 2023 conference (regular paper)
null
null
null
cs.LO cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In a recent paper, new theorems linking apparently unrelated mathematical objects (event structures from concurrency theory and full graphs arising in computational biology) were discovered by cross-site data mining on huge databases, building on existing Isabelle-verified event-structure enumeration algorithms. Given the origin and newness of such theorems, their formal verification is particularly desirable. This paper presents such a verification via Isabelle/HOL definitions and theorems, and exposes the technical challenges found in the process. The introduced formalisation completes the verification of the Isabelle-verified event-structure enumeration algorithms into a fully verified framework linking event structures to full graphs.
[ { "created": "Sun, 18 Jun 2023 13:43:21 GMT", "version": "v1" } ]
2023-06-21
[ [ "Caminati", "Marco B.", "" ] ]
2105.00937
Kwang Hee Lee
Kwang Hee Lee, Chaewon Park, Junghyun Oh, Nojun Kwak
LFI-CAM: Learning Feature Importance for Better Visual Explanation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Class Activation Mapping (CAM) is a powerful technique used to understand the decision making of Convolutional Neural Networks (CNNs) in computer vision. Recently, there have been attempts not only to generate better visual explanations, but also to improve classification performance using visual explanations. However, previous works still have their own drawbacks. In this paper, we propose a novel architecture, LFI-CAM, which is trainable for image classification and visual explanation in an end-to-end manner. LFI-CAM generates an attention map for visual explanation during forward propagation and, at the same time, leverages the attention map to improve classification performance through the attention mechanism. Our Feature Importance Network (FIN) focuses on learning the feature importance instead of directly learning the attention map, to obtain a more reliable and consistent attention map. We confirmed that the LFI-CAM model is optimized not only by learning the feature importance but also by enhancing the backbone feature representation to focus more on important features of the input image. Experimental results show that LFI-CAM outperforms the baseline models' accuracy on classification tasks and significantly improves on previous works in terms of attention map quality and stability over different hyper-parameters.
[ { "created": "Mon, 3 May 2021 15:12:21 GMT", "version": "v1" } ]
2021-05-04
[ [ "Lee", "Kwang Hee", "" ], [ "Park", "Chaewon", "" ], [ "Oh", "Junghyun", "" ], [ "Kwak", "Nojun", "" ] ]
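A class activation map of the general kind discussed in the LFI-CAM abstract can be sketched as a feature-importance-weighted sum of convolutional feature maps. A minimal NumPy illustration (not the authors' LFI-CAM implementation; the array shapes and normalisation are assumptions):

```python
import numpy as np

def attention_map(feature_maps, importance):
    """Combine C feature maps of shape (C, H, W) into one (H, W) attention
    map using per-channel importance weights, then min-max normalise."""
    cam = np.tensordot(importance, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                            # ReLU: keep positive evidence
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else cam

gen = np.random.default_rng(0)
fmap = gen.random((8, 7, 7))   # 8 channels of 7x7 backbone features
w = gen.random(8)              # learned per-channel importance weights
att = attention_map(fmap, w)
print(att.shape, float(att.min()), float(att.max()))  # (7, 7) 0.0 1.0
```

The abstract's point is that learning the per-channel importance vector (via the FIN) rather than the map itself tends to give more stable maps; the weighted-sum structure stays the same.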
1909.09141
Elliot Creager
Elliot Creager, David Madras, Toniann Pitassi, Richard Zemel
Causal Modeling for Fairness in Dynamical Systems
null
null
null
null
cs.LG cs.AI cs.CY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many application areas, such as lending, education, and online recommenders, fairness and equity concerns emerge when a machine learning system interacts with a dynamically changing environment to produce both immediate and long-term effects for individuals and demographic groups. We discuss causal directed acyclic graphs (DAGs) as a unifying framework for the recent literature on fairness in such dynamical systems. We show that this formulation affords several new directions of inquiry to the modeler, where causal assumptions can be expressed and manipulated. We emphasize the importance of computing interventional quantities in the dynamical fairness setting, and show how causal assumptions enable simulation (when environment dynamics are known) and off-policy estimation (when dynamics are unknown) of intervention on short- and long-term outcomes, at both the group and individual levels.
[ { "created": "Wed, 18 Sep 2019 20:21:56 GMT", "version": "v1" }, { "created": "Mon, 6 Jul 2020 17:43:02 GMT", "version": "v2" } ]
2020-07-07
[ [ "Creager", "Elliot", "" ], [ "Madras", "David", "" ], [ "Pitassi", "Toniann", "" ], [ "Zemel", "Richard", "" ] ]
2111.09641
Raivo Koot
Raivo Koot, Markus Hennerbichler, Haiping Lu
Evaluating Transformers for Lightweight Action Recognition
pre-print
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In video action recognition, transformers consistently reach state-of-the-art accuracy. However, many models are too heavyweight for the average researcher with limited hardware resources. In this work, we explore the limitations of video transformers for lightweight action recognition. We benchmark 13 video transformers and baselines across 3 large-scale datasets and 10 hardware devices. Our study is the first to evaluate the efficiency of action recognition models in depth across multiple devices and train a wide range of video transformers under the same conditions. We categorize current methods into three classes and show that composite transformers that augment convolutional backbones are best at lightweight action recognition, despite lacking accuracy. Meanwhile, attention-only models need more motion modeling capabilities and stand-alone attention block models currently incur too much latency overhead. Our experiments conclude that current video transformers are not yet capable of lightweight action recognition on par with traditional convolutional baselines, and that the previously mentioned shortcomings need to be addressed to bridge this gap. Code to reproduce our experiments will be made publicly available.
[ { "created": "Thu, 18 Nov 2021 11:45:42 GMT", "version": "v1" }, { "created": "Tue, 7 Dec 2021 21:12:24 GMT", "version": "v2" } ]
2021-12-09
[ [ "Koot", "Raivo", "" ], [ "Hennerbichler", "Markus", "" ], [ "Lu", "Haiping", "" ] ]
1502.02840
Pablo Rodriguez-Mier
Pablo Rodriguez-Mier, Carlos Pedrinaci, Manuel Lama, Manuel Mucientes
An Integrated Semantic Web Service Discovery and Composition Framework
Accepted to appear in IEEE Transactions on Services Computing 2015
null
10.1109/TSC.2015.2402679
null
cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a theoretical analysis of graph-based service composition in terms of its dependency on service discovery. Driven by this analysis, we define a composition framework that integrates fine-grained I/O service discovery, enabling the generation of a graph-based composition containing the set of services that are semantically relevant for an input-output request. The proposed framework also includes an optimal composition search algorithm to extract the best composition from the graph, minimising the length and the number of services, and different graph optimisations to improve the scalability of the system. A practical implementation used for the empirical analysis is also provided. This analysis proves the scalability and flexibility of our proposal and provides insights on how integrated composition systems can be designed in order to achieve good performance in real scenarios for the Web.
[ { "created": "Tue, 10 Feb 2015 10:25:33 GMT", "version": "v1" } ]
2015-02-11
[ [ "Rodriguez-Mier", "Pablo", "" ], [ "Pedrinaci", "Carlos", "" ], [ "Lama", "Manuel", "" ], [ "Mucientes", "Manuel", "" ] ]
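Graph-based composition of the general kind analysed in the abstract above can be sketched as forward chaining over services whose inputs are already available, stopping when the requested outputs are covered. A toy sketch (the service registry, names, and greedy strategy are invented for illustration; this shows reachability, not the paper's optimal search algorithm):

```python
def compose(services, provided, goal):
    """Greedy forward chaining: repeatedly fire any service whose inputs are
    all available and which adds new outputs, until the goal is covered.
    services: name -> (set of inputs, set of outputs)."""
    available = set(provided)
    plan = []
    changed = True
    while changed and not goal <= available:
        changed = False
        for name, (ins, outs) in services.items():
            if name not in plan and ins <= available and not outs <= available:
                plan.append(name)      # service is invocable: add it to the plan
                available |= outs      # its outputs become available
                changed = True
    return plan if goal <= available else None

services = {
    "geocode":   ({"address"}, {"lat", "lon"}),
    "weather":   ({"lat", "lon"}, {"forecast"}),
    "translate": ({"forecast"}, {"forecast_es"}),
}
print(compose(services, {"address"}, {"forecast_es"}))
# ['geocode', 'weather', 'translate']
```

The framework in the paper goes further: it prunes this graph and searches it for a composition that is optimal in length and service count, rather than taking the first plan found.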
2304.06566
Filip Sroubek
Tomas Kerepecky, Filip Sroubek, Adam Novozamsky, Jan Flusser
NeRD: Neural field-based Demosaicking
5 pages, 4 figures, 1 table
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
We introduce NeRD, a new demosaicking method for generating full-color images from Bayer patterns. Our approach leverages advancements in neural fields to perform demosaicking by representing an image as a coordinate-based neural network with sine activation functions. The inputs to the network are spatial coordinates and a low-resolution Bayer pattern, while the outputs are the corresponding RGB values. An encoder network, which is a blend of ResNet and U-net, enhances the implicit neural representation of the image to improve its quality and ensure spatial consistency through prior learning. Our experimental results demonstrate that NeRD outperforms traditional and state-of-the-art CNN-based methods and significantly closes the gap to transformer-based methods.
[ { "created": "Thu, 13 Apr 2023 14:25:05 GMT", "version": "v1" } ]
2023-04-14
[ [ "Kerepecky", "Tomas", "" ], [ "Sroubek", "Filip", "" ], [ "Novozamsky", "Adam", "" ], [ "Flusser", "Jan", "" ] ]
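The coordinate-based network with sine activations described in the NeRD abstract can be sketched as a small SIREN-style MLP mapping pixel coordinates to RGB values. A minimal NumPy sketch (illustrative only; NeRD additionally conditions on the Bayer input and uses a learned encoder, neither of which is modelled here, and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_mlp(coords, weights, w0=30.0):
    """Coordinate MLP with sine activations: maps (N, 2) pixel coordinates
    to (N, 3) RGB values squashed into (0, 1) by a final sigmoid."""
    h = coords
    for W, b in weights[:-1]:
        h = np.sin(w0 * (h @ W + b))   # sine activation on each hidden layer
    W, b = weights[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))

# Random initialisation, purely to show shapes flowing through the network.
dims = [2, 16, 16, 3]
weights = [(rng.normal(0, 1 / np.sqrt(i), (i, o)), np.zeros(o))
           for i, o in zip(dims[:-1], dims[1:])]
xy = np.stack(np.meshgrid(np.linspace(-1, 1, 4),
                          np.linspace(-1, 1, 4)), -1).reshape(-1, 2)
rgb = siren_mlp(xy, weights)
print(rgb.shape)  # (16, 3)
```

Training would fit the weights so that, at the coordinates of observed Bayer samples, the predicted channel matches the measurement; the continuous representation then fills in the missing channels everywhere.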
2401.07154
Cheng Wang
Cheng Wang, Akshay Kakkar, Christopher Redino, Abdul Rahman, Ajinsyam S, Ryan Clark, Daniel Radke, Tyler Cody, Lanxiao Huang, Edward Bowen
Discovering Command and Control Channels Using Reinforcement Learning
SoutheastCon 2023. IEEE, 2023
null
10.1109/SoutheastCon51012.2023.10115173
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Command and control (C2) paths for issuing commands to malware are sometimes the only indicators of its existence within networks. Identifying potential C2 channels is often a manually driven process that requires a deep understanding of cyber tradecraft. Improving the discovery of these channels with a reinforcement learning (RL) based approach that learns to automatically carry out C2 attack campaigns on large networks with multiple defense layers serves to drive efficiency for network operators. In this paper, we model C2 traffic flow as a three-stage process and formulate it as a Markov decision process (MDP) with the objective of maximizing the number of valuable hosts whose data is exfiltrated. The approach also explicitly models payloads and defense mechanisms such as firewalls, which is a novel contribution. The attack paths learned by the RL agent can in turn help the blue team identify high-priority vulnerabilities and develop improved defense strategies. The method is evaluated on a large network with more than a thousand hosts, and the results demonstrate that the agent can effectively learn attack paths while avoiding firewalls.
[ { "created": "Sat, 13 Jan 2024 20:03:11 GMT", "version": "v1" } ]
2024-01-17
[ [ "Wang", "Cheng", "" ], [ "Kakkar", "Akshay", "" ], [ "Redino", "Christopher", "" ], [ "Rahman", "Abdul", "" ], [ "S", "Ajinsyam", "" ], [ "Clark", "Ryan", "" ], [ "Radke", "Daniel", "" ], [ "Cody", "Tyler", "" ], [ "Huang", "Lanxiao", "" ], [ "Bowen", "Edward", "" ] ]
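A staged MDP formulation like the one in the abstract above lends itself to standard dynamic programming. A toy value-iteration sketch on a 3-stage chain (purely illustrative; the states, transition probabilities, rewards, and discount are invented and do not reflect the paper's network model):

```python
import numpy as np

# Toy 3-stage chain: states 0..2 are C2 stages, state 3 is "data exfiltrated".
# Action 0 = advance a stage (succeeds w.p. 0.8, e.g. blocked by a firewall
# otherwise); action 1 = stay put. Completing the final hop earns reward 1.
P = np.zeros((4, 2, 4))                # P[s, a, s'] transition probabilities
for s in range(3):
    P[s, 0, s + 1] = 0.8               # advance succeeds
    P[s, 0, s] = 0.2                   # advance blocked
    P[s, 1, s] = 1.0                   # stay
P[3, :, 3] = 1.0                       # terminal state absorbs
R = np.zeros((4, 2))
R[2, 0] = 0.8                          # expected reward of the final hop
gamma = 0.9

V = np.zeros(4)
for _ in range(200):                   # value iteration to (near) convergence
    V = np.max(R + gamma * (P @ V), axis=1)
print(np.round(V, 3))
```

The optimal values increase along the chain, reflecting that states closer to exfiltration are worth more; the paper's agent learns an analogous value structure over a real network topology instead of this toy chain.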
1108.5466
Nabendu Chaki Dr.
Supriya Chakrabarty and Nabendu Chaki
Quality Evaluation of Conceptual Level Object Multidimensional Data Model
14 pages - accepted in June 2011 for publication in the International Journal of Computer Science & Information Technology (ISSN: 0975-3826)
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advancement of technology has facilitated explosive growth in mobile usage over the last decade, and numerous applications have been developed to support it. However, a technology gap exists in obtaining correct and trusted values for evaluation indexes of the precise amount of usage. Claims of revenue loss by service providers may well be due to unexpected behaviour of the hardware, and a similar mistrust is often observed among users of the services. Consumers demand a trustworthy subscription scheme, while service providers need their revenue to be assured. Multiple Authorizations by Multiple Owners (MAMO) has already been introduced as a technology to build trust in third-party billing systems. In this paper, MAMO is extended to ensure the trustworthiness of the subscription parameters. In addition, call transaction data are reconciled to assure proper revenue generation.
[ { "created": "Sat, 27 Aug 2011 17:37:50 GMT", "version": "v1" } ]
2011-08-30
[ [ "Chakrabarty", "Supriya", "" ], [ "Chaki", "Nabendu", "" ] ]
2202.06281
Piotr Koniusz
Yifei Zhang, Hao Zhu, Ziqiao Meng, Piotr Koniusz, Irwin King
Graph-adaptive Rectified Linear Unit for Graph Neural Networks
TheWebConf (WWW), 2022
null
10.1145/3485447.3512159
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data. The key to GNNs is adopting the neural message-passing paradigm with two stages: aggregation and update. The current design of GNNs considers the topology information in the aggregation stage. However, in the updating stage, all nodes share the same updating function. The identical updating function treats each node embedding as an i.i.d. random variable and thus ignores the implicit relationships between neighborhoods, which limits the capacity of the GNNs. The updating function is usually implemented with a linear transformation followed by a non-linear activation function. To make the updating function topology-aware, we inject the topological information into the non-linear activation function and propose the Graph-adaptive Rectified Linear Unit (GReLU), a new parametric activation function incorporating the neighborhood information in a novel and efficient way. The parameters of GReLU are obtained from a hyperfunction based on both the node features and the corresponding adjacency matrix. To reduce the risk of overfitting and the computational cost, we decompose the hyperfunction into two independent components for nodes and features respectively. We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
[ { "created": "Sun, 13 Feb 2022 10:54:59 GMT", "version": "v1" } ]
2022-02-15
[ [ "Zhang", "Yifei", "" ], [ "Zhu", "Hao", "" ], [ "Meng", "Ziqiao", "" ], [ "Koniusz", "Piotr", "" ], [ "King", "Irwin", "" ] ]
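A topology-aware activation of the general kind described in the GReLU abstract can be sketched as a rectifier whose per-node, per-feature slopes and thresholds come from a hyperfunction of aggregated neighbourhood features. A minimal NumPy sketch (not the published GReLU; the hyperfunction form, weight shapes, and threshold choice are assumptions):

```python
import numpy as np

def graph_adaptive_relu(X, A, Wa, Wb):
    """Rectify node features X (n, d) with slopes/thresholds that depend on
    the graph: H = A @ X aggregates neighbours, then a hyperfunction maps H
    to per-node, per-feature parameters, making the activation topology-aware."""
    H = A @ X                                    # neighbourhood aggregation
    alpha = 1.0 / (1.0 + np.exp(-(H @ Wa)))      # adaptive slopes in (0, 1)
    beta = np.tanh(H @ Wb)                       # adaptive thresholds
    # Leaky rectification around beta; continuous at X == beta.
    return np.where(X > beta, X, alpha * (X - beta) + beta)

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = (A + A.T) / 2 + np.eye(n)                    # symmetrised adjacency + self-loops
X = rng.normal(size=(n, d))
out = graph_adaptive_relu(X, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(out.shape)  # (5, 4)
```

Because `alpha` and `beta` vary per node through `A @ X`, nodes with different neighbourhoods are rectified differently, which is the contrast with a shared, topology-blind ReLU that the abstract draws.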
1506.04047
Seyed Rasoul Etesami
Seyed Rasoul Etesami, Tamer Basar
Approximation Algorithm for the Binary-Preference Capacitated Selfish Replication Game and a Tight Bound on its Price of Anarchy
null
null
null
null
cs.GT cs.DM cs.MA math.CO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the capacitated selfish replication (CSR) game with binary preferences over general undirected networks. We first show that such games have an associated ordinary potential function, and hence always admit a pure-strategy Nash equilibrium (NE). Further, when the minimum degree of the network and the number of resources are of the same order, there exists an exact polynomial-time algorithm which can find an NE. Following this, we study the price of anarchy of such games and show that it is bounded above by 3; we further provide some instances for which the price of anarchy is at least 2. We develop a quasi-polynomial algorithm with running time O(n^2 D^{ln n}), where n is the number of players and D is the diameter of the network, which can find, in a distributed manner, an allocation profile that is within a constant factor of the optimal allocation, and hence of any pure-strategy NE of the game. The proof of this result uses a novel potential function.
[ { "created": "Fri, 12 Jun 2015 15:43:38 GMT", "version": "v1" }, { "created": "Fri, 11 Mar 2016 05:55:02 GMT", "version": "v2" } ]
2016-03-14
[ [ "Etesami", "Seyed Rasoul", "" ], [ "Basar", "Tamer", "" ] ]
We consider the capacitated selfish replication (CSR) game with binary preferences over general undirected networks. We first show that such games have an associated ordinary potential function, and hence always admit a pure-strategy Nash equilibrium (NE). Further, when the minimum degree of the network and the number of resources are of the same order, there exists an exact polynomial-time algorithm which can find an NE. Following this, we study the price of anarchy of such games and show that it is bounded above by 3; we further provide some instances for which the price of anarchy is at least 2. We develop a quasi-polynomial algorithm with running time O(n^2 D^{ln n}), where n is the number of players and D is the diameter of the network, which can find, in a distributed manner, an allocation profile that is within a constant factor of the optimal allocation, and hence of any pure-strategy NE of the game. The proof of this result uses a novel potential function.
2210.15764
Noam Levi
Noam Levi, Itay M. Bloch, Marat Freytsis, Tomer Volansky
Noise Injection Node Regularization for Robust Learning
16 pages, 9 figures
Proceedings of the International Conference on Learning Representations (ICLR), 2023
null
null
cs.LG cond-mat.dis-nn cond-mat.stat-mech cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Noise Injection Node Regularization (NINR), a method of injecting structured noise into Deep Neural Networks (DNN) during the training stage, resulting in an emergent regularizing effect. We present theoretical and empirical evidence for substantial improvement in robustness against various test data perturbations for feed-forward DNNs when trained under NINR. The novelty in our approach comes from the interplay of adaptive noise injection and initialization conditions such that noise is the dominant driver of dynamics at the start of training. As it simply requires the addition of external nodes without altering the existing network structure or optimization algorithms, this method can be easily incorporated into many standard problem specifications. We find improved stability against a number of data perturbations, including domain shifts, with the most dramatic improvement obtained for unstructured noise, where our technique outperforms other existing methods such as Dropout or $L_2$ regularization, in some cases. We further show that desirable generalization properties on clean data are generally maintained.
[ { "created": "Thu, 27 Oct 2022 20:51:15 GMT", "version": "v1" } ]
2023-05-03
[ [ "Levi", "Noam", "" ], [ "Bloch", "Itay M.", "" ], [ "Freytsis", "Marat", "" ], [ "Volansky", "Tomer", "" ] ]
We introduce Noise Injection Node Regularization (NINR), a method of injecting structured noise into Deep Neural Networks (DNN) during the training stage, resulting in an emergent regularizing effect. We present theoretical and empirical evidence for substantial improvement in robustness against various test data perturbations for feed-forward DNNs when trained under NINR. The novelty in our approach comes from the interplay of adaptive noise injection and initialization conditions such that noise is the dominant driver of dynamics at the start of training. As it simply requires the addition of external nodes without altering the existing network structure or optimization algorithms, this method can be easily incorporated into many standard problem specifications. We find improved stability against a number of data perturbations, including domain shifts, with the most dramatic improvement obtained for unstructured noise, where our technique outperforms other existing methods such as Dropout or $L_2$ regularization, in some cases. We further show that desirable generalization properties on clean data are generally maintained.
2005.05086
Daniel Tang
Daniel Tang
Decentralised, privacy-preserving Bayesian inference for mobile phone contact tracing
null
null
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many countries are currently gearing up to use smartphone apps to perform contact tracing as part of the effort to manage the COVID-19 pandemic and prevent resurgences of the disease after the initial outbreak. With the announcement of the Apple/Google partnership to introduce contact-tracing functionality to iOS and Android, it seems likely that this will be adopted in many countries. An important part of the functionality of the app will be to decide whether a person should be advised to self-isolate, be tested or end isolation. However, the privacy-preserving nature of the Apple/Google contact-tracing algorithm means that centralised curation of these decisions is not possible, so each phone must use its own "risk model" to inform decisions. Ideally, the risk model should use Bayesian inference to decide the best course of action given the test results of the user and those of other users. Here we present a decentralised algorithm that estimates the Bayesian posterior probability of viral transmission events and evaluates when a user should be notified, tested or released from isolation while preserving user privacy. The algorithm also allows the disease models on the phones to learn from everyone's contact-tracing data and will allow epidemiologists to better understand the dynamics of the disease. The algorithm is a message-passing algorithm, based on belief propagation, so each smartphone can execute a small part of the algorithm without releasing any sensitive information. In this way, the network of all participating smartphones forms a distributed computation device that performs Bayesian inference, informs each user when they should start/end isolation or be tested, and learns about the disease from users' data.
[ { "created": "Mon, 11 May 2020 13:13:36 GMT", "version": "v1" } ]
2020-05-12
[ [ "Tang", "Daniel", "" ] ]
Many countries are currently gearing up to use smartphone apps to perform contact tracing as part of the effort to manage the COVID-19 pandemic and prevent resurgences of the disease after the initial outbreak. With the announcement of the Apple/Google partnership to introduce contact-tracing functionality to iOS and Android, it seems likely that this will be adopted in many countries. An important part of the functionality of the app will be to decide whether a person should be advised to self-isolate, be tested or end isolation. However, the privacy-preserving nature of the Apple/Google contact-tracing algorithm means that centralised curation of these decisions is not possible, so each phone must use its own "risk model" to inform decisions. Ideally, the risk model should use Bayesian inference to decide the best course of action given the test results of the user and those of other users. Here we present a decentralised algorithm that estimates the Bayesian posterior probability of viral transmission events and evaluates when a user should be notified, tested or released from isolation while preserving user privacy. The algorithm also allows the disease models on the phones to learn from everyone's contact-tracing data and will allow epidemiologists to better understand the dynamics of the disease. The algorithm is a message-passing algorithm, based on belief propagation, so each smartphone can execute a small part of the algorithm without releasing any sensitive information. In this way, the network of all participating smartphones forms a distributed computation device that performs Bayesian inference, informs each user when they should start/end isolation or be tested, and learns about the disease from users' data.
cs/0511062
Liping Lu
Gerd Bumiller, Liping Lu (INRIA Lorraine - LORIA), Yeqiong Song (INRIA Lorraine - LORIA)
Analytic performance comparison of routing protocols in master-slave PLC networks
null
null
null
null
cs.NI
null
In a wide-area master-slave PLC (powerline communication) system, the source node cannot reach the destination node without packet relay. Due to the time-variable attenuation in the powerline, the communication distance cannot be defined. Two kinds of dynamic repeater algorithms have been developed: dynamic source routing and flooding-based routing. In this paper, we use an analytic approach to compare the performance of these two routing protocols. We give formulas to calculate the average duration of a polling cycle for each protocol. We then present simulation results to bolster the results of our analysis. We use three metrics, namely the bandwidth consumed by routing signaling, the normalized routing load, and the average duration of a polling cycle, to evaluate these routing protocols.
[ { "created": "Wed, 16 Nov 2005 16:24:23 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bumiller", "Gerd", "", "INRIA Lorraine - LORIA" ], [ "Lu", "Liping", "", "INRIA Lorraine - LORIA" ], [ "Song", "Yeqiong", "", "INRIA Lorraine - LORIA" ] ]
In a wide-area master-slave PLC (powerline communication) system, the source node cannot reach the destination node without packet relay. Due to the time-variable attenuation in the powerline, the communication distance cannot be defined. Two kinds of dynamic repeater algorithms have been developed: dynamic source routing and flooding-based routing. In this paper, we use an analytic approach to compare the performance of these two routing protocols. We give formulas to calculate the average duration of a polling cycle for each protocol. We then present simulation results to bolster the results of our analysis. We use three metrics, namely the bandwidth consumed by routing signaling, the normalized routing load, and the average duration of a polling cycle, to evaluate these routing protocols.
2107.09937
Huimin Wu
Huimin Wu and Zhengmian Hu and Bin Gu
Fast and Scalable Adversarial Training of Kernel SVM via Doubly Stochastic Gradients
null
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial attacks, which generate examples that are almost indistinguishable from natural ones, pose a serious threat to learning models. Defending against adversarial attacks is a critical element of a reliable learning system. The support vector machine (SVM) is a classical yet still important learning algorithm, even in the current deep learning era. Although a wide range of research has been done in recent years to improve the adversarial robustness of learning models, most of it is limited to deep neural networks (DNNs), and work on kernel SVMs is still vacant. In this paper, we focus on kernel SVMs and propose adv-SVM to improve their adversarial robustness via adversarial training, which has been demonstrated to be one of the most promising defense techniques. To the best of our knowledge, this is the first work devoted to the fast and scalable adversarial training of kernel SVMs. Specifically, we first build a connection between perturbations of samples in the original and kernel spaces, and then give a reduced and equivalent formulation of adversarial training of kernel SVM based on this connection. Next, doubly stochastic gradients (DSG) based on two unbiased stochastic approximations (i.e., one on training points and another on random features) are applied to update the solution of our objective function. Finally, we prove that our algorithm optimized by DSG converges to the optimal solution at a rate of O(1/t) under both constant and diminishing stepsizes. Comprehensive experimental results show that our adversarial training algorithm enjoys robustness against various attacks while having similar efficiency and scalability to the classical DSG algorithm.
[ { "created": "Wed, 21 Jul 2021 08:15:32 GMT", "version": "v1" } ]
2021-07-22
[ [ "Wu", "Huimin", "" ], [ "Hu", "Zhengmian", "" ], [ "Gu", "Bin", "" ] ]
Adversarial attacks, which generate examples that are almost indistinguishable from natural ones, pose a serious threat to learning models. Defending against adversarial attacks is a critical element of a reliable learning system. The support vector machine (SVM) is a classical yet still important learning algorithm, even in the current deep learning era. Although a wide range of research has been done in recent years to improve the adversarial robustness of learning models, most of it is limited to deep neural networks (DNNs), and work on kernel SVMs is still vacant. In this paper, we focus on kernel SVMs and propose adv-SVM to improve their adversarial robustness via adversarial training, which has been demonstrated to be one of the most promising defense techniques. To the best of our knowledge, this is the first work devoted to the fast and scalable adversarial training of kernel SVMs. Specifically, we first build a connection between perturbations of samples in the original and kernel spaces, and then give a reduced and equivalent formulation of adversarial training of kernel SVM based on this connection. Next, doubly stochastic gradients (DSG) based on two unbiased stochastic approximations (i.e., one on training points and another on random features) are applied to update the solution of our objective function. Finally, we prove that our algorithm optimized by DSG converges to the optimal solution at a rate of O(1/t) under both constant and diminishing stepsizes. Comprehensive experimental results show that our adversarial training algorithm enjoys robustness against various attacks while having similar efficiency and scalability to the classical DSG algorithm.
2311.09613
Yuling Gu
Yuling Gu, Oyvind Tafjord, Peter Clark
Digital Socrates: Evaluating LLMs through Explanation Critiques
ACL 2024
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
While LLMs can provide reasoned explanations along with their answers, the nature and quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the explanation capabilities of modern models and to create a nuanced, interpretable explanation evaluation tool that can generate such characterizations automatically, without relying on expensive API calls or human annotations. Our approach is to (a) define the new task of explanation critiquing - identifying and categorizing any main flaw in an explanation and providing suggestions to address the flaw, (b) create a sizeable, human-verified dataset for this task, and (c) train an open-source, automatic critique model (called Digital Socrates) using this data. Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for revealing insights about student models by examining their reasoning chains, and how it can provide high-quality, nuanced, automatic evaluation of those model explanations for the first time. Digital Socrates thus fills an important gap in evaluation tools for understanding and improving the explanation behavior of models.
[ { "created": "Thu, 16 Nov 2023 06:51:46 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 08:49:07 GMT", "version": "v2" }, { "created": "Sun, 11 Aug 2024 05:46:15 GMT", "version": "v3" } ]
2024-08-13
[ [ "Gu", "Yuling", "" ], [ "Tafjord", "Oyvind", "" ], [ "Clark", "Peter", "" ] ]
While LLMs can provide reasoned explanations along with their answers, the nature and quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the explanation capabilities of modern models and to create a nuanced, interpretable explanation evaluation tool that can generate such characterizations automatically, without relying on expensive API calls or human annotations. Our approach is to (a) define the new task of explanation critiquing - identifying and categorizing any main flaw in an explanation and providing suggestions to address the flaw, (b) create a sizeable, human-verified dataset for this task, and (c) train an open-source, automatic critique model (called Digital Socrates) using this data. Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for revealing insights about student models by examining their reasoning chains, and how it can provide high-quality, nuanced, automatic evaluation of those model explanations for the first time. Digital Socrates thus fills an important gap in evaluation tools for understanding and improving the explanation behavior of models.
2407.07038
Ruiran Su
Ruiran Su, Janet B. Pierrehumbert
Decoding Climate Disagreement: A Graph Neural Network-Based Approach to Understanding Social Media Dynamics
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This work introduces the ClimateSent-GAT Model, an innovative method that integrates Graph Attention Networks (GATs) with techniques from natural language processing to accurately identify and predict disagreements within Reddit comment-reply pairs. Our model classifies disagreements into three categories: agree, disagree, and neutral. Leveraging the inherent graph structure of Reddit comment-reply pairs, the model significantly outperforms existing benchmarks by capturing complex interaction patterns and sentiment dynamics. This research advances graph-based NLP methodologies and provides actionable insights for policymakers and educators in climate science communication.
[ { "created": "Tue, 9 Jul 2024 17:00:39 GMT", "version": "v1" } ]
2024-07-10
[ [ "Su", "Ruiran", "" ], [ "Pierrehumbert", "Janet B.", "" ] ]
This work introduces the ClimateSent-GAT Model, an innovative method that integrates Graph Attention Networks (GATs) with techniques from natural language processing to accurately identify and predict disagreements within Reddit comment-reply pairs. Our model classifies disagreements into three categories: agree, disagree, and neutral. Leveraging the inherent graph structure of Reddit comment-reply pairs, the model significantly outperforms existing benchmarks by capturing complex interaction patterns and sentiment dynamics. This research advances graph-based NLP methodologies and provides actionable insights for policymakers and educators in climate science communication.
1607.07472
Liang He
Liang He and Jia Pan and Dinesh Manocha
Efficient Multi-Agent Global Navigation Using Interpolating Bridges
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach for collision-free global navigation for continuous-time multi-agent systems with general linear dynamics. Our approach is general and can be used to perform collision-free navigation in 2D and 3D workspaces with narrow passages and crowded regions. As part of pre-computation, we compute multiple bridges in the narrow or tight regions in the workspace using kinodynamic RRT algorithms. Our bridges have certain geometric characteristics that enable us to calculate a collision-free trajectory for each agent using simple interpolation at runtime. Moreover, we combine interpolated bridge trajectories with local multi-agent navigation algorithms to compute global collision-free paths for each agent. The overall approach combines the performance benefits of coupled multi-agent algorithms with the pre-computed trajectories of the bridges to handle challenging scenarios. In practice, our approach can handle tens to hundreds of agents in real-time on a single CPU core in 2D and 3D workspaces.
[ { "created": "Mon, 25 Jul 2016 20:50:47 GMT", "version": "v1" } ]
2016-10-03
[ [ "He", "Liang", "" ], [ "Pan", "Jia", "" ], [ "Manocha", "Dinesh", "" ] ]
We present a novel approach for collision-free global navigation for continuous-time multi-agent systems with general linear dynamics. Our approach is general and can be used to perform collision-free navigation in 2D and 3D workspaces with narrow passages and crowded regions. As part of pre-computation, we compute multiple bridges in the narrow or tight regions in the workspace using kinodynamic RRT algorithms. Our bridges have certain geometric characteristics that enable us to calculate a collision-free trajectory for each agent using simple interpolation at runtime. Moreover, we combine interpolated bridge trajectories with local multi-agent navigation algorithms to compute global collision-free paths for each agent. The overall approach combines the performance benefits of coupled multi-agent algorithms with the pre-computed trajectories of the bridges to handle challenging scenarios. In practice, our approach can handle tens to hundreds of agents in real-time on a single CPU core in 2D and 3D workspaces.
2211.02533
Wenting Ye
Wenting Ye, Hongfei Yang, Shuai Zhao, Haoyang Fang, Xingjian Shi, Naveen Neppalli
A Transformer-Based Substitute Recommendation Model Incorporating Weakly Supervised Customer Behavior Data
6 pages, 3 figures, 5 tables, accepted in 2023 SIGIR Industry track (SIGIR Symposium on IR in Practice, SIRIP)
null
null
null
cs.IR cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Substitute-based recommendation is widely used in E-commerce to provide better alternatives to customers. However, existing research typically uses customer behavior signals like co-view and view-but-purchase-another to capture the substitute relationship. Despite its intuitive soundness, we find that such an approach might ignore the functionality and characteristics of products. In this paper, we recast substitute recommendation as a language matching problem by taking the product title description as model input to consider product functionality. We design a new transformation method to de-noise the signals derived from production data. In addition, we consider multilingual support from an engineering point of view. Our proposed end-to-end transformer-based model achieves success in both offline and online experiments. The proposed model has been deployed on a large-scale E-commerce website for 11 marketplaces in 6 languages, and is demonstrated to increase revenue by 19% based on an online A/B experiment.
[ { "created": "Fri, 4 Nov 2022 15:57:19 GMT", "version": "v1" }, { "created": "Sat, 8 Apr 2023 15:27:17 GMT", "version": "v2" } ]
2023-04-11
[ [ "Ye", "Wenting", "" ], [ "Yang", "Hongfei", "" ], [ "Zhao", "Shuai", "" ], [ "Fang", "Haoyang", "" ], [ "Shi", "Xingjian", "" ], [ "Neppalli", "Naveen", "" ] ]
Substitute-based recommendation is widely used in E-commerce to provide better alternatives to customers. However, existing research typically uses customer behavior signals like co-view and view-but-purchase-another to capture the substitute relationship. Despite its intuitive soundness, we find that such an approach might ignore the functionality and characteristics of products. In this paper, we recast substitute recommendation as a language matching problem by taking the product title description as model input to consider product functionality. We design a new transformation method to de-noise the signals derived from production data. In addition, we consider multilingual support from an engineering point of view. Our proposed end-to-end transformer-based model achieves success in both offline and online experiments. The proposed model has been deployed on a large-scale E-commerce website for 11 marketplaces in 6 languages, and is demonstrated to increase revenue by 19% based on an online A/B experiment.
2301.03282
Jie Lou
Jie Lou, Yaojie Lu, Dai Dai, Wei Jia, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu
Universal Information Extraction as Unified Semantic Matching
accepted by AAAI2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The challenge of information extraction (IE) lies in the diversity of label schemas and the heterogeneity of structures. Traditional methods require task-specific model design and rely heavily on expensive supervision, making them difficult to generalize to new schemas. In this paper, we decouple IE into two basic abilities, structuring and conceptualizing, which are shared by different tasks and schemas. Based on this paradigm, we propose to universally model various IE tasks with the Unified Semantic Matching (USM) framework, which introduces three unified token-linking operations to model the abilities of structuring and conceptualizing. In this way, USM can jointly encode schema and input text, uniformly extract substructures in parallel, and controllably decode target structures on demand. Empirical evaluation on 4 IE tasks shows that the proposed method achieves state-of-the-art performance in supervised experiments and shows strong generalization ability in zero/few-shot transfer settings.
[ { "created": "Mon, 9 Jan 2023 11:51:31 GMT", "version": "v1" } ]
2023-01-10
[ [ "Lou", "Jie", "" ], [ "Lu", "Yaojie", "" ], [ "Dai", "Dai", "" ], [ "Jia", "Wei", "" ], [ "Lin", "Hongyu", "" ], [ "Han", "Xianpei", "" ], [ "Sun", "Le", "" ], [ "Wu", "Hua", "" ] ]
The challenge of information extraction (IE) lies in the diversity of label schemas and the heterogeneity of structures. Traditional methods require task-specific model design and rely heavily on expensive supervision, making them difficult to generalize to new schemas. In this paper, we decouple IE into two basic abilities, structuring and conceptualizing, which are shared by different tasks and schemas. Based on this paradigm, we propose to universally model various IE tasks with the Unified Semantic Matching (USM) framework, which introduces three unified token-linking operations to model the abilities of structuring and conceptualizing. In this way, USM can jointly encode schema and input text, uniformly extract substructures in parallel, and controllably decode target structures on demand. Empirical evaluation on 4 IE tasks shows that the proposed method achieves state-of-the-art performance in supervised experiments and shows strong generalization ability in zero/few-shot transfer settings.
2310.15080
Ji Liu
Tianshi Che, Ji Liu, Yang Zhou, Jiaxiang Ren, Jiwen Zhou, Victor S. Sheng, Huaiyu Dai, Dejing Dou
Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization
18 pages, accepted by EMNLP 2023
null
null
null
cs.LG cs.CL cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data. However, training Large Language Models (LLMs) generally requires updating a significant number of parameters, which limits the applicability of FL techniques to LLMs in real scenarios. Prompt tuning can significantly reduce the number of parameters to update, but it incurs either performance degradation or low training efficiency. Straightforward utilization of prompt tuning in FL often raises non-trivial communication costs and dramatically degrades performance. In addition, decentralized data is generally non-Independent and Identically Distributed (non-IID), which brings client drift problems and thus poor performance. This paper proposes a Parameter-efficient prompt Tuning approach with Adaptive Optimization, i.e., FedPepTAO, to enable efficient and effective FL of LLMs. First, an efficient partial prompt tuning approach is proposed to improve performance and efficiency simultaneously. Second, a novel adaptive optimization method is developed to address the client drift problems on both the device and server sides to further enhance performance. Extensive experiments based on 10 datasets demonstrate the superb performance (up to 60.8\% in terms of accuracy) and efficiency (up to 97.59\% in terms of training time) of FedPepTAO compared with 9 baseline approaches. Our code is available at https://github.com/llm-eff/FedPepTAO.
[ { "created": "Mon, 23 Oct 2023 16:37:59 GMT", "version": "v1" }, { "created": "Sun, 29 Oct 2023 07:17:45 GMT", "version": "v2" }, { "created": "Sun, 11 Feb 2024 11:59:52 GMT", "version": "v3" } ]
2024-02-13
[ [ "Che", "Tianshi", "" ], [ "Liu", "Ji", "" ], [ "Zhou", "Yang", "" ], [ "Ren", "Jiaxiang", "" ], [ "Zhou", "Jiwen", "" ], [ "Sheng", "Victor S.", "" ], [ "Dai", "Huaiyu", "" ], [ "Dou", "Dejing", "" ] ]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data. However, training Large Language Models (LLMs) generally requires updating a significant number of parameters, which limits the applicability of FL techniques to LLMs in real scenarios. Prompt tuning can significantly reduce the number of parameters to update, but it incurs either performance degradation or low training efficiency. Straightforward utilization of prompt tuning in FL often raises non-trivial communication costs and dramatically degrades performance. In addition, decentralized data is generally non-Independent and Identically Distributed (non-IID), which brings client drift problems and thus poor performance. This paper proposes a Parameter-efficient prompt Tuning approach with Adaptive Optimization, i.e., FedPepTAO, to enable efficient and effective FL of LLMs. First, an efficient partial prompt tuning approach is proposed to improve performance and efficiency simultaneously. Second, a novel adaptive optimization method is developed to address the client drift problems on both the device and server sides to further enhance performance. Extensive experiments based on 10 datasets demonstrate the superb performance (up to 60.8\% in terms of accuracy) and efficiency (up to 97.59\% in terms of training time) of FedPepTAO compared with 9 baseline approaches. Our code is available at https://github.com/llm-eff/FedPepTAO.
2303.13518
Relja Arandjelovi\'c
Relja Arandjelovi\'c, Alex Andonian, Arthur Mensch, Olivier J. H\'enaff, Jean-Baptiste Alayrac, Andrew Zisserman
Three ways to improve feature alignment for open vocabulary detection
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The core problem in zero-shot open vocabulary detection is how to align visual and text features, so that the detector performs well on unseen classes. Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining, and struggles to prevent the language model from forgetting unseen classes. We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training, while simultaneously saving memory and computation. Secondly, the feature pyramid network and the detection head are modified to include trainable gated shortcuts, which encourages vision-text feature alignment and guarantees it at the start of detection training. Finally, a self-training approach is used to leverage a larger corpus of image-text pairs thus improving detection performance on classes with no human annotated bounding boxes. Our three methods are evaluated on the zero-shot version of the LVIS benchmark, each of them showing clear and significant benefits. Our final network achieves the new state-of-the-art on the mAP-all metric and demonstrates competitive performance for mAP-rare, as well as superior transfer to COCO and Objects365.
[ { "created": "Thu, 23 Mar 2023 17:59:53 GMT", "version": "v1" } ]
2023-03-24
[ [ "Arandjelović", "Relja", "" ], [ "Andonian", "Alex", "" ], [ "Mensch", "Arthur", "" ], [ "Hénaff", "Olivier J.", "" ], [ "Alayrac", "Jean-Baptiste", "" ], [ "Zisserman", "Andrew", "" ] ]
The core problem in zero-shot open vocabulary detection is how to align visual and text features, so that the detector performs well on unseen classes. Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining, and struggles to prevent the language model from forgetting unseen classes. We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training, while simultaneously saving memory and computation. Secondly, the feature pyramid network and the detection head are modified to include trainable gated shortcuts, which encourages vision-text feature alignment and guarantees it at the start of detection training. Finally, a self-training approach is used to leverage a larger corpus of image-text pairs thus improving detection performance on classes with no human-annotated bounding boxes. Our three methods are evaluated on the zero-shot version of the LVIS benchmark, each of them showing clear and significant benefits. Our final network achieves the new state-of-the-art on the mAP-all metric and demonstrates competitive performance for mAP-rare, as well as superior transfer to COCO and Objects365.
1607.08421
Anita Sellent
Anita Sellent, Carsten Rother and Stefan Roth
Stereo Video Deblurring
Accepted to the 14th European Conference on Computer Vision (ECCV 2016). Includes supplemental material
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Videos acquired in low-light conditions often exhibit motion blur, which depends on the motion of the objects relative to the camera. This is not only visually unpleasing, but can hamper further processing. With this paper we are the first to show how the availability of stereo video can aid the challenging video deblurring task. We leverage 3D scene flow, which can be estimated robustly even under adverse conditions. We go beyond simply determining the object motion in two ways: First, we show how a piecewise rigid 3D scene flow representation allows to induce accurate blur kernels via local homographies. Second, we exploit the estimated motion boundaries of the 3D scene flow to mitigate ringing artifacts using an iterative weighting scheme. Being aware of 3D object motion, our approach can deal robustly with an arbitrary number of independently moving objects. We demonstrate its benefit over state-of-the-art video deblurring using quantitative and qualitative experiments on rendered scenes and real videos.
[ { "created": "Thu, 28 Jul 2016 12:13:10 GMT", "version": "v1" } ]
2016-07-29
[ [ "Sellent", "Anita", "" ], [ "Rother", "Carsten", "" ], [ "Roth", "Stefan", "" ] ]
Videos acquired in low-light conditions often exhibit motion blur, which depends on the motion of the objects relative to the camera. This is not only visually unpleasing, but can hamper further processing. With this paper we are the first to show how the availability of stereo video can aid the challenging video deblurring task. We leverage 3D scene flow, which can be estimated robustly even under adverse conditions. We go beyond simply determining the object motion in two ways: First, we show how a piecewise rigid 3D scene flow representation allows to induce accurate blur kernels via local homographies. Second, we exploit the estimated motion boundaries of the 3D scene flow to mitigate ringing artifacts using an iterative weighting scheme. Being aware of 3D object motion, our approach can deal robustly with an arbitrary number of independently moving objects. We demonstrate its benefit over state-of-the-art video deblurring using quantitative and qualitative experiments on rendered scenes and real videos.
1901.00101
Jinwook Huh
Jinwook Huh, Omur Arslan, Daniel D. Lee
Probabilistically Safe Corridors to Guide Sampling-Based Motion Planning
10 pages
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a new probabilistically safe local steering primitive for sampling-based motion planning in complex high-dimensional configuration spaces. Our local steering procedure is based on a new notion of a convex probabilistically safe corridor that is constructed around a configuration using tangent hyperplanes of confidence ellipsoids of Gaussian mixture models learned from prior collision history. Accordingly, we propose to expand a random motion planning graph towards a sample goal using its projection onto probabilistically safe corridors, which efficiently exploits the local geometry of configuration spaces for selecting a proper steering direction and adapting the steering stepsize. We observe that the proposed local steering procedure generates effective steering motion around difficult regions of configuration spaces, such as narrow passages, while minimizing collision likelihood. We evaluate the proposed steering method with randomized motion planners in a number of planning scenarios, both in simulation and on a physical 7DoF robot arm, demonstrating the effectiveness of our safety guided local planner over the standard straight-line planner.
[ { "created": "Tue, 1 Jan 2019 05:59:19 GMT", "version": "v1" } ]
2019-01-03
[ [ "Huh", "Jinwook", "" ], [ "Arslan", "Omur", "" ], [ "Lee", "Daniel D.", "" ] ]
In this paper, we introduce a new probabilistically safe local steering primitive for sampling-based motion planning in complex high-dimensional configuration spaces. Our local steering procedure is based on a new notion of a convex probabilistically safe corridor that is constructed around a configuration using tangent hyperplanes of confidence ellipsoids of Gaussian mixture models learned from prior collision history. Accordingly, we propose to expand a random motion planning graph towards a sample goal using its projection onto probabilistically safe corridors, which efficiently exploits the local geometry of configuration spaces for selecting a proper steering direction and adapting the steering stepsize. We observe that the proposed local steering procedure generates effective steering motion around difficult regions of configuration spaces, such as narrow passages, while minimizing collision likelihood. We evaluate the proposed steering method with randomized motion planners in a number of planning scenarios, both in simulation and on a physical 7DoF robot arm, demonstrating the effectiveness of our safety guided local planner over the standard straight-line planner.
2407.00131
Shuang Wang
Xian Wu, Qingchuan Tao and Shuang Wang
RepAct: The Re-parameterizable Adaptive Activation Function
null
null
null
null
cs.LG cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Addressing the imperative need for efficient artificial intelligence in IoT and edge computing, this study presents RepAct, a re-parameterizable adaptive activation function tailored for optimizing lightweight neural networks within the computational limitations of edge devices. By employing a multi-branch structure with learnable adaptive weights, RepAct enriches feature processing and enhances cross-layer interpretability. When evaluated on tasks such as image classification and object detection, RepAct notably surpassed conventional activation functions in lightweight networks, delivering up to a 7.92% accuracy boost on MobileNetV3-Small for the ImageNet100 dataset, while maintaining computational complexity on par with HardSwish. This innovative approach not only maximizes model parameter efficiency but also significantly improves the performance and understanding capabilities of lightweight neural networks, demonstrating its potential for real-time edge computing applications.
[ { "created": "Fri, 28 Jun 2024 08:25:45 GMT", "version": "v1" } ]
2024-07-02
[ [ "Wu", "Xian", "" ], [ "Tao", "Qingchuan", "" ], [ "Wang", "Shuang", "" ] ]
Addressing the imperative need for efficient artificial intelligence in IoT and edge computing, this study presents RepAct, a re-parameterizable adaptive activation function tailored for optimizing lightweight neural networks within the computational limitations of edge devices. By employing a multi-branch structure with learnable adaptive weights, RepAct enriches feature processing and enhances cross-layer interpretability. When evaluated on tasks such as image classification and object detection, RepAct notably surpassed conventional activation functions in lightweight networks, delivering up to a 7.92% accuracy boost on MobileNetV3-Small for the ImageNet100 dataset, while maintaining computational complexity on par with HardSwish. This innovative approach not only maximizes model parameter efficiency but also significantly improves the performance and understanding capabilities of lightweight neural networks, demonstrating its potential for real-time edge computing applications.
2205.14077
Daniel Reynolds
Daniel R. Reynolds and David J. Gardner and Carol S. Woodward and Rujeko Chinomona
ARKODE: a flexible IVP solver infrastructure for one-step methods
null
ACM Transactions on Mathematical Software, Volume 49, Issue 2, June 2023, Article No.: 19
10.1145/3594632
null
cs.MS cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the ARKODE library of one-step time integration methods for ordinary differential equation (ODE) initial-value problems (IVPs). In addition to providing standard explicit and diagonally implicit Runge--Kutta methods, ARKODE also supports one-step methods designed to treat additive splittings of the IVP, including implicit-explicit (ImEx) additive Runge--Kutta methods and multirate infinitesimal (MRI) methods. We present the role of ARKODE within the SUNDIALS suite of time integration and nonlinear solver libraries, the core ARKODE infrastructure for utilities common to large classes of one-step methods, as well as its use of ``time stepper'' modules enabling easy incorporation of novel algorithms into the library. Numerical results show example problems of increasing complexity, highlighting the algorithmic flexibility afforded through this infrastructure, and include a larger multiphysics application leveraging multiple algorithmic features from ARKODE and SUNDIALS.
[ { "created": "Fri, 27 May 2022 16:16:19 GMT", "version": "v1" }, { "created": "Wed, 21 Dec 2022 21:50:25 GMT", "version": "v2" } ]
2024-03-19
[ [ "Reynolds", "Daniel R.", "" ], [ "Gardner", "David J.", "" ], [ "Woodward", "Carol S.", "" ], [ "Chinomona", "Rujeko", "" ] ]
We describe the ARKODE library of one-step time integration methods for ordinary differential equation (ODE) initial-value problems (IVPs). In addition to providing standard explicit and diagonally implicit Runge--Kutta methods, ARKODE also supports one-step methods designed to treat additive splittings of the IVP, including implicit-explicit (ImEx) additive Runge--Kutta methods and multirate infinitesimal (MRI) methods. We present the role of ARKODE within the SUNDIALS suite of time integration and nonlinear solver libraries, the core ARKODE infrastructure for utilities common to large classes of one-step methods, as well as its use of ``time stepper'' modules enabling easy incorporation of novel algorithms into the library. Numerical results show example problems of increasing complexity, highlighting the algorithmic flexibility afforded through this infrastructure, and include a larger multiphysics application leveraging multiple algorithmic features from ARKODE and SUNDIALS.
1610.03525
Yann Thorimbert
Yann Thorimbert, Bastien Chopard
Polynomial methods for Procedural Terrain Generation
27 pages, 15 figures
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new method is presented, allowing for the generation of 3D terrain and texture from coherent noise. The method is significantly faster than prevailing fractal Brownian motion approaches, while producing results of equivalent quality. The algorithm is derived through a systematic approach that generalizes to an arbitrary number of spatial dimensions and gradient smoothness. The results are compared, in terms of performance and quality, to fundamental and efficient gradient noise methods widely used in the domain of fast terrain generation: Perlin noise and OpenSimplex noise. Finally, to objectively quantify the degree of realism of the results, a fractal analysis of the generated landscapes is performed and compared to real terrain data.
[ { "created": "Tue, 11 Oct 2016 20:51:48 GMT", "version": "v1" }, { "created": "Wed, 2 Nov 2016 13:47:38 GMT", "version": "v2" }, { "created": "Sun, 7 Oct 2018 22:39:53 GMT", "version": "v3" }, { "created": "Wed, 5 Dec 2018 16:32:14 GMT", "version": "v4" } ]
2018-12-06
[ [ "Thorimbert", "Yann", "" ], [ "Chopard", "Bastien", "" ] ]
A new method is presented, allowing for the generation of 3D terrain and texture from coherent noise. The method is significantly faster than prevailing fractal Brownian motion approaches, while producing results of equivalent quality. The algorithm is derived through a systematic approach that generalizes to an arbitrary number of spatial dimensions and gradient smoothness. The results are compared, in terms of performance and quality, to fundamental and efficient gradient noise methods widely used in the domain of fast terrain generation: Perlin noise and OpenSimplex noise. Finally, to objectively quantify the degree of realism of the results, a fractal analysis of the generated landscapes is performed and compared to real terrain data.
2307.11194
Lindsey Kuper
Patrick Redmond, Lindsey Kuper
An Exceptional Actor System (Functional Pearl)
To appear at Haskell Symposium 2023
null
10.1145/3609026.3609728
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Glasgow Haskell Compiler is known for its feature-laden runtime system (RTS), which includes lightweight threads, asynchronous exceptions, and a slew of other features. Their combination is powerful enough that a programmer may complete the same task in many different ways -- some more advisable than others. We present a user-accessible actor framework hidden in plain sight within the RTS and demonstrate it on a classic example from the distributed systems literature. We then extend both the framework and example to the realm of dynamic types. Finally, we raise questions about how RTS features intersect and possibly subsume one another, and suggest that GHC can guide good practice by constraining the use of some features.
[ { "created": "Thu, 20 Jul 2023 19:11:54 GMT", "version": "v1" } ]
2023-07-24
[ [ "Redmond", "Patrick", "" ], [ "Kuper", "Lindsey", "" ] ]
The Glasgow Haskell Compiler is known for its feature-laden runtime system (RTS), which includes lightweight threads, asynchronous exceptions, and a slew of other features. Their combination is powerful enough that a programmer may complete the same task in many different ways -- some more advisable than others. We present a user-accessible actor framework hidden in plain sight within the RTS and demonstrate it on a classic example from the distributed systems literature. We then extend both the framework and example to the realm of dynamic types. Finally, we raise questions about how RTS features intersect and possibly subsume one another, and suggest that GHC can guide good practice by constraining the use of some features.
1502.05696
Nihar Shah
Nihar B. Shah, Dengyong Zhou, Yuval Peres
Approval Voting and Incentives in Crowdsourcing
null
null
null
null
cs.GT cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
[ { "created": "Thu, 19 Feb 2015 20:42:55 GMT", "version": "v1" }, { "created": "Tue, 19 May 2015 09:12:50 GMT", "version": "v2" }, { "created": "Mon, 7 Sep 2015 05:21:06 GMT", "version": "v3" } ]
2015-09-08
[ [ "Shah", "Nihar B.", "" ], [ "Zhou", "Dengyong", "" ], [ "Peres", "Yuval", "" ] ]
The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
2407.05064
Evgenii Vinogradov A
Dmitrii Belimov and Evgenii Vinogradov
Reverse Engineered MiniFS File System
The 19th International Conference on Availability, Reliability and Security (ARES 2024), July 30-August 2, 2024, Vienna, Austria
null
10.1145/3664476.3664511
null
cs.CR cs.NI
http://creativecommons.org/licenses/by/4.0/
In an era where digital connectivity is increasingly foundational to daily life, the security of Wi-Fi Access Points (APs) is a critical concern. This paper addresses the vulnerabilities inherent in Wi-Fi APs, with a particular focus on those using proprietary file systems like MiniFS found in TP-Link's AC1900 WiFi router. Through reverse engineering, we unravel the structure and operation of MiniFS, marking a significant advancement in our understanding of this previously opaque file system. Our investigation reveals not only the architecture of MiniFS but also identifies several private keys and underscores a concerning lack of cryptographic protection. These findings point to broader security vulnerabilities, emphasizing the risks of security-by-obscurity practices in an interconnected environment. Our contributions are twofold: firstly, based on the file system structure, we develop a methodology for the extraction and analysis of MiniFS, facilitating the identification and mitigation of potential vulnerabilities. Secondly, our work lays the groundwork for further research into WiFi APs' security, particularly those running on similar proprietary systems. By highlighting the critical need for transparency and community engagement in firmware analysis, this study contributes to the development of more secure network devices, thus enhancing the overall security posture of digital infrastructures.
[ { "created": "Sat, 6 Jul 2024 12:49:37 GMT", "version": "v1" } ]
2024-07-09
[ [ "Belimov", "Dmitrii", "" ], [ "Vinogradov", "Evgenii", "" ] ]
In an era where digital connectivity is increasingly foundational to daily life, the security of Wi-Fi Access Points (APs) is a critical concern. This paper addresses the vulnerabilities inherent in Wi-Fi APs, with a particular focus on those using proprietary file systems like MiniFS found in TP-Link's AC1900 WiFi router. Through reverse engineering, we unravel the structure and operation of MiniFS, marking a significant advancement in our understanding of this previously opaque file system. Our investigation reveals not only the architecture of MiniFS but also identifies several private keys and underscores a concerning lack of cryptographic protection. These findings point to broader security vulnerabilities, emphasizing the risks of security-by-obscurity practices in an interconnected environment. Our contributions are twofold: firstly, based on the file system structure, we develop a methodology for the extraction and analysis of MiniFS, facilitating the identification and mitigation of potential vulnerabilities. Secondly, our work lays the groundwork for further research into WiFi APs' security, particularly those running on similar proprietary systems. By highlighting the critical need for transparency and community engagement in firmware analysis, this study contributes to the development of more secure network devices, thus enhancing the overall security posture of digital infrastructures.
2307.02447
Timon B\"ohler
Timon B\"ohler, David Richter, Mira Mezini
Using Rewrite Strategies for Efficient Functional Automatic Differentiation
to be published in FTfJP 2023
null
10.1145/3605156.3606456
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
Automatic Differentiation (AD) has become a dominant technique in ML. AD frameworks have first been implemented for imperative languages using tapes. Meanwhile, functional implementations of AD have been developed, often based on dual numbers, which are close to the formal specification of differentiation and hence easier to prove correct. But these papers have focussed on correctness not efficiency. Recently, it was shown how an approach using dual numbers could be made efficient through the right optimizations. Optimizations are highly dependent on order, as one optimization can enable another. It can therefore be useful to have fine-grained control over the scheduling of optimizations. One method expresses compiler optimizations as rewrite rules, whose application can be combined and controlled using strategy languages. Previous work describes the use of term rewriting and strategies to generate high-performance code in a compiler for a functional language. In this work, we implement dual numbers AD in a functional array programming language using rewrite rules and strategy combinators for optimization. We aim to combine the elegance of differentiation using dual numbers with a succinct expression of the optimization schedule using a strategy language. We give preliminary evidence suggesting the viability of the approach on a micro-benchmark.
[ { "created": "Wed, 5 Jul 2023 17:17:16 GMT", "version": "v1" }, { "created": "Fri, 7 Jul 2023 09:29:25 GMT", "version": "v2" } ]
2023-07-10
[ [ "Böhler", "Timon", "" ], [ "Richter", "David", "" ], [ "Mezini", "Mira", "" ] ]
Automatic Differentiation (AD) has become a dominant technique in ML. AD frameworks have first been implemented for imperative languages using tapes. Meanwhile, functional implementations of AD have been developed, often based on dual numbers, which are close to the formal specification of differentiation and hence easier to prove correct. But these papers have focussed on correctness not efficiency. Recently, it was shown how an approach using dual numbers could be made efficient through the right optimizations. Optimizations are highly dependent on order, as one optimization can enable another. It can therefore be useful to have fine-grained control over the scheduling of optimizations. One method expresses compiler optimizations as rewrite rules, whose application can be combined and controlled using strategy languages. Previous work describes the use of term rewriting and strategies to generate high-performance code in a compiler for a functional language. In this work, we implement dual numbers AD in a functional array programming language using rewrite rules and strategy combinators for optimization. We aim to combine the elegance of differentiation using dual numbers with a succinct expression of the optimization schedule using a strategy language. We give preliminary evidence suggesting the viability of the approach on a micro-benchmark.
1401.1152
Radek Stefan
Michal Bene\v{s}, Radek \v{S}tefan
Hygro-thermo-mechanical analysis of spalling in concrete walls at high temperatures as a moving boundary problem
null
null
10.1016/j.ijheatmasstransfer.2015.01.050
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mathematical model allowing coupled hygro-thermo-mechanical analysis of spalling in concrete walls at high temperatures by means of the moving boundary problem is presented. A simplified mechanical approach to account for effects of thermal stresses and pore pressure build-up on spalling is incorporated into the model. The numerical algorithm based on finite element discretization in space and the semi-implicit method for discretization in time is presented. The validity of the developed model is carefully examined by a comparison between experimental tests performed by Kalifa et al. (2000) and Mindeguia (2009) on concrete prismatic specimens under unidirectional heating of temperature of 600 ${\deg}$C and ISO 834 fire curve and the results obtained from the numerical model.
[ { "created": "Mon, 6 Jan 2014 17:42:41 GMT", "version": "v1" }, { "created": "Tue, 7 Jan 2014 08:46:25 GMT", "version": "v2" }, { "created": "Fri, 9 Jan 2015 14:24:19 GMT", "version": "v3" } ]
2015-02-13
[ [ "Beneš", "Michal", "" ], [ "Štefan", "Radek", "" ] ]
A mathematical model allowing coupled hygro-thermo-mechanical analysis of spalling in concrete walls at high temperatures by means of the moving boundary problem is presented. A simplified mechanical approach to account for effects of thermal stresses and pore pressure build-up on spalling is incorporated into the model. The numerical algorithm based on finite element discretization in space and the semi-implicit method for discretization in time is presented. The validity of the developed model is carefully examined by a comparison between experimental tests performed by Kalifa et al. (2000) and Mindeguia (2009) on concrete prismatic specimens under unidirectional heating of temperature of 600 ${\deg}$C and ISO 834 fire curve and the results obtained from the numerical model.
1912.07018
Adarsh Kappiyath
Silpa Vadakkeeveetil Sreelatha, Adarsh Kappiyath, Sumitra S
Disentanglement based Active Learning
Published in International Joint Conference on Neural Networks (IJCNN), 2021
2021 International Joint Conference on Neural Networks (IJCNN), 2021, pp. 1-8
10.1109/IJCNN52387.2021.9534033
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Disentanglement based Active Learning (DAL), a new active learning technique based on self-supervision which leverages the concept of disentanglement. Instead of requesting labels from a human oracle, our method automatically labels the majority of the datapoints, thus drastically reducing the human labeling budget in Generative Adversarial Net (GAN) based active learning approaches. The proposed method uses Information Maximizing Generative Adversarial Nets (InfoGAN) to learn disentangled class category representations. Disagreement between active learner predictions and InfoGAN labels decides if the datapoints need to be human-labeled. We also introduce a label correction mechanism that aims to filter out label noise that occurs due to automatic labeling. Results on three benchmark datasets for the image classification task demonstrate that our method achieves better performance compared to existing GAN-based active learning approaches.
[ { "created": "Sun, 15 Dec 2019 10:48:06 GMT", "version": "v1" }, { "created": "Sat, 25 Sep 2021 13:30:29 GMT", "version": "v2" } ]
2021-09-28
[ [ "Sreelatha", "Silpa Vadakkeeveetil", "" ], [ "Kappiyath", "Adarsh", "" ], [ "S", "Sumitra", "" ] ]
We propose Disentanglement based Active Learning (DAL), a new active learning technique based on self-supervision which leverages the concept of disentanglement. Instead of requesting labels from a human oracle, our method automatically labels the majority of the datapoints, thus drastically reducing the human labeling budget in Generative Adversarial Net (GAN) based active learning approaches. The proposed method uses Information Maximizing Generative Adversarial Nets (InfoGAN) to learn disentangled class category representations. Disagreement between active learner predictions and InfoGAN labels decides if the datapoints need to be human-labeled. We also introduce a label correction mechanism that aims to filter out label noise that occurs due to automatic labeling. Results on three benchmark datasets for the image classification task demonstrate that our method achieves better performance compared to existing GAN-based active learning approaches.
1707.04202
Jiancao Hou
Jiancao Hou, Sandeep Narayanan, Yi Ma, and Mohammad Shikh-Bahaei
Multi-Antenna Assisted Virtual Full-Duplex Relaying with Reliability-Aware Iterative Decoding
6 pages, 4 figures, conference paper has been submitted
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a multi-antenna assisted virtual full-duplex (FD) relaying with reliability-aware iterative decoding at the destination node is proposed to improve system spectral efficiency and reliability. This scheme enables two half-duplex relay nodes, mimicked as FD relaying, to alternatively serve as transmitter and receiver to relay their decoded data signals regardless of the decoding errors, meanwhile cancelling the inter-relay interference with QR-decomposition. Then, by deploying the reliability-aware iterative detection/decoding process, the destination node can efficiently mitigate inter-frame interference and the error propagation effect at the same time. Simulation results show that, without extra cost of time delay and signalling overhead, our proposed scheme outperforms the conventional selective decode-and-forward (S-DF) relaying schemes, such as cyclic redundancy check based S-DF relaying and threshold based S-DF relaying, by up to 8 dB in terms of bit-error-rate.
[ { "created": "Thu, 13 Jul 2017 16:23:09 GMT", "version": "v1" }, { "created": "Wed, 18 Oct 2017 16:20:03 GMT", "version": "v2" } ]
2017-10-19
[ [ "Hou", "Jiancao", "" ], [ "Narayanan", "Sandeep", "" ], [ "Ma", "Yi", "" ], [ "Shikh-Bahaei", "Mohammad", "" ] ]
In this paper, a multi-antenna assisted virtual full-duplex (FD) relaying with reliability-aware iterative decoding at the destination node is proposed to improve system spectral efficiency and reliability. This scheme enables two half-duplex relay nodes, mimicked as FD relaying, to alternatively serve as transmitter and receiver to relay their decoded data signals regardless of the decoding errors, meanwhile cancelling the inter-relay interference with QR-decomposition. Then, by deploying the reliability-aware iterative detection/decoding process, the destination node can efficiently mitigate inter-frame interference and the error propagation effect at the same time. Simulation results show that, without extra cost of time delay and signalling overhead, our proposed scheme outperforms the conventional selective decode-and-forward (S-DF) relaying schemes, such as cyclic redundancy check based S-DF relaying and threshold based S-DF relaying, by up to 8 dB in terms of bit-error-rate.
2107.01835
Juliette Achddou
Juliette Achddou (DI-ENS, VALDA ), Olivier Capp\'e (VALDA, DI-ENS), Aur\'elien Garivier (UMPA-ENSL)
Fast Rate Learning in Stochastic First Price Bidding
null
ACML 2021 - Proceedings of Machine Learning Research 157, 2021, Nov 2021, Singapore, Singapore
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
First-price auctions have largely replaced traditional bidding approaches based on Vickrey auctions in programmatic advertising. As far as learning is concerned, first-price auctions are more challenging because the optimal bidding strategy not only depends on the value of the item but also requires some knowledge of the other bids. They have already given rise to several works in sequential learning, many of which consider models for which the value of the buyer or the opponents' maximal bid is chosen in an adversarial manner. Even in the simplest settings, this gives rise to algorithms whose regret grows as $\sqrt{T}$ with respect to the time horizon $T$. Focusing on the case where the buyer plays against a stationary stochastic environment, we show how to achieve significantly lower regret: when the opponents' maximal bid distribution is known we provide an algorithm whose regret can be as low as $\log^2(T)$; in the case where the distribution must be learnt sequentially, a generalization of this algorithm can achieve $T^{1/3+ \epsilon}$ regret, for any $\epsilon>0$. To obtain these results, we introduce two novel ideas that can be of interest in their own right. First, by transposing results obtained in the posted price setting, we provide conditions under which the first-price bidding utility is locally quadratic around its optimum. Second, we leverage the observation that, on small sub-intervals, the concentration of the variations of the empirical distribution function may be controlled more accurately than by using the classical Dvoretzky-Kiefer-Wolfowitz inequality. Numerical simulations confirm that our algorithms converge much faster than alternatives proposed in the literature for various bid distributions, including for bids collected on an actual programmatic advertising platform.
[ { "created": "Mon, 5 Jul 2021 07:48:52 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2021 15:24:19 GMT", "version": "v2" } ]
2021-11-23
[ [ "Achddou", "Juliette", "", "DI-ENS, VALDA" ], [ "Cappé", "Olivier", "", "VALDA, DI-ENS" ], [ "Garivier", "Aurélien", "", "UMPA-ENSL" ] ]
First-price auctions have largely replaced traditional bidding approaches based on Vickrey auctions in programmatic advertising. As far as learning is concerned, first-price auctions are more challenging because the optimal bidding strategy not only depends on the value of the item but also requires some knowledge of the other bids. They have already given rise to several works in sequential learning, many of which consider models for which the value of the buyer or the opponents' maximal bid is chosen in an adversarial manner. Even in the simplest settings, this gives rise to algorithms whose regret grows as $\sqrt{T}$ with respect to the time horizon $T$. Focusing on the case where the buyer plays against a stationary stochastic environment, we show how to achieve significantly lower regret: when the opponents' maximal bid distribution is known we provide an algorithm whose regret can be as low as $\log^2(T)$; in the case where the distribution must be learnt sequentially, a generalization of this algorithm can achieve $T^{1/3+ \epsilon}$ regret, for any $\epsilon>0$. To obtain these results, we introduce two novel ideas that can be of interest in their own right. First, by transposing results obtained in the posted price setting, we provide conditions under which the first-price bidding utility is locally quadratic around its optimum. Second, we leverage the observation that, on small sub-intervals, the concentration of the variations of the empirical distribution function may be controlled more accurately than by using the classical Dvoretzky-Kiefer-Wolfowitz inequality. Numerical simulations confirm that our algorithms converge much faster than alternatives proposed in the literature for various bid distributions, including for bids collected on an actual programmatic advertising platform.
2407.09506
Parag Jain
Parag Jain, Mirella Lapata
Integrating Large Language Models with Graph-based Reasoning for Conversational Question Answering
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes. Our method utilizes a graph-structured representation to aggregate information about a question and its context (i.e., the conversation so far and evidence retrieved to find an answer), while also harnessing the reasoning and text generation capabilities of large language models (LLMs). Graph embeddings are directly injected into the LLM, bypassing the token embedding layers, and learned end-to-end by minimizing cross-entropy. Our model maintains a memory module to track and update past evidence, thus influencing the graph's structure, as the conversation evolves. Experimental results on the ConvMix benchmark (Christmann et al., 2022a) show that graph embeddings enhance the LLM's ability to reason, while the memory module provides robustness against noise and retrieval errors.
[ { "created": "Fri, 14 Jun 2024 13:28:03 GMT", "version": "v1" } ]
2024-07-16
[ [ "Jain", "Parag", "" ], [ "Lapata", "Mirella", "" ] ]
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes. Our method utilizes a graph-structured representation to aggregate information about a question and its context (i.e., the conversation so far and evidence retrieved to find an answer), while also harnessing the reasoning and text generation capabilities of large language models (LLMs). Graph embeddings are directly injected into the LLM, bypassing the token embedding layers, and learned end-to-end by minimizing cross-entropy. Our model maintains a memory module to track and update past evidence, thus influencing the graph's structure, as the conversation evolves. Experimental results on the ConvMix benchmark (Christmann et al., 2022a) show that graph embeddings enhance the LLM's ability to reason, while the memory module provides robustness against noise and retrieval errors.
2002.03258
Dingwen Tao
Cody Rivera, Jieyang Chen, Nan Xiong, Shuaiwen Leon Song, and Dingwen Tao
TSM2X: High-Performance Tall-and-Skinny Matrix-Matrix Multiplication on GPUs
17 pages, 14 figures, published in JPDC
null
10.1016/j.jpdc.2021.02.013
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear algebra operations have been widely used in big data analytics and scientific computations. Many works have been done on optimizing linear algebra operations on GPUs with regular-shaped input. However, few works focus on fully utilizing GPU resources when the input is not regular-shaped. Current optimizations do not consider fully utilizing the memory bandwidth and computing power; therefore, they can only achieve sub-optimal performance. In this paper, we propose two efficient algorithms -- TSM2R and TSM2L -- for two classes of tall-and-skinny matrix-matrix multiplications on GPUs. Both focus on optimizing linear algebra operations in which at least one of the input matrices is tall-and-skinny. Specifically, TSM2R is designed for a large regular-shaped matrix multiplying a tall-and-skinny matrix, while TSM2L is designed for a tall-and-skinny matrix multiplying a small regular-shaped matrix. We implement our proposed algorithms and test them on several modern NVIDIA GPU micro-architectures. Experiments show that, compared to the current state-of-the-art works, (1) TSM2R speeds up the computation by 1.1x~3x and improves the memory bandwidth utilization and computing power utilization by 8%~47.6% and 7%~37.3%, respectively, when the regular-shaped matrix size is relatively large or medium; and (2) TSM2L speeds up the computation by 1.1x~3.5x and improves the memory bandwidth utilization by up to 55% when the regular-shaped matrix size is relatively small.
[ { "created": "Sun, 9 Feb 2020 00:53:35 GMT", "version": "v1" }, { "created": "Wed, 12 Feb 2020 05:07:00 GMT", "version": "v2" }, { "created": "Mon, 27 Jul 2020 17:09:09 GMT", "version": "v3" }, { "created": "Tue, 28 Jul 2020 04:07:49 GMT", "version": "v4" }, { "created": "Thu, 18 Feb 2021 07:34:19 GMT", "version": "v5" } ]
2021-02-19
[ [ "Rivera", "Cody", "" ], [ "Chen", "Jieyang", "" ], [ "Xiong", "Nan", "" ], [ "Song", "Shuaiwen Leon", "" ], [ "Tao", "Dingwen", "" ] ]
Linear algebra operations have been widely used in big data analytics and scientific computations. Many works have been done on optimizing linear algebra operations on GPUs with regular-shaped input. However, few works focus on fully utilizing GPU resources when the input is not regular-shaped. Current optimizations do not consider fully utilizing the memory bandwidth and computing power; therefore, they can only achieve sub-optimal performance. In this paper, we propose two efficient algorithms -- TSM2R and TSM2L -- for two classes of tall-and-skinny matrix-matrix multiplications on GPUs. Both focus on optimizing linear algebra operations in which at least one of the input matrices is tall-and-skinny. Specifically, TSM2R is designed for a large regular-shaped matrix multiplying a tall-and-skinny matrix, while TSM2L is designed for a tall-and-skinny matrix multiplying a small regular-shaped matrix. We implement our proposed algorithms and test them on several modern NVIDIA GPU micro-architectures. Experiments show that, compared to the current state-of-the-art works, (1) TSM2R speeds up the computation by 1.1x~3x and improves the memory bandwidth utilization and computing power utilization by 8%~47.6% and 7%~37.3%, respectively, when the regular-shaped matrix size is relatively large or medium; and (2) TSM2L speeds up the computation by 1.1x~3.5x and improves the memory bandwidth utilization by up to 55% when the regular-shaped matrix size is relatively small.
2401.06550
Chuanji Shi
Chuanji Shi, Yingying Zhang, Jiaotuan Wang, Xin Guo and Qiqi Zhu
Multimodal Urban Areas of Interest Generation via Remote Sensing Imagery and Geographical Prior
9 pages, 9 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Urban area-of-interest (AOI) refers to an integrated urban functional zone with defined polygonal boundaries. The rapid development of urban commerce has led to increasing demands for highly accurate and timely AOI data. However, existing research primarily focuses on coarse-grained functional zones for urban planning or regional economic analysis, and often neglects the expiration of AOIs in the real world. It fails to fulfill the precision demands of Mobile Internet Online-to-Offline (O2O) businesses, which require accuracy down to a specific community, school, or hospital. In this paper, we propose AOITR, a comprehensive end-to-end multimodal deep learning framework designed for simultaneously detecting accurate AOI boundaries and validating the reliability of AOIs by leveraging remote sensing imagery coupled with geographical priors. Unlike conventional AOI generation methods, such as the Road-cut method that segments road networks at various levels, our approach diverges from semantic segmentation algorithms that depend on pixel-level classification. Instead, AOITR begins by selecting a point-of-interest (POI) of a specific category, and uses it to retrieve the corresponding remote sensing imagery and geographical priors such as entrance POIs and road nodes. This information helps to build a multimodal detection model based on a transformer encoder-decoder architecture to regress the AOI polygon. Additionally, we utilize dynamic features from human mobility, nearby POIs, and logistics addresses for AOI reliability evaluation via a cascaded network module. The experimental results reveal that our algorithm achieves a significant improvement on the Intersection over Union (IoU) metric, surpassing previous methods by a large margin.
[ { "created": "Fri, 12 Jan 2024 12:54:30 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 18:13:53 GMT", "version": "v2" }, { "created": "Thu, 8 Feb 2024 06:23:42 GMT", "version": "v3" } ]
2024-02-09
[ [ "Shi", "Chuanji", "" ], [ "Zhang", "Yingying", "" ], [ "Wang", "Jiaotuan", "" ], [ "Guo", "Xin", "" ], [ "Zhu", "Qiqi", "" ] ]
Urban area-of-interest (AOI) refers to an integrated urban functional zone with defined polygonal boundaries. The rapid development of urban commerce has led to increasing demands for highly accurate and timely AOI data. However, existing research primarily focuses on coarse-grained functional zones for urban planning or regional economic analysis, and often neglects the expiration of AOIs in the real world. It fails to fulfill the precision demands of Mobile Internet Online-to-Offline (O2O) businesses, which require accuracy down to a specific community, school, or hospital. In this paper, we propose AOITR, a comprehensive end-to-end multimodal deep learning framework designed for simultaneously detecting accurate AOI boundaries and validating the reliability of AOIs by leveraging remote sensing imagery coupled with geographical priors. Unlike conventional AOI generation methods, such as the Road-cut method that segments road networks at various levels, our approach diverges from semantic segmentation algorithms that depend on pixel-level classification. Instead, AOITR begins by selecting a point-of-interest (POI) of a specific category, and uses it to retrieve the corresponding remote sensing imagery and geographical priors such as entrance POIs and road nodes. This information helps to build a multimodal detection model based on a transformer encoder-decoder architecture to regress the AOI polygon. Additionally, we utilize dynamic features from human mobility, nearby POIs, and logistics addresses for AOI reliability evaluation via a cascaded network module. The experimental results reveal that our algorithm achieves a significant improvement on the Intersection over Union (IoU) metric, surpassing previous methods by a large margin.
2212.08688
Yiming Xiao
Owen Hellum, Marta Kersten-Oertel, and Yiming Xiao
Assessment of user-interaction strategies for neurosurgical data navigation and annotation in virtual reality
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
While virtual reality (VR) has shown great promise in radiological tasks, effective user-interaction strategies that can improve efficiency and ergonomics are still under-explored, and systematic evaluations of VR interaction techniques in the context of complex anatomical models are rare. Therefore, our study aims to identify the most effective interaction techniques from the state of the art for two common neurosurgical planning tasks in VR (point annotation and note-taking), and to propose a novel technique for the efficient sub-volume selection necessary in neuroanatomical navigation. We assessed seven user-interaction methods with multiple input modalities (gaze, head motion, controller, and voice) for point placement and note-taking in the context of annotating brain aneurysms for cerebrovascular surgery. Furthermore, we proposed and evaluated a novel technique, called magnified selection diorama (Maserama), for easy navigation and selection of complex 3D anatomies in VR. Both quantitative and semi-quantitative (i.e., NASA Task Load Index) metrics were employed through user studies to reveal the performance of each interaction scheme in terms of accuracy, efficiency, and usability. Our evaluations demonstrated that controller-based interaction is preferred over eye-tracking-based methods for point placement, while voice recording and virtual keyboard typing are better than freehand writing for note-taking. Furthermore, our new Maserama sub-volume selection technique was proven to be highly efficient and easy to use. Our study is the first to provide a systematic assessment of existing and new VR interaction schemes for neurosurgical data navigation and annotation. It offers valuable insights and tools to guide the design of future VR systems for radiological and surgical applications.
[ { "created": "Fri, 16 Dec 2022 19:36:04 GMT", "version": "v1" } ]
2022-12-20
[ [ "Hellum", "Owen", "" ], [ "Kersten-Oertel", "Marta", "" ], [ "Xiao", "Yiming", "" ] ]
While virtual reality (VR) has shown great promise in radiological tasks, effective user-interaction strategies that can improve efficiency and ergonomics are still under-explored, and systematic evaluations of VR interaction techniques in the context of complex anatomical models are rare. Therefore, our study aims to identify the most effective interaction techniques from the state of the art for two common neurosurgical planning tasks in VR (point annotation and note-taking), and to propose a novel technique for the efficient sub-volume selection necessary in neuroanatomical navigation. We assessed seven user-interaction methods with multiple input modalities (gaze, head motion, controller, and voice) for point placement and note-taking in the context of annotating brain aneurysms for cerebrovascular surgery. Furthermore, we proposed and evaluated a novel technique, called magnified selection diorama (Maserama), for easy navigation and selection of complex 3D anatomies in VR. Both quantitative and semi-quantitative (i.e., NASA Task Load Index) metrics were employed through user studies to reveal the performance of each interaction scheme in terms of accuracy, efficiency, and usability. Our evaluations demonstrated that controller-based interaction is preferred over eye-tracking-based methods for point placement, while voice recording and virtual keyboard typing are better than freehand writing for note-taking. Furthermore, our new Maserama sub-volume selection technique was proven to be highly efficient and easy to use. Our study is the first to provide a systematic assessment of existing and new VR interaction schemes for neurosurgical data navigation and annotation. It offers valuable insights and tools to guide the design of future VR systems for radiological and surgical applications.
2302.07549
Mila Nambiar
Milashini Nambiar and Supriyo Ghosh and Priscilla Ong and Yu En Chan and Yong Mong Bee and Pavitra Krishnaswamy
Deep Offline Reinforcement Learning for Real-world Treatment Optimization Applications
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is increasing interest in data-driven approaches for recommending optimal treatment strategies in many chronic disease management and critical care applications. Reinforcement learning (RL) methods are well-suited to this sequential decision-making problem, but must be trained and evaluated exclusively on retrospective medical record datasets, as direct online exploration is unsafe and infeasible. Despite this requirement, the vast majority of treatment optimization studies use off-policy RL methods (e.g., Double Deep Q Networks (DDQN) or its variants) that are known to perform poorly in purely offline settings. Recent advances in offline RL, such as Conservative Q-Learning (CQL), offer a suitable alternative. But there remain challenges in adapting these approaches to real-world applications where suboptimal examples dominate the retrospective dataset and strict safety constraints need to be satisfied. In this work, we introduce a practical and theoretically grounded transition sampling approach to address action imbalance during offline RL training. We perform extensive experiments on two real-world tasks for diabetes and sepsis treatment optimization to compare performance of the proposed approach against prominent off-policy and offline RL baselines (DDQN and CQL). Across a range of principled and clinically relevant metrics, we show that our proposed approach enables substantial improvements in expected health outcomes while conforming to relevant practice and safety guidelines.
[ { "created": "Wed, 15 Feb 2023 09:30:57 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 12:24:32 GMT", "version": "v2" } ]
2023-06-14
[ [ "Nambiar", "Milashini", "" ], [ "Ghosh", "Supriyo", "" ], [ "Ong", "Priscilla", "" ], [ "Chan", "Yu En", "" ], [ "Bee", "Yong Mong", "" ], [ "Krishnaswamy", "Pavitra", "" ] ]
There is increasing interest in data-driven approaches for recommending optimal treatment strategies in many chronic disease management and critical care applications. Reinforcement learning (RL) methods are well-suited to this sequential decision-making problem, but must be trained and evaluated exclusively on retrospective medical record datasets, as direct online exploration is unsafe and infeasible. Despite this requirement, the vast majority of treatment optimization studies use off-policy RL methods (e.g., Double Deep Q Networks (DDQN) or its variants) that are known to perform poorly in purely offline settings. Recent advances in offline RL, such as Conservative Q-Learning (CQL), offer a suitable alternative. But there remain challenges in adapting these approaches to real-world applications where suboptimal examples dominate the retrospective dataset and strict safety constraints need to be satisfied. In this work, we introduce a practical and theoretically grounded transition sampling approach to address action imbalance during offline RL training. We perform extensive experiments on two real-world tasks for diabetes and sepsis treatment optimization to compare performance of the proposed approach against prominent off-policy and offline RL baselines (DDQN and CQL). Across a range of principled and clinically relevant metrics, we show that our proposed approach enables substantial improvements in expected health outcomes while conforming to relevant practice and safety guidelines.
1907.02427
Youmna Farag
Youmna Farag and Helen Yannakoudakis
Multi-Task Learning for Coherence Modeling
11 pages, 3 figures, Accepted at ACL 2019
THE 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019)
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the task of assessing discourse coherence, an aspect of text quality that is essential for many NLP tasks, such as summarization and language assessment. We propose a hierarchical neural network trained in a multi-task fashion that learns to predict a document-level coherence score (at the network's top layers) along with word-level grammatical roles (at the bottom layers), taking advantage of inductive transfer between the two tasks. We assess the extent to which our framework generalizes to different domains and prediction tasks, and demonstrate its effectiveness not only on standard binary evaluation coherence tasks, but also on real-world tasks involving the prediction of varying degrees of coherence, achieving a new state of the art.
[ { "created": "Thu, 4 Jul 2019 14:40:22 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2020 17:30:15 GMT", "version": "v2" } ]
2020-05-01
[ [ "Farag", "Youmna", "" ], [ "Yannakoudakis", "Helen", "" ] ]
We address the task of assessing discourse coherence, an aspect of text quality that is essential for many NLP tasks, such as summarization and language assessment. We propose a hierarchical neural network trained in a multi-task fashion that learns to predict a document-level coherence score (at the network's top layers) along with word-level grammatical roles (at the bottom layers), taking advantage of inductive transfer between the two tasks. We assess the extent to which our framework generalizes to different domains and prediction tasks, and demonstrate its effectiveness not only on standard binary evaluation coherence tasks, but also on real-world tasks involving the prediction of varying degrees of coherence, achieving a new state of the art.
2402.09702
Yiyang Sun
Yiyang Sun, Zhi Chen, Vittorio Orlandi, Tong Wang, Cynthia Rudin
Sparse and Faithful Explanations Without Sparse Models
Accepted in AISTATS 2024
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models -- even if they are not sparse -- actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. We propose algorithms that reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
[ { "created": "Thu, 15 Feb 2024 04:36:52 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2024 17:32:32 GMT", "version": "v2" }, { "created": "Sat, 9 Mar 2024 01:01:27 GMT", "version": "v3" } ]
2024-03-12
[ [ "Sun", "Yiyang", "" ], [ "Chen", "Zhi", "" ], [ "Orlandi", "Vittorio", "" ], [ "Wang", "Tong", "" ], [ "Rudin", "Cynthia", "" ] ]
Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models -- even if they are not sparse -- actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. We propose algorithms that reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
2302.04068
Lioba Heimbach
Lioba Heimbach, Eric G. Schertenleib, Roger Wattenhofer
Short Squeeze in DeFi Lending Market: Decentralization in Jeopardy?
In Proceedings of Workshop on Decentralized Finance (DeFi@FC)
null
null
null
cs.CR q-fin.RM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anxiety levels in the Aave community spiked in November 2022 as Avi Eisenberg performed an attack on Aave. Eisenberg attempted to short the CRV token by using funds borrowed on the protocol to artificially deflate the value of CRV. While the attack was ultimately unsuccessful, it left the Aave community shaken and even raised questions regarding the feasibility of large lending platforms under decentralized governance. In this work, we analyze Avi Eisenberg's actions and show how he was able to artificially lower the price of CRV by selling large quantities of borrowed CRV for stablecoins on both decentralized and centralized exchanges. Despite the failure of his attack, it still led to irretrievable debt worth more than 1.5 million USD at the time and, thereby, quadrupled the protocol's irretrievable debt. Furthermore, we highlight that his attack was enabled by the vast proportion of CRV available to borrow as well as Aave's lending protocol design hindering rapid intervention. We stress that Eisenberg's attack exposes a predicament of large DeFi lending protocols: limit the scope or compromise on 'decentralization'.
[ { "created": "Wed, 8 Feb 2023 14:09:38 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2023 10:17:33 GMT", "version": "v2" } ]
2023-06-22
[ [ "Heimbach", "Lioba", "" ], [ "Schertenleib", "Eric G.", "" ], [ "Wattenhofer", "Roger", "" ] ]
Anxiety levels in the Aave community spiked in November 2022 as Avi Eisenberg performed an attack on Aave. Eisenberg attempted to short the CRV token by using funds borrowed on the protocol to artificially deflate the value of CRV. While the attack was ultimately unsuccessful, it left the Aave community shaken and even raised questions regarding the feasibility of large lending platforms under decentralized governance. In this work, we analyze Avi Eisenberg's actions and show how he was able to artificially lower the price of CRV by selling large quantities of borrowed CRV for stablecoins on both decentralized and centralized exchanges. Despite the failure of his attack, it still led to irretrievable debt worth more than 1.5 million USD at the time and, thereby, quadrupled the protocol's irretrievable debt. Furthermore, we highlight that his attack was enabled by the vast proportion of CRV available to borrow as well as Aave's lending protocol design hindering rapid intervention. We stress that Eisenberg's attack exposes a predicament of large DeFi lending protocols: limit the scope or compromise on 'decentralization'.
2106.13937
Dong In Kim
Jong Jin Park, Jong Ho Moon, Hyeon Ho Jang, and Dong In Kim
Unified Simultaneous Wireless Information and Power Transfer for IoT: Signaling and Architecture with Deep Learning Adaptive Control
15 pages, 15 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a unified SWIPT signal and its architecture design in order to take advantage of both single-tone and multi-tone signaling by adjusting only the power allocation ratio of a unified signal. For this, we design a novel unified and integrated receiver architecture for the proposed unified SWIPT signaling, which consumes low power with envelope detection. To relieve the computational complexity of the receiver, we propose an adaptive control algorithm by which the transmitter adjusts the communication mode through temporal convolutional network (TCN) based asymmetric processing. To this end, the transmitter optimizes the modulation index and power allocation ratio on a short time scale while updating the mode switching threshold on a long time scale. We demonstrate that the proposed unified SWIPT system improves the achievable rate under the self-powering condition of low-power IoT devices. Consequently, the proposed unified SWIPT signaling and adaptive control algorithm at the transmitter side are foreseen to enable the effective deployment of low-power IoT networks that concurrently supply both information and energy wirelessly to the devices.
[ { "created": "Sat, 26 Jun 2021 03:58:22 GMT", "version": "v1" } ]
2021-06-29
[ [ "Park", "Jong Jin", "" ], [ "Moon", "Jong Ho", "" ], [ "Jang", "Hyeon Ho", "" ], [ "Kim", "Dong In", "" ] ]
In this paper, we propose a unified SWIPT signal and its architecture design in order to take advantage of both single-tone and multi-tone signaling by adjusting only the power allocation ratio of a unified signal. For this, we design a novel unified and integrated receiver architecture for the proposed unified SWIPT signaling, which consumes low power with envelope detection. To relieve the computational complexity of the receiver, we propose an adaptive control algorithm by which the transmitter adjusts the communication mode through temporal convolutional network (TCN) based asymmetric processing. To this end, the transmitter optimizes the modulation index and power allocation ratio on a short time scale while updating the mode switching threshold on a long time scale. We demonstrate that the proposed unified SWIPT system improves the achievable rate under the self-powering condition of low-power IoT devices. Consequently, the proposed unified SWIPT signaling and adaptive control algorithm at the transmitter side are foreseen to enable the effective deployment of low-power IoT networks that concurrently supply both information and energy wirelessly to the devices.
2305.00382
Leon Moonen
Anders M{\o}lmen H{\o}st and Pierre Lison and Leon Moonen
Constructing a Knowledge Graph from Textual Descriptions of Software Vulnerabilities in the National Vulnerability Database
Accepted for publication in the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), T\'{o}rshavn, Faroe Islands, May 22nd-24th, 2023. [v2]: added funding acknowledgments
null
null
null
cs.CR cs.AI cs.CL cs.SE
http://creativecommons.org/licenses/by/4.0/
Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance.
[ { "created": "Sun, 30 Apr 2023 04:23:40 GMT", "version": "v1" }, { "created": "Mon, 15 May 2023 07:36:11 GMT", "version": "v2" } ]
2023-05-16
[ [ "Høst", "Anders Mølmen", "" ], [ "Lison", "Pierre", "" ], [ "Moonen", "Leon", "" ] ]
Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance.
1103.2408
Sandeep Tata
Jun Rao (LinkedIn), Eugene J. Shekita (IBM Research), Sandeep Tata (IBM Research)
Using Paxos to Build a Scalable, Consistent, and Highly Available Datastore
VLDB2011
Proceedings of the VLDB Endowment (PVLDB), Vol. 4, No. 4, pp. 243-254 (2011)
null
null
cs.DB cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spinnaker is an experimental datastore that is designed to run on a large cluster of commodity servers in a single datacenter. It features key-based range partitioning, 3-way replication, and a transactional get-put API with the option to choose either strong or timeline consistency on reads. This paper describes Spinnaker's Paxos-based replication protocol. The use of Paxos ensures that a data partition in Spinnaker will be available for reads and writes as long as a majority of its replicas are alive. Unlike traditional master-slave replication, this is true regardless of the failure sequence that occurs. We show that Paxos replication can be competitive with alternatives that provide weaker consistency guarantees. Compared to an eventually consistent datastore, we show that Spinnaker can be as fast or even faster on reads and only 5% to 10% slower on writes.
[ { "created": "Sat, 12 Mar 2011 01:06:32 GMT", "version": "v1" } ]
2011-03-15
[ [ "Rao", "Jun", "", "LinkedIn" ], [ "Shekita", "Eugene J.", "", "IBM Research" ], [ "Tata", "Sandeep", "", "IBM Research" ] ]
Spinnaker is an experimental datastore that is designed to run on a large cluster of commodity servers in a single datacenter. It features key-based range partitioning, 3-way replication, and a transactional get-put API with the option to choose either strong or timeline consistency on reads. This paper describes Spinnaker's Paxos-based replication protocol. The use of Paxos ensures that a data partition in Spinnaker will be available for reads and writes as long as a majority of its replicas are alive. Unlike traditional master-slave replication, this is true regardless of the failure sequence that occurs. We show that Paxos replication can be competitive with alternatives that provide weaker consistency guarantees. Compared to an eventually consistent datastore, we show that Spinnaker can be as fast or even faster on reads and only 5% to 10% slower on writes.
2305.17390
Bill Yuchen Lin
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, Xiang Ren
SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks
Accepted to NeurIPS 2023 (spotlight). Project website: https://swiftsage.github.io
null
null
null
cs.CL cs.AI cs.LG cs.MA cs.RO
http://creativecommons.org/licenses/by/4.0/
We introduce SwiftSage, a novel agent framework inspired by the dual-process theory of human cognition, designed to excel in action planning for complex interactive reasoning tasks. SwiftSage integrates the strengths of behavior cloning and prompting large language models (LLMs) to enhance task completion performance. The framework comprises two primary modules: the Swift module, representing fast and intuitive thinking, and the Sage module, emulating deliberate thought processes. The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories, while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding. We develop a heuristic method to harmoniously integrate the two modules, resulting in a more efficient and robust problem-solving process. In 30 tasks from the ScienceWorld benchmark, SwiftSage significantly outperforms other methods such as SayCan, ReAct, and Reflexion, demonstrating its effectiveness in solving complex interactive tasks.
[ { "created": "Sat, 27 May 2023 07:04:15 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2023 10:07:01 GMT", "version": "v2" } ]
2023-12-07
[ [ "Lin", "Bill Yuchen", "" ], [ "Fu", "Yicheng", "" ], [ "Yang", "Karina", "" ], [ "Brahman", "Faeze", "" ], [ "Huang", "Shiyu", "" ], [ "Bhagavatula", "Chandra", "" ], [ "Ammanabrolu", "Prithviraj", "" ], [ "Choi", "Yejin", "" ], [ "Ren", "Xiang", "" ] ]
We introduce SwiftSage, a novel agent framework inspired by the dual-process theory of human cognition, designed to excel in action planning for complex interactive reasoning tasks. SwiftSage integrates the strengths of behavior cloning and prompting large language models (LLMs) to enhance task completion performance. The framework comprises two primary modules: the Swift module, representing fast and intuitive thinking, and the Sage module, emulating deliberate thought processes. The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories, while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding. We develop a heuristic method to harmoniously integrate the two modules, resulting in a more efficient and robust problem-solving process. In 30 tasks from the ScienceWorld benchmark, SwiftSage significantly outperforms other methods such as SayCan, ReAct, and Reflexion, demonstrating its effectiveness in solving complex interactive tasks.
2204.13619
Boxiang Lyu
Boxiang Lyu, Filip Hanzely, Mladen Kolar
Personalized Federated Learning with Multiple Known Clusters
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
We consider the problem of personalized federated learning when there are known cluster structures within users. An intuitive approach would be to regularize the parameters so that users in the same cluster share similar model weights. The distances between the clusters can then be regularized to reflect the similarity between different clusters of users. We develop an algorithm that allows each cluster to communicate independently and derive the convergence results. We study a hierarchical linear model to theoretically demonstrate that our approach outperforms agents learning independently and agents learning a single shared weight. Finally, we demonstrate the advantages of our approach using both simulated and real-world data.
[ { "created": "Thu, 28 Apr 2022 16:32:29 GMT", "version": "v1" } ]
2022-04-29
[ [ "Lyu", "Boxiang", "" ], [ "Hanzely", "Filip", "" ], [ "Kolar", "Mladen", "" ] ]
We consider the problem of personalized federated learning when there are known cluster structures within users. An intuitive approach would be to regularize the parameters so that users in the same cluster share similar model weights. The distances between the clusters can then be regularized to reflect the similarity between different clusters of users. We develop an algorithm that allows each cluster to communicate independently and derive the convergence results. We study a hierarchical linear model to theoretically demonstrate that our approach outperforms agents learning independently and agents learning a single shared weight. Finally, we demonstrate the advantages of our approach using both simulated and real-world data.
2305.02401
Anand Sampat
Sai Chowdary Gullapally, Yibo Zhang, Nitin Kumar Mittal, Deeksha Kartik, Sandhya Srinivasan, Kevin Rose, Daniel Shenker, Dinkar Juyal, Harshith Padigela, Raymond Biju, Victor Minden, Chirag Maheshwari, Marc Thibault, Zvi Goldstein, Luke Novak, Nidhi Chandra, Justin Lee, Aaditya Prakash, Chintan Shah, John Abel, Darren Fahy, Amaro Taylor-Weiner, Anand Sampat
Synthetic DOmain-Targeted Augmentation (S-DOTA) Improves Model Generalization in Digital Pathology
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning algorithms have the potential to improve patient outcomes in digital pathology. However, generalization of these tools is currently limited by sensitivity to variations in tissue preparation, staining procedures and scanning equipment that lead to domain shift in digitized slides. To overcome this limitation and improve model generalization, we studied the effectiveness of two Synthetic DOmain-Targeted Augmentation (S-DOTA) methods, namely CycleGAN-enabled Scanner Transform (ST) and targeted Stain Vector Augmentation (SVA), and compared them against the International Color Consortium (ICC) profile-based color calibration (ICC Cal) method and a baseline method using traditional brightness, color and noise augmentations. We evaluated the ability of these techniques to improve model generalization to various tasks and settings: four models, two model types (tissue segmentation and cell classification), two loss functions, six labs, six scanners, and three indications (hepatocellular carcinoma (HCC), nonalcoholic steatohepatitis (NASH), prostate adenocarcinoma). We compared these methods based on the macro-averaged F1 scores on in-distribution (ID) and out-of-distribution (OOD) test sets across multiple domains, and found that S-DOTA methods (i.e., ST and SVA) led to significant improvements over ICC Cal and baseline on OOD data while maintaining comparable performance on ID data. Thus, we demonstrate that S-DOTA may help address generalization due to domain shift in real world applications.
[ { "created": "Wed, 3 May 2023 19:53:30 GMT", "version": "v1" } ]
2023-08-02
[ [ "Gullapally", "Sai Chowdary", "" ], [ "Zhang", "Yibo", "" ], [ "Mittal", "Nitin Kumar", "" ], [ "Kartik", "Deeksha", "" ], [ "Srinivasan", "Sandhya", "" ], [ "Rose", "Kevin", "" ], [ "Shenker", "Daniel", "" ], [ "Juyal", "Dinkar", "" ], [ "Padigela", "Harshith", "" ], [ "Biju", "Raymond", "" ], [ "Minden", "Victor", "" ], [ "Maheshwari", "Chirag", "" ], [ "Thibault", "Marc", "" ], [ "Goldstein", "Zvi", "" ], [ "Novak", "Luke", "" ], [ "Chandra", "Nidhi", "" ], [ "Lee", "Justin", "" ], [ "Prakash", "Aaditya", "" ], [ "Shah", "Chintan", "" ], [ "Abel", "John", "" ], [ "Fahy", "Darren", "" ], [ "Taylor-Weiner", "Amaro", "" ], [ "Sampat", "Anand", "" ] ]
Machine learning algorithms have the potential to improve patient outcomes in digital pathology. However, generalization of these tools is currently limited by sensitivity to variations in tissue preparation, staining procedures and scanning equipment that lead to domain shift in digitized slides. To overcome this limitation and improve model generalization, we studied the effectiveness of two Synthetic DOmain-Targeted Augmentation (S-DOTA) methods, namely CycleGAN-enabled Scanner Transform (ST) and targeted Stain Vector Augmentation (SVA), and compared them against the International Color Consortium (ICC) profile-based color calibration (ICC Cal) method and a baseline method using traditional brightness, color and noise augmentations. We evaluated the ability of these techniques to improve model generalization to various tasks and settings: four models, two model types (tissue segmentation and cell classification), two loss functions, six labs, six scanners, and three indications (hepatocellular carcinoma (HCC), nonalcoholic steatohepatitis (NASH), prostate adenocarcinoma). We compared these methods based on the macro-averaged F1 scores on in-distribution (ID) and out-of-distribution (OOD) test sets across multiple domains, and found that S-DOTA methods (i.e., ST and SVA) led to significant improvements over ICC Cal and baseline on OOD data while maintaining comparable performance on ID data. Thus, we demonstrate that S-DOTA may help address generalization due to domain shift in real world applications.
2307.13412
Javier Fernandez-Marques
Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane
Mitigating Memory Wall Effects in CNN Engines with On-the-Fly Weights Generation
Accepted at ACM TODAES, 2023. arXiv admin note: substantial text overlap with arXiv:2103.05600
null
null
null
cs.LG cs.AR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The unprecedented accuracy of convolutional neural networks (CNNs) across a broad range of AI tasks has led to their widespread deployment in mobile and embedded settings. In pursuit of high-performance and energy-efficient inference, significant research effort has been invested in the design of FPGA-based CNN accelerators. In this context, single computation engines constitute a popular approach to support diverse CNN models without the overhead of fabric reconfiguration. Nevertheless, this flexibility often comes with significantly degraded performance on memory-bound layers and resource underutilisation due to the suboptimal mapping of certain layers on the engine's fixed configuration. In this work, we investigate the implications in terms of CNN engine design for a class of models that introduce a pre-convolution stage to decompress the weights at run time. We refer to these approaches as on-the-fly. This paper presents unzipFPGA, a novel CNN inference system that counteracts the limitations of existing CNN engines. The proposed framework comprises a novel CNN hardware architecture that introduces a weights generator module that enables the on-chip on-the-fly generation of weights, alleviating the negative impact of limited bandwidth on memory-bound layers. We further enhance unzipFPGA with an automated hardware-aware methodology that tailors the weights generation mechanism to the target CNN-device pair, leading to an improved accuracy-performance balance. Finally, we introduce an input selective processing element (PE) design that balances the load between PEs in suboptimally mapped layers. The proposed framework yields hardware designs that achieve an average of 2.57x performance efficiency gain over highly optimised GPU designs for the same power constraints and up to 3.94x higher performance density over a diverse range of state-of-the-art FPGA-based CNN accelerators.
[ { "created": "Tue, 25 Jul 2023 11:19:21 GMT", "version": "v1" } ]
2023-07-26
[ [ "Venieris", "Stylianos I.", "" ], [ "Fernandez-Marques", "Javier", "" ], [ "Lane", "Nicholas D.", "" ] ]
The unprecedented accuracy of convolutional neural networks (CNNs) across a broad range of AI tasks has led to their widespread deployment in mobile and embedded settings. In pursuit of high-performance and energy-efficient inference, significant research effort has been invested in the design of FPGA-based CNN accelerators. In this context, single computation engines constitute a popular approach to support diverse CNN models without the overhead of fabric reconfiguration. Nevertheless, this flexibility often comes with significantly degraded performance on memory-bound layers and resource underutilisation due to the suboptimal mapping of certain layers on the engine's fixed configuration. In this work, we investigate the implications in terms of CNN engine design for a class of models that introduce a pre-convolution stage to decompress the weights at run time. We refer to these approaches as on-the-fly. This paper presents unzipFPGA, a novel CNN inference system that counteracts the limitations of existing CNN engines. The proposed framework comprises a novel CNN hardware architecture that introduces a weights generator module that enables the on-chip on-the-fly generation of weights, alleviating the negative impact of limited bandwidth on memory-bound layers. We further enhance unzipFPGA with an automated hardware-aware methodology that tailors the weights generation mechanism to the target CNN-device pair, leading to an improved accuracy-performance balance. Finally, we introduce an input selective processing element (PE) design that balances the load between PEs in suboptimally mapped layers. The proposed framework yields hardware designs that achieve an average of 2.57x performance efficiency gain over highly optimised GPU designs for the same power constraints and up to 3.94x higher performance density over a diverse range of state-of-the-art FPGA-based CNN accelerators.
2406.17532
Keyu Wang
Keyu Wang, Guilin Qi, Jiaqi Li, Songlin Zhai
Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study
null
null
null
null
cs.AI cs.CL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have shown significant achievements in solving a wide range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic knowledge has drawn a great deal of attention, showing their potential to understand structured information. However, it is not yet known whether LLMs can understand Description Logic (DL) ontologies. In this work, we empirically analyze LLMs' capability of understanding DL-Lite ontologies, covering 6 representative tasks from syntactic and semantic aspects. With extensive experiments, we demonstrate both the effectiveness and limitations of LLMs in understanding DL-Lite ontologies. We find that LLMs can understand the formal syntax and model-theoretic semantics of concepts and roles. However, LLMs struggle with understanding TBox NI transitivity and with handling ontologies with large ABoxes. We hope that our experiments and analyses provide more insights into LLMs and inspire the development of more faithful knowledge engineering solutions.
[ { "created": "Tue, 25 Jun 2024 13:16:34 GMT", "version": "v1" } ]
2024-06-26
[ [ "Wang", "Keyu", "" ], [ "Qi", "Guilin", "" ], [ "Li", "Jiaqi", "" ], [ "Zhai", "Songlin", "" ] ]
Large language models (LLMs) have shown significant achievements in solving a wide range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic knowledge has drawn a great deal of attention, showing their potential to understand structured information. However, it is not yet known whether LLMs can understand Description Logic (DL) ontologies. In this work, we empirically analyze LLMs' capability of understanding DL-Lite ontologies, covering 6 representative tasks from syntactic and semantic aspects. With extensive experiments, we demonstrate both the effectiveness and limitations of LLMs in understanding DL-Lite ontologies. We find that LLMs can understand the formal syntax and model-theoretic semantics of concepts and roles. However, LLMs struggle with understanding TBox NI transitivity and with handling ontologies with large ABoxes. We hope that our experiments and analyses provide more insights into LLMs and inspire the development of more faithful knowledge engineering solutions.
1003.3821
Dan Guralnik
Dan Guralnik
A Formal Approach to Modeling the Memory of a Living Organism
33 pages, 8 figures
null
null
null
cs.AI cs.DS cs.LG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a living organism as an observer of the evolution of its environment recording sensory information about the state space X of the environment in real time. Sensory information is sampled and then processed on two levels. On the biological level, the organism serves as an evaluation mechanism of the subjective relevance of the incoming data to the observer: the observer assigns excitation values to events in X it could recognize using its sensory equipment. On the algorithmic level, sensory input is used for updating a database, the memory of the observer whose purpose is to serve as a geometric/combinatorial model of X, whose nodes are weighted by the excitation values produced by the evaluation mechanism. These values serve as a guidance system for deciding how the database should transform as observation data mounts. We define a searching problem for the proposed model and discuss the model's flexibility and its computational efficiency, as well as the possibility of implementing it as a dynamic network of neuron-like units. We show how various easily observable properties of the human memory and thought process can be explained within the framework of this model. These include: reasoning (with efficiency bounds), errors, temporary and permanent loss of information. We are also able to define general learning problems in terms of the new model, such as the language acquisition problem.
[ { "created": "Fri, 19 Mar 2010 15:56:37 GMT", "version": "v1" } ]
2010-03-22
[ [ "Guralnik", "Dan", "" ] ]
We consider a living organism as an observer of the evolution of its environment recording sensory information about the state space X of the environment in real time. Sensory information is sampled and then processed on two levels. On the biological level, the organism serves as an evaluation mechanism of the subjective relevance of the incoming data to the observer: the observer assigns excitation values to events in X it could recognize using its sensory equipment. On the algorithmic level, sensory input is used for updating a database, the memory of the observer whose purpose is to serve as a geometric/combinatorial model of X, whose nodes are weighted by the excitation values produced by the evaluation mechanism. These values serve as a guidance system for deciding how the database should transform as observation data mounts. We define a searching problem for the proposed model and discuss the model's flexibility and its computational efficiency, as well as the possibility of implementing it as a dynamic network of neuron-like units. We show how various easily observable properties of the human memory and thought process can be explained within the framework of this model. These include: reasoning (with efficiency bounds), errors, temporary and permanent loss of information. We are also able to define general learning problems in terms of the new model, such as the language acquisition problem.
2112.13734
Enoch Tetteh
Enoch Tetteh, Joseph Viviano, Yoshua Bengio, David Krueger, Joseph Paul Cohen
Multi-Domain Balanced Sampling Improves Out-of-Distribution Generalization of Chest X-ray Pathology Prediction Models
MED-NEURIPS 2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Learning models that generalize under different distribution shifts in medical imaging has been a long-standing research challenge. There have been several proposals for efficient and robust visual representation learning among vision research practitioners, especially in the sensitive and critical biomedical domain. In this paper, we propose an idea for out-of-distribution generalization of chest X-ray pathologies that uses a simple balanced batch sampling technique. We observed that balanced sampling between the multiple training datasets improves the performance over baseline models trained without balancing.
[ { "created": "Mon, 27 Dec 2021 15:28:01 GMT", "version": "v1" }, { "created": "Tue, 28 Dec 2021 02:36:40 GMT", "version": "v2" } ]
2021-12-30
[ [ "Tetteh", "Enoch", "" ], [ "Viviano", "Joseph", "" ], [ "Bengio", "Yoshua", "" ], [ "Krueger", "David", "" ], [ "Cohen", "Joseph Paul", "" ] ]
Learning models that generalize under different distribution shifts in medical imaging has been a long-standing research challenge. There have been several proposals for efficient and robust visual representation learning among vision research practitioners, especially in the sensitive and critical biomedical domain. In this paper, we propose an idea for out-of-distribution generalization of chest X-ray pathologies that uses a simple balanced batch sampling technique. We observed that balanced sampling between the multiple training datasets improves the performance over baseline models trained without balancing.
2104.13298
Yixiao Ge
Yixiao Ge, Xiao Zhang, Ching Lam Choi, Ka Chun Cheung, Peipei Zhao, Feng Zhu, Xiaogang Wang, Rui Zhao, Hongsheng Li
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Project Page: https://geyixiao.com/projects/bake
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies of knowledge distillation have discovered that ensembling the "dark knowledge" from multiple teachers or students contributes to creating better soft targets for training, but at the cost of significantly more computations and/or parameters. In this work, we present BAtch Knowledge Ensembling (BAKE) to produce refined soft targets for anchor images by propagating and ensembling the knowledge of the other samples in the same mini-batch. Specifically, for each sample of interest, the propagation of knowledge is weighted in accordance with the inter-sample affinities, which are estimated on-the-fly with the current network. The propagated knowledge can then be ensembled to form a better soft target for distillation. In this way, our BAKE framework achieves online knowledge ensembling across multiple samples with only a single network. It requires minimal computational and memory overhead compared to existing knowledge ensembling methods. Extensive experiments demonstrate that the lightweight yet effective BAKE consistently boosts the classification performance of various architectures on multiple datasets, e.g., a significant +0.7% gain of Swin-T on ImageNet with only +1.5% computational overhead and zero additional parameters. BAKE not only improves the vanilla baselines but also surpasses the single-network state of the art on all the benchmarks.
[ { "created": "Tue, 27 Apr 2021 16:11:45 GMT", "version": "v1" }, { "created": "Sat, 20 Nov 2021 09:22:24 GMT", "version": "v2" } ]
2021-11-23
[ [ "Ge", "Yixiao", "" ], [ "Zhang", "Xiao", "" ], [ "Choi", "Ching Lam", "" ], [ "Cheung", "Ka Chun", "" ], [ "Zhao", "Peipei", "" ], [ "Zhu", "Feng", "" ], [ "Wang", "Xiaogang", "" ], [ "Zhao", "Rui", "" ], [ "Li", "Hongsheng", "" ] ]
Recent studies of knowledge distillation have discovered that ensembling the "dark knowledge" from multiple teachers or students contributes to creating better soft targets for training, but at the cost of significantly more computations and/or parameters. In this work, we present BAtch Knowledge Ensembling (BAKE) to produce refined soft targets for anchor images by propagating and ensembling the knowledge of the other samples in the same mini-batch. Specifically, for each sample of interest, the propagation of knowledge is weighted in accordance with the inter-sample affinities, which are estimated on-the-fly with the current network. The propagated knowledge can then be ensembled to form a better soft target for distillation. In this way, our BAKE framework achieves online knowledge ensembling across multiple samples with only a single network. It requires minimal computational and memory overhead compared to existing knowledge ensembling methods. Extensive experiments demonstrate that the lightweight yet effective BAKE consistently boosts the classification performance of various architectures on multiple datasets, e.g., a significant +0.7% gain of Swin-T on ImageNet with only +1.5% computational overhead and zero additional parameters. BAKE not only improves the vanilla baselines but also surpasses the single-network state of the art on all the benchmarks.
2304.02814
Hongwei Xu
Wei Chen and HongWei Xu and Jelo Wang
4D Agnostic Real-Time Facial Animation Pipeline for Desktop Scenarios
7pages, 5 figures
null
null
null
cs.GR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a high-precision real-time facial animation pipeline suitable for animators to use on their desktops. This pipeline is about to be launched in FACEGOOD's Avatary\footnote{https://www.avatary.com/} software, which will accelerate animators' productivity. The pipeline differs from professional head-mounted facial capture solutions in that it only requires the use of a consumer-grade 3D camera on the desk to achieve high-precision real-time facial capture. The system enables animators to create high-quality facial animations with ease and speed, while reducing the cost and complexity of traditional facial capture solutions. Our approach has the potential to revolutionize the way facial animation is done in the entertainment industry.
[ { "created": "Thu, 6 Apr 2023 01:32:58 GMT", "version": "v1" } ]
2023-04-07
[ [ "Chen", "Wei", "" ], [ "Xu", "HongWei", "" ], [ "Wang", "Jelo", "" ] ]
We present a high-precision real-time facial animation pipeline suitable for animators to use on their desktops. This pipeline is about to be launched in FACEGOOD's Avatary\footnote{https://www.avatary.com/} software, which will accelerate animators' productivity. The pipeline differs from professional head-mounted facial capture solutions in that it only requires the use of a consumer-grade 3D camera on the desk to achieve high-precision real-time facial capture. The system enables animators to create high-quality facial animations with ease and speed, while reducing the cost and complexity of traditional facial capture solutions. Our approach has the potential to revolutionize the way facial animation is done in the entertainment industry.
2006.04767
Elena Corina Grigore
Freddy A. Boulton and Elena Corina Grigore and Eric M. Wolff
Motion Prediction using Trajectory Sets and Self-Driving Domain Knowledge
null
null
null
null
cs.LG cs.CV cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the future motion of vehicles has been studied using various techniques, including stochastic policies, generative models, and regression. Recent work has shown that classification over a trajectory set, which approximates possible motions, achieves state-of-the-art performance and avoids issues like mode collapse. However, map information and the physical relationships between nearby trajectories are not fully exploited in this formulation. We build on classification-based approaches to motion prediction by adding an auxiliary loss that penalizes off-road predictions. This auxiliary loss can easily be pretrained using only map information (e.g., off-road area), which significantly improves performance on small datasets. We also investigate weighted cross-entropy losses to capture spatial-temporal relationships among trajectories. Our final contribution is a detailed comparison of classification and ordinal regression on two public self-driving datasets.
[ { "created": "Mon, 8 Jun 2020 17:37:15 GMT", "version": "v1" }, { "created": "Wed, 13 Jan 2021 20:41:54 GMT", "version": "v2" } ]
2021-01-15
[ [ "Boulton", "Freddy A.", "" ], [ "Grigore", "Elena Corina", "" ], [ "Wolff", "Eric M.", "" ] ]
Predicting the future motion of vehicles has been studied using various techniques, including stochastic policies, generative models, and regression. Recent work has shown that classification over a trajectory set, which approximates possible motions, achieves state-of-the-art performance and avoids issues like mode collapse. However, map information and the physical relationships between nearby trajectories are not fully exploited in this formulation. We build on classification-based approaches to motion prediction by adding an auxiliary loss that penalizes off-road predictions. This auxiliary loss can easily be pretrained using only map information (e.g., off-road area), which significantly improves performance on small datasets. We also investigate weighted cross-entropy losses to capture spatial-temporal relationships among trajectories. Our final contribution is a detailed comparison of classification and ordinal regression on two public self-driving datasets.
2310.09874
Jiahao Wu
Jiahao Wu, Qijiong Liu, Hengchang Hu, Wenqi Fan, Shengcai Liu, Qing Li, Xiao-Ming Wu, Ke Tang
TF-DCon: Leveraging Large Language Models (LLMs) to Empower Training-Free Dataset Condensation for Content-Based Recommendation
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern techniques in Content-based Recommendation (CBR) leverage item content information to provide personalized services to users, but suffer from resource-intensive training on large datasets. To address this issue, we explore dataset condensation for textual CBR in this paper. The goal of dataset condensation is to synthesize a small yet informative dataset, upon which models can achieve performance comparable to those trained on large datasets. While existing condensation approaches are tailored to classification tasks on continuous data such as images or embeddings, directly applying them to CBR has limitations. To bridge this gap, we investigate efficient dataset condensation for content-based recommendation. Inspired by the remarkable abilities of large language models (LLMs) in text comprehension and generation, we leverage LLMs to empower the generation of textual content during condensation. To handle the interaction data involving both users and items, we devise a dual-level condensation method: content-level and user-level. At the content level, we utilize LLMs to condense all contents of an item into a new informative title. At the user level, we design a clustering-based synthesis module, where we first utilize LLMs to extract user interests. Then, the user interests and user embeddings are incorporated to condense users and generate interactions for condensed users. Notably, the condensation paradigm of this method is forward and free from iterative optimization on the synthesized dataset. Extensive experiments on three real-world datasets substantiate the efficacy of the proposed method. In particular, we are able to approximate up to 97% of the original performance while reducing the dataset size by 95% (e.g., on the MIND dataset).
[ { "created": "Sun, 15 Oct 2023 16:15:07 GMT", "version": "v1" }, { "created": "Wed, 1 Nov 2023 19:02:09 GMT", "version": "v2" }, { "created": "Fri, 12 Jan 2024 07:16:42 GMT", "version": "v3" } ]
2024-01-15
[ [ "Wu", "Jiahao", "" ], [ "Liu", "Qijiong", "" ], [ "Hu", "Hengchang", "" ], [ "Fan", "Wenqi", "" ], [ "Liu", "Shengcai", "" ], [ "Li", "Qing", "" ], [ "Wu", "Xiao-Ming", "" ], [ "Tang", "Ke", "" ] ]
Modern techniques in Content-based Recommendation (CBR) leverage item content information to provide personalized services to users, but suffer from resource-intensive training on large datasets. To address this issue, we explore dataset condensation for textual CBR in this paper. The goal of dataset condensation is to synthesize a small yet informative dataset, upon which models can achieve performance comparable to those trained on large datasets. While existing condensation approaches are tailored to classification tasks on continuous data such as images or embeddings, directly applying them to CBR has limitations. To bridge this gap, we investigate efficient dataset condensation for content-based recommendation. Inspired by the remarkable abilities of large language models (LLMs) in text comprehension and generation, we leverage LLMs to empower the generation of textual content during condensation. To handle the interaction data involving both users and items, we devise a dual-level condensation method: content-level and user-level. At the content level, we utilize LLMs to condense all contents of an item into a new informative title. At the user level, we design a clustering-based synthesis module, where we first utilize LLMs to extract user interests. Then, the user interests and user embeddings are incorporated to condense users and generate interactions for condensed users. Notably, the condensation paradigm of this method is forward and free from iterative optimization on the synthesized dataset. Extensive experiments on three real-world datasets substantiate the efficacy of the proposed method. In particular, we are able to approximate up to 97% of the original performance while reducing the dataset size by 95% (e.g., on the MIND dataset).
cs/0411007
Benoit Masson
Julien Cervelle (IGM), Enrico Formenti (I3S), Benoit Masson (I3S)
Basic properties for sand automata
submitted to STACS 2005
null
null
null
cs.CC
null
We prove several results about the relations between injectivity and surjectivity for sand automata. Moreover, we begin the exploration of the dynamical behavior of sand automata by proving that the property of nilpotency is undecidable. We believe that the proof technique used for this last result may prove useful for many other results in this context.
[ { "created": "Thu, 4 Nov 2004 12:33:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cervelle", "Julien", "", "IGM" ], [ "Formenti", "Enrico", "", "I3S" ], [ "Masson", "Benoit", "", "I3S" ] ]
We prove several results about the relations between injectivity and surjectivity for sand automata. Moreover, we begin the exploration of the dynamical behavior of sand automata by proving that the property of nilpotency is undecidable. We believe that the proof technique used for this last result may prove useful for many other results in this context.
cs/0606048
Rudi Cilibrasi
Rudi Cilibrasi and Paul M.B. Vitanyi
A New Quartet Tree Heuristic for Hierarchical Clustering
22 pages, 14 figures
null
null
null
cs.DS cs.CV cs.DM math.ST physics.data-an q-bio.QM stat.TH
null
We consider the problem of constructing an optimal-weight tree from the 3*(n choose 4) weighted quartet topologies on n objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as non-optimal topologies). We present a heuristic for reconstructing the optimal-weight tree, and a canonical manner to derive the quartet-topology weights from a given distance matrix. The method repeatedly transforms a bifurcating tree, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. This contrasts with other heuristic search methods from biological phylogeny, like DNAML or quartet puzzling, which repeatedly and incrementally construct a solution from a random order of objects, and subsequently add agreement values.
[ { "created": "Sun, 11 Jun 2006 16:05:51 GMT", "version": "v1" } ]
2011-11-09
[ [ "Cilibrasi", "Rudi", "" ], [ "Vitanyi", "Paul M. B.", "" ] ]
We consider the problem of constructing an optimal-weight tree from the 3*(n choose 4) weighted quartet topologies on n objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as non-optimal topologies). We present a heuristic for reconstructing the optimal-weight tree, and a canonical manner to derive the quartet-topology weights from a given distance matrix. The method repeatedly transforms a bifurcating tree, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. This contrasts with other heuristic search methods from biological phylogeny, like DNAML or quartet puzzling, which repeatedly and incrementally construct a solution from a random order of objects, and subsequently add agreement values.
2401.09721
Ryosuke Watanabe
Ryosuke Watanabe and Keisuke Nonaka and Eduardo Pavez and Tatsuya Kobayashi and Antonio Ortega
Fast graph-based denoising for point cloud color information
Published in the proceeding of 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024)
null
10.1109/ICASSP48485.2024.10446200
null
cs.CV eess.IV eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point clouds are utilized in various 3D applications such as cross-reality (XR) and realistic 3D displays. In some applications, e.g., live streaming using a 3D point cloud, real-time point cloud denoising methods are required to enhance the visual quality. However, conventional high-precision denoising methods cannot be executed in real time for large-scale point clouds owing to the complexity of graph construction with K nearest neighbors and noise level estimation. This paper proposes a fast graph-based denoising (FGBD) method for large-scale point clouds. First, high-speed graph construction is achieved by scanning a point cloud in various directions and searching adjacent neighborhoods on the scanning lines. Second, we propose a fast noise level estimation method using eigenvalues of the covariance matrix on a graph. Finally, we also propose a new low-cost filter selection method to enhance denoising accuracy and compensate for the degradation caused by the acceleration algorithms. In our experiments, we succeeded in reducing the processing time dramatically while maintaining accuracy relative to conventional denoising methods. Denoising was performed at 30 fps, with frames containing approximately 1 million points.
[ { "created": "Thu, 18 Jan 2024 04:51:41 GMT", "version": "v1" }, { "created": "Fri, 19 Jan 2024 04:07:33 GMT", "version": "v2" }, { "created": "Sat, 15 Jun 2024 05:38:29 GMT", "version": "v3" } ]
2024-06-18
[ [ "Watanabe", "Ryosuke", "" ], [ "Nonaka", "Keisuke", "" ], [ "Pavez", "Eduardo", "" ], [ "Kobayashi", "Tatsuya", "" ], [ "Ortega", "Antonio", "" ] ]
Point clouds are utilized in various 3D applications such as cross-reality (XR) and realistic 3D displays. In some applications, e.g., live streaming using a 3D point cloud, real-time point cloud denoising methods are required to enhance the visual quality. However, conventional high-precision denoising methods cannot be executed in real time for large-scale point clouds owing to the complexity of graph construction with K nearest neighbors and noise level estimation. This paper proposes a fast graph-based denoising (FGBD) method for large-scale point clouds. First, high-speed graph construction is achieved by scanning a point cloud in various directions and searching adjacent neighborhoods on the scanning lines. Second, we propose a fast noise level estimation method using eigenvalues of the covariance matrix on a graph. Finally, we also propose a new low-cost filter selection method to enhance denoising accuracy and compensate for the degradation caused by the acceleration algorithms. In our experiments, we succeeded in reducing the processing time dramatically while maintaining accuracy relative to conventional denoising methods. Denoising was performed at 30 fps, with frames containing approximately 1 million points.
2208.12646
Keisuke Fujii
Tomohiro Suzuki, Kazuya Takeda, Keisuke Fujii
Automatic detection of faults in race walking from a smartphone camera: a comparison of an Olympic medalist and university athletes
16 pages, 9 figures
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Automatic fault detection is a major challenge in many sports. In race walking, referees visually judge faults according to the rules. Hence, ensuring objectivity and fairness while judging is important. To address this issue, some studies have attempted to use sensors and machine learning to automatically detect faults. However, there are problems associated with sensor attachments and equipment such as high-speed cameras, which conflict with the visual judgement of referees, and with the interpretability of the fault detection models. In this study, we propose a fault detection system based on non-contact measurement. We use pose estimation and machine learning models trained on the judgements of multiple qualified referees to realize fair fault judgement. We verify the system using smartphone videos of normal race walking and walking with intentional faults by several athletes, including a medalist of the Tokyo Olympics. The validation results show that the proposed system detected faults with an average accuracy of over 90%. We also reveal that the machine learning model detects faults according to the rules of race walking. In addition, the intentional faulty walking movement of the medalist was different from that of university walkers. This finding informs the realization of a more general fault detection model. The code and data are available at https://github.com/SZucchini/racewalk-aijudge.
[ { "created": "Wed, 24 Aug 2022 07:04:36 GMT", "version": "v1" } ]
2022-08-29
[ [ "Suzuki", "Tomohiro", "" ], [ "Takeda", "Kazuya", "" ], [ "Fujii", "Keisuke", "" ] ]
Automatic fault detection is a major challenge in many sports. In race walking, referees visually judge faults according to the rules. Hence, ensuring objectivity and fairness while judging is important. To address this issue, some studies have attempted to use sensors and machine learning to automatically detect faults. However, there are problems associated with sensor attachments and equipment such as high-speed cameras, which conflict with the visual judgement of referees, and with the interpretability of the fault detection models. In this study, we propose a fault detection system based on non-contact measurement. We use pose estimation and machine learning models trained on the judgements of multiple qualified referees to realize fair fault judgement. We verify the system using smartphone videos of normal race walking and walking with intentional faults by several athletes, including a medalist of the Tokyo Olympics. The validation results show that the proposed system detected faults with an average accuracy of over 90%. We also reveal that the machine learning model detects faults according to the rules of race walking. In addition, the intentional faulty walking movement of the medalist was different from that of university walkers. This finding informs the realization of a more general fault detection model. The code and data are available at https://github.com/SZucchini/racewalk-aijudge.
2102.12220
Yuanxin Wu
Wei Ouyang, Yuanxin Wu
A Trident Quaternion Framework for Inertial-based Navigation Part II: Error Models and Application to Initial Alignment
17 pages, 13 figures
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work deals with error models for the trident quaternion framework proposed in the companion paper (Part I) and further uses them to investigate odometer-aided static/in-motion inertial navigation attitude alignment for land vehicles. By linearizing the trident quaternion kinematic equation, the left and right trident quaternion error models are obtained, which are found to be equivalent to those derived from the group-affine property. The two error models are used to design their corresponding extended Kalman filters (EKF), namely, the left-quaternion EKF (LQEKF) and the right-quaternion EKF (RQEKF). Simulations and field tests are conducted to evaluate their actual performance. Owing to their high estimation consistency, the L/RQEKF converge much faster in static alignment than the traditional error-model-based EKF, even under arbitrarily large heading initialization. For in-motion alignment, the L/RQEKF possess a much larger convergence region than the traditional EKF does, although they still require the aid of attitude initialization to avoid large initial attitude errors.
[ { "created": "Wed, 24 Feb 2021 11:21:03 GMT", "version": "v1" }, { "created": "Sun, 16 May 2021 08:07:48 GMT", "version": "v2" } ]
2021-05-18
[ [ "Ouyang", "Wei", "" ], [ "Wu", "Yuanxin", "" ] ]
This work deals with error models for the trident quaternion framework proposed in the companion paper (Part I) and further uses them to investigate odometer-aided static/in-motion inertial navigation attitude alignment for land vehicles. By linearizing the trident quaternion kinematic equation, the left and right trident quaternion error models are obtained, which are found to be equivalent to those derived from the group-affine property. The two error models are used to design their corresponding extended Kalman filters (EKF), namely, the left-quaternion EKF (LQEKF) and the right-quaternion EKF (RQEKF). Simulations and field tests are conducted to evaluate their actual performance. Owing to their high estimation consistency, the L/RQEKF converge much faster in static alignment than the traditional error-model-based EKF, even under arbitrarily large heading initialization. For in-motion alignment, the L/RQEKF possess a much larger convergence region than the traditional EKF does, although they still require the aid of attitude initialization to avoid large initial attitude errors.
1804.10025
Anton Kocheturov
Anton Kocheturov, Petar Momcilovic, Azra Bihorac, Panos M. Pardalos
Extended Vertical Lists for Temporal Pattern Mining from Multivariate Time Series
16 pages, 7 figures, 2 tables
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal Pattern Mining (TPM) is the problem of mining predictive complex temporal patterns from multivariate time series in a supervised setting. We develop a new method called Fast Temporal Pattern Mining with Extended Vertical Lists. This method utilizes an extension of the Apriori property which requires a more complex pattern to appear within records only at places where all of its subpatterns are detected as well. The approach is based on a novel data structure called the Extended Vertical List that tracks positions of the first state of the pattern inside records. Extensive computational results indicate that the new method performs significantly faster than the previous version of the algorithm for TPM. However, the speed-up comes at the expense of memory usage.
[ { "created": "Thu, 26 Apr 2018 12:49:26 GMT", "version": "v1" } ]
2018-04-27
[ [ "Kocheturov", "Anton", "" ], [ "Momcilovic", "Petar", "" ], [ "Bihorac", "Azra", "" ], [ "Pardalos", "Panos M.", "" ] ]
Temporal Pattern Mining (TPM) is the problem of mining predictive complex temporal patterns from multivariate time series in a supervised setting. We develop a new method called Fast Temporal Pattern Mining with Extended Vertical Lists. This method utilizes an extension of the Apriori property which requires a more complex pattern to appear within records only at places where all of its subpatterns are detected as well. The approach is based on a novel data structure called the Extended Vertical List that tracks positions of the first state of the pattern inside records. Extensive computational results indicate that the new method performs significantly faster than the previous version of the algorithm for TPM. However, the speed-up comes at the expense of memory usage.
2402.03379
Yinqiu Huang
Yinqiu Huang, Shuli Wang, Min Gao, Xue Wei, Changhao Li, Chuan Luo, Yinhua Zhu, Xiong Xiao, Yi Luo
Entire Chain Uplift Modeling with Context-Enhanced Learning for Intelligent Marketing
Accepted by WWW2024
null
null
null
cs.IR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Uplift modeling, vital in online marketing, seeks to accurately measure the impact of various strategies, such as coupons or discounts, on different users by predicting the Individual Treatment Effect (ITE). In an e-commerce setting, user behavior follows a defined sequential chain, including impression, click, and conversion. Marketing strategies exert varied uplift effects at each stage within this chain, impacting metrics like click-through and conversion rate. Despite its utility, existing research has neglected to consider the inter-task impacts across all stages within a specific treatment and has insufficiently utilized the treatment information, potentially introducing substantial bias into subsequent marketing decisions. We identify these two issues as the chain-bias problem and the treatment-unadaptive problem. This paper introduces the Entire Chain UPlift method with context-enhanced learning (ECUP), devised to tackle these issues. ECUP consists of two primary components: 1) the Entire Chain-Enhanced Network, which utilizes user behavior patterns to estimate ITE throughout the entire chain space, models the various impacts of treatments on each task, and integrates task prior information to enhance context awareness across all stages, capturing the impact of treatment on different tasks, and 2) the Treatment-Enhanced Network, which facilitates fine-grained treatment modeling through bit-level feature interactions, thereby enabling adaptive feature adjustment. Extensive experiments on public and industrial datasets validate ECUP's effectiveness. Moreover, ECUP has been deployed on the Meituan food delivery platform, serving millions of daily active users, with the related dataset released for future research.
[ { "created": "Sun, 4 Feb 2024 03:30:25 GMT", "version": "v1" } ]
2024-02-07
[ [ "Huang", "Yinqiu", "" ], [ "Wang", "Shuli", "" ], [ "Gao", "Min", "" ], [ "Wei", "Xue", "" ], [ "Li", "Changhao", "" ], [ "Luo", "Chuan", "" ], [ "Zhu", "Yinhua", "" ], [ "Xiao", "Xiong", "" ], [ "Luo", "Yi", "" ] ]
Uplift modeling, vital in online marketing, seeks to accurately measure the impact of various strategies, such as coupons or discounts, on different users by predicting the Individual Treatment Effect (ITE). In an e-commerce setting, user behavior follows a defined sequential chain, including impression, click, and conversion. Marketing strategies exert varied uplift effects at each stage within this chain, impacting metrics like click-through and conversion rate. Despite its utility, existing research has neglected to consider the inter-task impacts across all stages within a specific treatment and has insufficiently utilized the treatment information, potentially introducing substantial bias into subsequent marketing decisions. We identify these two issues as the chain-bias problem and the treatment-unadaptive problem. This paper introduces the Entire Chain UPlift method with context-enhanced learning (ECUP), devised to tackle these issues. ECUP consists of two primary components: 1) the Entire Chain-Enhanced Network, which utilizes user behavior patterns to estimate ITE throughout the entire chain space, models the various impacts of treatments on each task, and integrates task prior information to enhance context awareness across all stages, capturing the impact of treatment on different tasks, and 2) the Treatment-Enhanced Network, which facilitates fine-grained treatment modeling through bit-level feature interactions, thereby enabling adaptive feature adjustment. Extensive experiments on public and industrial datasets validate ECUP's effectiveness. Moreover, ECUP has been deployed on the Meituan food delivery platform, serving millions of daily active users, with the related dataset released for future research.
2304.00426
Yifan Zhao
Zeyin Song, Yifan Zhao, Yujun Shi, Peixi Peng, Li Yuan, Yonghong Tian
Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning
Accepted by CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot class-incremental learning (FSCIL) aims at learning to classify new classes continually from limited samples without forgetting the old classes. The mainstream framework tackling FSCIL is first to adopt the cross-entropy (CE) loss for training at the base session, then freeze the feature extractor to adapt to new classes. However, in this work, we find that the CE loss is not ideal for the base session training as it suffers from poor class separation in terms of representations, which further degrades generalization to novel classes. One tempting method to mitigate this problem is to apply an additional naive supervised contrastive learning (SCL) in the base session. Unfortunately, we find that although SCL can create a slightly better representation separation among different base classes, it still struggles to separate base classes and new classes. Inspired by these observations, we propose the Semantic-Aware Virtual Contrastive model (SAVC), a novel method that facilitates separation between new classes and base classes by introducing virtual classes to SCL. These virtual classes, which are generated via pre-defined transformations, not only act as placeholders for unseen classes in the representation space, but also provide diverse semantic information. By learning to recognize and contrast in the fantasy space fostered by virtual classes, our SAVC significantly boosts base class separation and novel class generalization, achieving new state-of-the-art performance on the three widely-used FSCIL benchmark datasets. Code is available at: https://github.com/zysong0113/SAVC.
[ { "created": "Sun, 2 Apr 2023 01:51:24 GMT", "version": "v1" } ]
2023-04-04
[ [ "Song", "Zeyin", "" ], [ "Zhao", "Yifan", "" ], [ "Shi", "Yujun", "" ], [ "Peng", "Peixi", "" ], [ "Yuan", "Li", "" ], [ "Tian", "Yonghong", "" ] ]
Few-shot class-incremental learning (FSCIL) aims at learning to classify new classes continually from limited samples without forgetting the old classes. The mainstream framework tackling FSCIL is first to adopt the cross-entropy (CE) loss for training at the base session, then freeze the feature extractor to adapt to new classes. However, in this work, we find that the CE loss is not ideal for the base session training as it suffers from poor class separation in terms of representations, which further degrades generalization to novel classes. One tempting method to mitigate this problem is to apply an additional naive supervised contrastive learning (SCL) in the base session. Unfortunately, we find that although SCL can create a slightly better representation separation among different base classes, it still struggles to separate base classes and new classes. Inspired by these observations, we propose the Semantic-Aware Virtual Contrastive model (SAVC), a novel method that facilitates separation between new classes and base classes by introducing virtual classes to SCL. These virtual classes, which are generated via pre-defined transformations, not only act as placeholders for unseen classes in the representation space, but also provide diverse semantic information. By learning to recognize and contrast in the fantasy space fostered by virtual classes, our SAVC significantly boosts base class separation and novel class generalization, achieving new state-of-the-art performance on the three widely-used FSCIL benchmark datasets. Code is available at: https://github.com/zysong0113/SAVC.
1912.07381
Q.Vera Liao
Q. Vera Liao, Michael Muller
Enabling Value Sensitive AI Systems through Participatory Design Fictions
null
null
null
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two general routes have been followed to develop artificial agents that are sensitive to human values---a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories. Although both approaches have made exciting scientific progress, they may face challenges when applied to the current development practices of AI systems, which require the understanding of the specific domains and specific stakeholders involved. In this work, we bring together perspectives from the human-computer interaction (HCI) community, where designing technologies sensitive to user values has been a longstanding focus. We highlight several well-established areas focusing on developing empirical methods for inquiring into user values. Based on these methods, we propose participatory design fictions to study user values involved in AI systems and present preliminary results from a case study. With this paper, we invite the consideration of user-centered value inquiry and value learning.
[ { "created": "Fri, 13 Dec 2019 01:16:03 GMT", "version": "v1" } ]
2019-12-17
[ [ "Liao", "Q. Vera", "" ], [ "Muller", "Michael", "" ] ]
Two general routes have been followed to develop artificial agents that are sensitive to human values---a top-down approach to encode values into the agents, and a bottom-up approach to learn from human actions, whether from real-world interactions or stories. Although both approaches have made exciting scientific progress, they may face challenges when applied to the current development practices of AI systems, which require the understanding of the specific domains and specific stakeholders involved. In this work, we bring together perspectives from the human-computer interaction (HCI) community, where designing technologies sensitive to user values has been a longstanding focus. We highlight several well-established areas focusing on developing empirical methods for inquiring into user values. Based on these methods, we propose participatory design fictions to study user values involved in AI systems and present preliminary results from a case study. With this paper, we invite the consideration of user-centered value inquiry and value learning.
2007.06759
Noranart Vesdapunt
Bindita Chaudhuri, Noranart Vesdapunt, Linda Shapiro, Baoyuan Wang
Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
ECCV 2020 (spotlight), webpage: https://homes.cs.washington.edu/~bindita/personalizedfacemodeling.html
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional methods for image-based 3D face reconstruction and facial motion retargeting fit a 3D morphable model (3DMM) to the face, which has limited modeling capacity and fails to generalize well to in-the-wild data. Use of deformation transfer or multilinear tensors as a personalized 3DMM for blendshape interpolation does not address the fact that facial expressions result in different local and global skin deformations in different persons. Moreover, existing methods learn a single albedo per user, which is not enough to capture the expression-specific skin reflectance variations. We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters from a large corpus of in-the-wild videos of user expressions. Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections on top of a 3DMM prior. We introduce novel constraints to ensure that the corrected blendshapes retain their semantic meanings and the reconstructed geometry is disentangled from the albedo. Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions and efficiently decouples the learned face model from facial motion, resulting in more accurate face reconstruction and facial motion retargeting compared to state-of-the-art methods.
[ { "created": "Tue, 14 Jul 2020 01:30:14 GMT", "version": "v1" }, { "created": "Fri, 17 Jul 2020 23:08:43 GMT", "version": "v2" } ]
2020-07-21
[ [ "Chaudhuri", "Bindita", "" ], [ "Vesdapunt", "Noranart", "" ], [ "Shapiro", "Linda", "" ], [ "Wang", "Baoyuan", "" ] ]
Traditional methods for image-based 3D face reconstruction and facial motion retargeting fit a 3D morphable model (3DMM) to the face, which has limited modeling capacity and fails to generalize well to in-the-wild data. Use of deformation transfer or multilinear tensors as a personalized 3DMM for blendshape interpolation does not address the fact that facial expressions result in different local and global skin deformations in different persons. Moreover, existing methods learn a single albedo per user, which is not enough to capture the expression-specific skin reflectance variations. We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters from a large corpus of in-the-wild videos of user expressions. Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections on top of a 3DMM prior. We introduce novel constraints to ensure that the corrected blendshapes retain their semantic meanings and the reconstructed geometry is disentangled from the albedo. Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions and efficiently decouples the learned face model from facial motion, resulting in more accurate face reconstruction and facial motion retargeting compared to state-of-the-art methods.
1002.1300
Mukul Agarwal
Mukul Agarwal, Sanjoy Mitter
Architecture for communication with a fidelity criterion in unknown networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that in order to communicate independent sources (this is the unicast problem) between various users over an unknown medium to within various distortion levels, it is sufficient to consider source-channel separation based architectures: architectures which first compress the sources to within the corresponding distortion levels followed by reliable communication over the unknown medium. We are reducing the problem of universal rate-distortion communication of independent sources over a network to the universal reliable communication problem over networks. This is a reductionist view. We are not solving the reliable communication problem in networks.
[ { "created": "Fri, 5 Feb 2010 17:21:42 GMT", "version": "v1" }, { "created": "Fri, 21 Jan 2011 03:57:20 GMT", "version": "v2" } ]
2011-01-24
[ [ "Agarwal", "Mukul", "" ], [ "Mitter", "Sanjoy", "" ] ]
2307.06632
Hailiang Tang
Hailiang Tang, Tisheng Zhang, Xiaoji Niu, Liqiang Wang, Linfu Wei, and Jingnan Liu
FF-LINS: A Consistent Frame-to-Frame Solid-State-LiDAR-Inertial State Estimator
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing LiDAR-inertial navigation systems are based on frame-to-map registration, leading to inconsistency in state estimation. The newest solid-state LiDARs with a non-repetitive scanning pattern make it possible to achieve a consistent LiDAR-inertial estimator by employing frame-to-frame data association. In this letter, we propose a robust and consistent frame-to-frame LiDAR-inertial navigation system (FF-LINS) for solid-state LiDARs. With INS-centric LiDAR frame processing, the keyframe point-cloud map is built using the accumulated point clouds to construct the frame-to-frame data association. The LiDAR frame-to-frame and inertial measurement unit (IMU) preintegration measurements are tightly integrated using factor graph optimization, with online calibration of the LiDAR-IMU extrinsic and time-delay parameters. Experiments on public and private datasets demonstrate that the proposed FF-LINS achieves superior accuracy and robustness compared to state-of-the-art systems. Moreover, the LiDAR-IMU extrinsic and time-delay parameters are estimated effectively, and the online calibration notably improves the pose accuracy. The proposed FF-LINS and the employed datasets are open-sourced on GitHub (https://github.com/i2Nav-WHU/FF-LINS).
[ { "created": "Thu, 13 Jul 2023 08:59:39 GMT", "version": "v1" } ]
2023-07-14
[ [ "Tang", "Hailiang", "" ], [ "Zhang", "Tisheng", "" ], [ "Niu", "Xiaoji", "" ], [ "Wang", "Liqiang", "" ], [ "Wei", "Linfu", "" ], [ "Liu", "Jingnan", "" ] ]
2305.00410
Rahul Meshram Dr.
Vishesh Mittal, Rahul Meshram, Deepak Dev and Surya Prakash
Indexability of Finite State Restless Multi-Armed Bandit and Rollout Policy
15 Pages, submitted to conference
null
null
null
cs.LG cs.SY eess.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the finite-state restless multi-armed bandit problem, in which the decision maker can act on M out of N bandits in each time step. Playing an arm (an active arm) yields state-dependent rewards based on the action, and when the arm is not played, it also provides rewards based on the state and action. The objective of the decision maker is to maximize the infinite-horizon discounted reward. The classical approach to restless bandits is the Whittle index policy, in which the M arms with the highest indices are played at each time step. Here, one decouples the restless bandit problem by analyzing a relaxed, constrained restless bandit problem; via Lagrangian relaxation, the problem decouples into N single-armed restless bandit problems. We analyze the single-armed restless bandit. In order to study the Whittle index policy, we show structural results on the single-armed bandit model. We define indexability and show indexability in special cases. We propose an alternative approach to verifying the indexability criteria for a single-armed bandit model using a value iteration algorithm, and we demonstrate the performance of our algorithm with different examples. We provide insight into the conditions for indexability of restless bandits under different structural assumptions on the transition probability and reward matrices. We also study an online rollout policy, discuss the computational complexity of the algorithm, and compare it with the complexity of index computation. Numerical examples illustrate that the index policy and rollout policy perform better than the myopic policy.
[ { "created": "Sun, 30 Apr 2023 06:53:44 GMT", "version": "v1" } ]
2023-05-02
[ [ "Mittal", "Vishesh", "" ], [ "Meshram", "Rahul", "" ], [ "Dev", "Deepak", "" ], [ "Prakash", "Surya", "" ] ]
2205.15812
Iknoor Singh
Iknoor Singh, Yue Li, Melissa Thong, Carolina Scarton
GateNLP-UShef at SemEval-2022 Task 8: Entity-Enriched Siamese Transformer for Multilingual News Article Similarity
Accepted at SemEval-2022 Task 8: Multilingual News Article Similarity (co-located with NAACL 2022)
null
null
null
cs.CL cs.AI cs.CY cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the second-placed system on the leaderboard of SemEval-2022 Task 8: Multilingual News Article Similarity. We propose an entity-enriched Siamese Transformer which computes news article similarity based on different sub-dimensions, such as the shared narrative, entities, location and time of the event discussed in the news article. Our system exploits a Siamese network architecture using a Transformer encoder to learn document-level representations for the purpose of capturing the narrative together with the auxiliary entity-based features extracted from the news articles. The intuition behind using all these features together is to capture the similarity between news articles at different granularity levels and to assess the extent to which different news outlets write about "the same events". Our experimental results and detailed ablation study demonstrate the effectiveness and the validity of our proposed method.
[ { "created": "Tue, 31 May 2022 14:11:45 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2022 14:28:37 GMT", "version": "v2" } ]
2022-06-30
[ [ "Singh", "Iknoor", "" ], [ "Li", "Yue", "" ], [ "Thong", "Melissa", "" ], [ "Scarton", "Carolina", "" ] ]
2210.15723
Stefan Wojcik
Stefan Wojcik and Sophie Hilgard and Nick Judd and Delia Mocanu and Stephen Ragain and M.B. Fallin Hunzaker and Keith Coleman and Jay Baxter
Birdwatch: Crowd Wisdom and Bridging Algorithms can Inform Understanding and Reduce the Spread of Misinformation
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
We present an approach for selecting objectively informative and subjectively helpful annotations to social media posts. We draw on data from an online environment where contributors annotate misinformation and simultaneously rate the contributions of others. Our algorithm uses a matrix-factorization (MF) based approach to identify annotations that appeal broadly across heterogeneous user groups - sometimes referred to as "bridging-based ranking." We pair these data with a survey experiment in which individuals are randomly assigned to see annotations to posts. We find that annotations selected by the algorithm improve key indicators compared with overall average and crowd-generated baselines. Further, when deployed on Twitter, people who saw annotations selected through this bridging-based approach were significantly less likely to reshare social media posts than those who did not see the annotations.
[ { "created": "Thu, 27 Oct 2022 18:57:20 GMT", "version": "v1" } ]
2022-10-31
[ [ "Wojcik", "Stefan", "" ], [ "Hilgard", "Sophie", "" ], [ "Judd", "Nick", "" ], [ "Mocanu", "Delia", "" ], [ "Ragain", "Stephen", "" ], [ "Hunzaker", "M. B. Fallin", "" ], [ "Coleman", "Keith", "" ], [ "Baxter", "Jay", "" ] ]
1312.5547
Lassi A Liikkanen
Lassi A Liikkanen
Three Metrics for Measuring User Engagement with Online Media and a YouTube Case Study
4 pages, 1 figure, 3 tables, 2 appendixes
null
null
null
cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report discusses three metrics of user engagement with online media: Commenting frequency, Voting frequency, and Voting balance. These relative figures can be derived from established, basic statistics available for many services, prominently YouTube. The paper includes a case study of popular YouTube videos to illustrate the characteristics and usefulness of the measures. The study documents the range of observed values and their relationships. The empirical sample shows the three measures to be only moderately correlated with the original statistics despite the common numerators and denominators. The paper concludes by discussing future applications and the need to quantify user interaction with new media services.
[ { "created": "Thu, 19 Dec 2013 13:45:11 GMT", "version": "v1" }, { "created": "Thu, 10 Apr 2014 12:18:57 GMT", "version": "v2" } ]
2014-04-11
[ [ "Liikkanen", "Lassi A", "" ] ]
2006.11513
Haisen Zhang
Haijun Zhang, Haisen Zhang, Keping Long and George K. Karagiannidis
Deep Learning based Radio Resource Management in NOMA Networks: User Association, Subchannel and Power Allocation
to appear in IEEE Transactions on Network Science and Engineering
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of future wireless communication, the combination of NOMA technology and millimeter-wave (mmWave) technology has become a research hotspot. The application of NOMA in mmWave heterogeneous networks can meet the diverse needs of users in different applications and scenarios in future communications. In this paper, we propose a machine learning framework to deal with the user association, subchannel and power allocation problems in such a complex scenario. We focus on maximizing the energy efficiency (EE) of the system under the constraints of quality of service (QoS), interference limitation, and power limitation. Specifically, user association is solved through the Lagrange dual decomposition method, while semi-supervised learning and a deep neural network (DNN) are used for the subchannel and power allocation, respectively. In particular, unlabeled samples are introduced to improve the approximation and generalization ability for subchannel allocation. Simulations indicate that the proposed scheme can achieve higher EE with lower complexity.
[ { "created": "Sat, 20 Jun 2020 07:49:24 GMT", "version": "v1" } ]
2020-06-23
[ [ "Zhang", "Haijun", "" ], [ "Zhang", "Haisen", "" ], [ "Long", "Keping", "" ], [ "Karagiannidis", "George K.", "" ] ]
0911.1813
Aaron Roth
Aaron Roth, Tim Roughgarden
Interactive Privacy via the Median Mechanism
Appeared in STOC 2010
null
null
null
cs.CR cs.CC cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a new interactive differentially private mechanism -- the median mechanism -- for answering arbitrary predicate queries that arrive online. Relative to fixed accuracy and privacy constraints, this mechanism can answer exponentially more queries than the previously best known interactive privacy mechanism (the Laplace mechanism, which independently perturbs each query result). Our guarantee is almost the best possible, even for non-interactive privacy mechanisms. Conceptually, the median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting. We also give an efficient implementation of the median mechanism, with running time polynomial in the number of queries, the database size, and the domain size. This efficient implementation guarantees privacy for all input databases, and accurate query results for almost all input databases. The dependence of the privacy on the number of queries in this mechanism improves over that of the best previously known efficient mechanism by a super-polynomial factor, even in the non-interactive setting.
[ { "created": "Tue, 10 Nov 2009 03:55:44 GMT", "version": "v1" }, { "created": "Wed, 19 Jan 2011 16:09:18 GMT", "version": "v2" } ]
2011-01-20
[ [ "Roth", "Aaron", "" ], [ "Roughgarden", "Tim", "" ] ]
1611.07724
Frank Gurski
Carolin Albrecht, Frank Gurski, Jochen Rethmann, Eda Yilmaz
Knapsack Problems: A Parameterized Point of View
27 pages, 1 figure
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The knapsack problem (KP) is a famous NP-hard problem in combinatorial optimization. Its generalizations to multiple dimensions, the d-dimensional knapsack problem (d-KP), and to multiple knapsacks, the multiple knapsack problem (MKP), are also well-known problems. Since KP, d-KP, and MKP are integer-valued problems defined on inputs comprising various kinds of information, we study the fixed-parameter tractability of these problems. The idea behind fixed-parameter tractability is to split the complexity into two parts: one part that depends purely on the size of the input, and one part that depends on some parameter of the problem that tends to be small in practice. Further, we consider the closely related question of whether the sizes and the values can be reduced such that their bit-length is bounded polynomially or even constantly in a given parameter, i.e., we study the existence of kernelizations. We discuss the following parameters: the number of items, the threshold value for the profit, the sizes, the profits, the number d of dimensions, and the number m of knapsacks. We also consider the connection of parameterized knapsack problems to linear programming, approximation, and pseudo-polynomial algorithms.
[ { "created": "Wed, 23 Nov 2016 10:23:46 GMT", "version": "v1" } ]
2016-11-24
[ [ "Albrecht", "Carolin", "" ], [ "Gurski", "Frank", "" ], [ "Rethmann", "Jochen", "" ], [ "Yilmaz", "Eda", "" ] ]
1502.04014
Ivano Malavolta
Mirco Franzago, Ivano Malavolta, Henry Muccini
Stakeholders, Viewpoints and Languages of a Modelling Framework for the Design and Development of Data-Intensive Mobile Apps
Workshop MOBILEng 2014
null
null
MOBILEng/2014/03
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today millions of mobile apps are downloaded and used all over the world. Guidelines and best practices on how to design and develop mobile apps are being periodically released, mainly by mobile platform vendors and researchers. They cover different concerns, and refer to different technical and non-technical stakeholders. Still, mobile applications are developed with ad-hoc development processes, and on-paper best practices. In this paper we discuss a multi-view modelling framework supporting the collaborative design and development of mobile apps. The proposed framework embraces the Model-Driven Engineering methodology. This paper provides an overall view of the modelling framework in terms of its main stakeholders, viewpoints, and modelling languages.
[ { "created": "Fri, 13 Feb 2015 14:51:30 GMT", "version": "v1" }, { "created": "Fri, 27 Feb 2015 17:52:53 GMT", "version": "v2" } ]
2015-03-02
[ [ "Franzago", "Mirco", "" ], [ "Malavolta", "Ivano", "" ], [ "Muccini", "Henry", "" ] ]
1803.03525
Andrii Berezovskyi
Andrii Berezovskyi, Jad El-khoury, Omar Kacimi, and Fr\'ed\'eric Loiret
Improving lifecycle query in integrated toolchains using linked data and MQTT-based data warehousing
12 pages, workshop
null
null
null
cs.SE cs.MA cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of increasingly complex IoT systems requires large engineering environments. These environments generally consist of tools from different vendors and are not necessarily integrated well with each other. In order to automate various analyses, queries across resources from multiple tools have to be executed in parallel to the engineering activities. In this paper, we identify the necessary requirements on such a query capability and evaluate different architectures according to these requirements. We propose an improved lifecycle query architecture, which builds upon the existing Tracked Resource Set (TRS) protocol, and complements it with the MQTT messaging protocol in order to allow the data in the warehouse to be kept updated in real-time. As part of the case study focusing on the development of an IoT automated warehouse, this architecture was implemented for a toolchain integrated using RESTful microservices and linked data.
[ { "created": "Fri, 9 Mar 2018 14:29:32 GMT", "version": "v1" } ]
2018-03-12
[ [ "Berezovskyi", "Andrii", "" ], [ "El-khoury", "Jad", "" ], [ "Kacimi", "Omar", "" ], [ "Loiret", "Frédéric", "" ] ]
2103.13822
Minxue Tang
Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li and Yiran Chen
FedCor: Correlation-Based Active Client Selection Strategy for Heterogeneous Federated Learning
Accepted by CVPR 2022
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Client-wise data heterogeneity is one of the major issues that hinder effective training in federated learning (FL). Since the data distribution on each client may vary dramatically, the client selection strategy can significantly influence the convergence rate of the FL process. Active client selection strategies are popularly proposed in recent studies. However, they neglect the loss correlations between the clients and achieve only marginal improvement compared to the uniform selection strategy. In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL. Specifically, we first model the loss correlations between the clients with a Gaussian Process (GP). Based on the GP model, we derive a client selection strategy with a significant reduction of expected global loss in each round. Besides, we develop an efficient GP training method with a low communication overhead in the FL scenario by utilizing the covariance stationarity. Our experimental results show that compared to the state-of-the-art method, FedCor can improve the convergence rates by $34\%\sim 99\%$ and $26\%\sim 51\%$ on FMNIST and CIFAR-10, respectively.
[ { "created": "Wed, 24 Mar 2021 03:25:14 GMT", "version": "v1" }, { "created": "Wed, 25 Aug 2021 21:12:10 GMT", "version": "v2" }, { "created": "Thu, 24 Mar 2022 14:58:49 GMT", "version": "v3" } ]
2022-03-25
[ [ "Tang", "Minxue", "" ], [ "Ning", "Xuefei", "" ], [ "Wang", "Yitu", "" ], [ "Sun", "Jingwei", "" ], [ "Wang", "Yu", "" ], [ "Li", "Hai", "" ], [ "Chen", "Yiran", "" ] ]
1603.01547
Ondrej Bajgar
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar and Jan Kleindienst
Text Understanding with the Attention Sum Reader Network
Presented at ACL 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
[ { "created": "Fri, 4 Mar 2016 17:32:42 GMT", "version": "v1" }, { "created": "Fri, 24 Jun 2016 13:04:47 GMT", "version": "v2" } ]
2016-06-27
[ [ "Kadlec", "Rudolf", "" ], [ "Schmid", "Martin", "" ], [ "Bajgar", "Ondrej", "" ], [ "Kleindienst", "Jan", "" ] ]
Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of the words in the document, as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
2111.01689
Michal Ptaszynski Prof.
Juuso Eronen, Michal Ptaszynski, Fumito Masui, Aleksander Smywi\'nski-Pohl, Gniewosz Leliwa, Michal Wroczynski
Improving Classifier Training Efficiency for Automatic Cyberbullying Detection with Feature Density
73 pages, 4 figures, 19 tables, Information Processing and Management, Vol. 58, Issue 5, September 2021, paper ID: 102616
Information Processing and Management, Vol. 58, Issue 5, September 2021, paper ID: 102616
10.1016/j.ipm.2021.102616
null
cs.CL cs.AI cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
We study the effectiveness of Feature Density (FD) using different linguistically-backed feature preprocessing methods in order to estimate dataset complexity, which in turn is used to comparatively estimate the potential performance of machine learning (ML) classifiers prior to any training. We hypothesise that estimating dataset complexity allows for the reduction of the number of required experiment iterations. This way we can optimize the resource-intensive training of ML models, which is becoming a serious issue due to increases in available dataset sizes and the ever-rising popularity of models based on Deep Neural Networks (DNN). The problem of constantly increasing needs for more powerful computational resources also affects the environment, due to the alarmingly growing amount of CO2 emissions caused by training large-scale ML models. The research was conducted on multiple datasets, including popular datasets such as the Yelp business review dataset used for training typical sentiment analysis models, as well as more recent datasets trying to tackle the problem of cyberbullying, which, being a serious social problem, is also a much more sophisticated problem from the point of view of linguistic representation. We use cyberbullying datasets collected for multiple languages, namely English, Japanese and Polish. The difference in linguistic complexity of the datasets allows us to additionally discuss the efficacy of linguistically-backed word preprocessing.
[ { "created": "Tue, 2 Nov 2021 15:48:28 GMT", "version": "v1" }, { "created": "Wed, 3 Nov 2021 01:46:27 GMT", "version": "v2" } ]
2021-11-04
[ [ "Eronen", "Juuso", "" ], [ "Ptaszynski", "Michal", "" ], [ "Masui", "Fumito", "" ], [ "Smywiński-Pohl", "Aleksander", "" ], [ "Leliwa", "Gniewosz", "" ], [ "Wroczynski", "Michal", "" ] ]
We study the effectiveness of Feature Density (FD) using different linguistically-backed feature preprocessing methods in order to estimate dataset complexity, which in turn is used to comparatively estimate the potential performance of machine learning (ML) classifiers prior to any training. We hypothesise that estimating dataset complexity allows for the reduction of the number of required experiment iterations. This way we can optimize the resource-intensive training of ML models, which is becoming a serious issue due to increases in available dataset sizes and the ever-rising popularity of models based on Deep Neural Networks (DNN). The problem of constantly increasing needs for more powerful computational resources also affects the environment, due to the alarmingly growing amount of CO2 emissions caused by training large-scale ML models. The research was conducted on multiple datasets, including popular datasets such as the Yelp business review dataset used for training typical sentiment analysis models, as well as more recent datasets trying to tackle the problem of cyberbullying, which, being a serious social problem, is also a much more sophisticated problem from the point of view of linguistic representation. We use cyberbullying datasets collected for multiple languages, namely English, Japanese and Polish. The difference in linguistic complexity of the datasets allows us to additionally discuss the efficacy of linguistically-backed word preprocessing.
1701.01590
Ruohan Cao
Ruohan Cao
Detecting Arbitrary Attacks Using Continuous Secured Side Information in Wireless Networks
arXiv admin note: substantial text overlap with arXiv:1612.01707
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on Byzantine attack detection for a Gaussian two-hop one-way relay network, where an amplify-and-forward relay may conduct Byzantine attacks by forwarding altered symbols to the destination. To facilitate attack detection, we utilize the openness of the wireless medium to let the destination observe some secured signals that are not attacked. A detection scheme is then developed for the destination, which uses its secured observations to statistically check the other observations from the relay. Note that the Gaussian channel is continuous, which allows possible Byzantine attacks to be conducted within continuous alphabet(s); existing work on discrete channels is therefore not applicable for investigating the performance of the proposed scheme. The main contribution of this paper is to prove that if and only if the wireless relay network satisfies a non-manipulable channel condition, the proposed detection scheme achieves asymptotically errorless performance against arbitrary attacks, in which the stochastic distributions of altered symbols may vary arbitrarily and depend on each other. No pre-shared secret or secret transmission is needed for the detection. Furthermore, we also prove that the relay network is non-manipulable as long as all channel coefficients are non-zero, which is not an essential restriction for many practical systems.
[ { "created": "Fri, 6 Jan 2017 10:44:13 GMT", "version": "v1" }, { "created": "Mon, 9 Jan 2017 11:48:25 GMT", "version": "v2" }, { "created": "Fri, 4 Aug 2017 10:04:02 GMT", "version": "v3" } ]
2017-08-07
[ [ "Cao", "Ruohan", "" ] ]
This paper focuses on Byzantine attack detection for a Gaussian two-hop one-way relay network, where an amplify-and-forward relay may conduct Byzantine attacks by forwarding altered symbols to the destination. To facilitate attack detection, we utilize the openness of the wireless medium to let the destination observe some secured signals that are not attacked. A detection scheme is then developed for the destination, which uses its secured observations to statistically check the other observations from the relay. Note that the Gaussian channel is continuous, which allows possible Byzantine attacks to be conducted within continuous alphabet(s); existing work on discrete channels is therefore not applicable for investigating the performance of the proposed scheme. The main contribution of this paper is to prove that if and only if the wireless relay network satisfies a non-manipulable channel condition, the proposed detection scheme achieves asymptotically errorless performance against arbitrary attacks, in which the stochastic distributions of altered symbols may vary arbitrarily and depend on each other. No pre-shared secret or secret transmission is needed for the detection. Furthermore, we also prove that the relay network is non-manipulable as long as all channel coefficients are non-zero, which is not an essential restriction for many practical systems.
2403.15481
Aastha Pant
Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
Navigating Fairness: Practitioners' Understanding, Challenges, and Strategies in AI/ML Development
46 pages, 8 figures, 2 tables
null
null
null
cs.CY cs.AI cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The rise in the use of AI/ML applications across industries has sparked increasing discussion about the fairness of AI/ML in recent times. While prior research on the fairness of AI/ML exists, there is a lack of empirical studies focused on understanding the perspectives and experiences of AI practitioners in developing fair AI/ML systems. Understanding AI practitioners' perspectives and experiences on the fairness of AI/ML systems is important because they are directly involved in development and deployment, and their insights can offer valuable real-world perspectives on the challenges associated with ensuring fairness in AI/ML systems. We conducted semi-structured interviews with 22 AI practitioners to investigate their understanding of what a 'fair AI/ML' is, the challenges they face in developing a fair AI/ML system, the consequences of developing an unfair AI/ML system, and the strategies they employ to ensure AI/ML system fairness. We developed a framework showcasing the relationship between AI practitioners' understanding of a 'fair AI/ML' system and (i) their challenges in its development, (ii) the consequences of developing an unfair AI/ML system, and (iii) strategies used to ensure AI/ML system fairness. By exploring AI practitioners' perspectives and experiences, this study provides actionable insights to enhance AI/ML fairness, which may promote fairer systems, reduce bias, and foster public trust in AI technologies. We also identify areas for further investigation and offer recommendations to aid AI practitioners and AI companies in navigating fairness.
[ { "created": "Thu, 21 Mar 2024 03:44:59 GMT", "version": "v1" }, { "created": "Wed, 31 Jul 2024 14:47:24 GMT", "version": "v2" } ]
2024-08-02
[ [ "Pant", "Aastha", "" ], [ "Hoda", "Rashina", "" ], [ "Tantithamthavorn", "Chakkrit", "" ], [ "Turhan", "Burak", "" ] ]
The rise in the use of AI/ML applications across industries has sparked increasing discussion about the fairness of AI/ML in recent times. While prior research on the fairness of AI/ML exists, there is a lack of empirical studies focused on understanding the perspectives and experiences of AI practitioners in developing fair AI/ML systems. Understanding AI practitioners' perspectives and experiences on the fairness of AI/ML systems is important because they are directly involved in development and deployment, and their insights can offer valuable real-world perspectives on the challenges associated with ensuring fairness in AI/ML systems. We conducted semi-structured interviews with 22 AI practitioners to investigate their understanding of what a 'fair AI/ML' is, the challenges they face in developing a fair AI/ML system, the consequences of developing an unfair AI/ML system, and the strategies they employ to ensure AI/ML system fairness. We developed a framework showcasing the relationship between AI practitioners' understanding of a 'fair AI/ML' system and (i) their challenges in its development, (ii) the consequences of developing an unfair AI/ML system, and (iii) strategies used to ensure AI/ML system fairness. By exploring AI practitioners' perspectives and experiences, this study provides actionable insights to enhance AI/ML fairness, which may promote fairer systems, reduce bias, and foster public trust in AI technologies. We also identify areas for further investigation and offer recommendations to aid AI practitioners and AI companies in navigating fairness.
2405.07679
Yufei Gu
Yufei Gu
Class-wise Activation Unravelling the Enigma of Deep Double Descent
arXiv admin note: text overlap with arXiv:2310.13572
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Double descent presents a counter-intuitive aspect of the machine learning domain, and researchers have observed its manifestation in various models and tasks. While some theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory of its underlying mechanism in deep learning has yet to be established. In this study, we revisit the phenomenon of double descent and discuss the conditions of its occurrence. This paper introduces the concept of class-activation matrices and a methodology for estimating the effective complexity of functions, with which we unveil that over-parameterized models exhibit more distinct and simpler class patterns in hidden activations compared to under-parameterized ones. We further look into the interpolation of noisily labelled data among clean representations and demonstrate overfitting with respect to expressive capacity. By comprehensively analysing hypotheses and presenting corresponding empirical evidence that either validates or contradicts them, we aim to provide fresh insights into the phenomena of double descent and benign over-parameterization, thereby enabling further explorations in the field. The source code is available at https://github.com/Yufei-Gu-451/sparse-generalization.git.
[ { "created": "Mon, 13 May 2024 12:07:48 GMT", "version": "v1" } ]
2024-05-14
[ [ "Gu", "Yufei", "" ] ]
Double descent presents a counter-intuitive aspect of the machine learning domain, and researchers have observed its manifestation in various models and tasks. While some theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory of its underlying mechanism in deep learning has yet to be established. In this study, we revisit the phenomenon of double descent and discuss the conditions of its occurrence. This paper introduces the concept of class-activation matrices and a methodology for estimating the effective complexity of functions, with which we unveil that over-parameterized models exhibit more distinct and simpler class patterns in hidden activations compared to under-parameterized ones. We further look into the interpolation of noisily labelled data among clean representations and demonstrate overfitting with respect to expressive capacity. By comprehensively analysing hypotheses and presenting corresponding empirical evidence that either validates or contradicts them, we aim to provide fresh insights into the phenomena of double descent and benign over-parameterization, thereby enabling further explorations in the field. The source code is available at https://github.com/Yufei-Gu-451/sparse-generalization.git.
2309.16708
Neda Rahimpour Anaraki
Neda Rahimpour Anaraki, Alireza Azadbakht, Maryam Tahmasbi, Hadi Farahani, Saeed Reza Kheradpisheh, Alireza Javaheri
Automatic Cadastral Boundary Detection of Very High Resolution Images Using Mask R-CNN
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, there has been high demand for accelerating and improving automatic cadastral boundary detection. As this problem is still in its early stages, many computer vision and deep learning methods have not yet been considered for it. In this paper, we focus on deep learning and provide three geometric post-processing methods that improve the quality of the results. Our framework includes two parts, each of which consists of a few phases. Our solution to this problem uses instance segmentation. In the first part, we use Mask R-CNN with a ResNet-50 backbone pre-trained on the ImageNet dataset. In the second part, we apply three geometric post-processing methods to the output of the first part to obtain a better overall output. Here, we also use computational geometry to introduce a new method for simplifying lines, which we call the pocket-based simplification algorithm. To evaluate the quality of our solution, we use the standard metrics in this field: recall, precision and F-score. The highest recall we achieve is 95 percent, while maintaining a high precision of 72 percent, resulting in an F-score of 82 percent. Implementing instance segmentation using Mask R-CNN with geometric post-processing of its output gives us promising results for this field. The results also show that the pocket-based simplification algorithm works better for simplifying lines than the Douglas-Peucker algorithm.
[ { "created": "Thu, 17 Aug 2023 10:47:15 GMT", "version": "v1" } ]
2023-10-02
[ [ "Anaraki", "Neda Rahimpour", "" ], [ "Azadbakht", "Alireza", "" ], [ "Tahmasbi", "Maryam", "" ], [ "Farahani", "Hadi", "" ], [ "Kheradpisheh", "Saeed Reza", "" ], [ "Javaheri", "Alireza", "" ] ]
Recently, there has been high demand for accelerating and improving automatic cadastral boundary detection. As this problem is still in its early stages, many computer vision and deep learning methods have not yet been considered for it. In this paper, we focus on deep learning and provide three geometric post-processing methods that improve the quality of the results. Our framework includes two parts, each of which consists of a few phases. Our solution to this problem uses instance segmentation. In the first part, we use Mask R-CNN with a ResNet-50 backbone pre-trained on the ImageNet dataset. In the second part, we apply three geometric post-processing methods to the output of the first part to obtain a better overall output. Here, we also use computational geometry to introduce a new method for simplifying lines, which we call the pocket-based simplification algorithm. To evaluate the quality of our solution, we use the standard metrics in this field: recall, precision and F-score. The highest recall we achieve is 95 percent, while maintaining a high precision of 72 percent, resulting in an F-score of 82 percent. Implementing instance segmentation using Mask R-CNN with geometric post-processing of its output gives us promising results for this field. The results also show that the pocket-based simplification algorithm works better for simplifying lines than the Douglas-Peucker algorithm.
1809.00232
Peng Yang
Peng Yang, Ning Zhang, Shan Zhang, Li Yu, Junshan Zhang, Xuemin Shen
Content Popularity Prediction Towards Location-Aware Mobile Edge Caching
to appear in IEEE Trans. Multimedia
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile edge caching enables content delivery within the radio access network, which effectively alleviates the backhaul burden and reduces response time. To fully exploit edge storage resources, the most popular contents should be identified and cached. Observing that user demands for certain contents vary greatly across locations, this paper devises location-customized caching schemes to maximize the total content hit rate. Specifically, a linear model is used to estimate the future content hit rate. For the case where the model noise is zero-mean, a ridge regression based online algorithm with positive perturbation is proposed. Regret analysis indicates that the proposed algorithm asymptotically approaches the optimal caching strategy in the long run. When the noise structure is unknown, an $H_{\infty}$ filter based online algorithm is further proposed that takes a prescribed threshold as input, which guarantees prediction accuracy even under the worst-case noise process. Both online algorithms require no training phase and hence are robust to time-varying user demands. The underlying causes of the estimation errors of both algorithms are numerically analyzed. Moreover, extensive experiments on a real-world dataset are conducted to validate the applicability of the proposed algorithms. It is demonstrated that these algorithms can be applied to scenarios with different noise features and are able to make adaptive caching decisions, achieving a content hit rate comparable to that of the hindsight optimal strategy.
[ { "created": "Sat, 1 Sep 2018 18:29:08 GMT", "version": "v1" } ]
2018-09-05
[ [ "Yang", "Peng", "" ], [ "Zhang", "Ning", "" ], [ "Zhang", "Shan", "" ], [ "Yu", "Li", "" ], [ "Zhang", "Junshan", "" ], [ "Shen", "Xuemin", "" ] ]
Mobile edge caching enables content delivery within the radio access network, which effectively alleviates the backhaul burden and reduces response time. To fully exploit edge storage resources, the most popular contents should be identified and cached. Observing that user demands for certain contents vary greatly across locations, this paper devises location-customized caching schemes to maximize the total content hit rate. Specifically, a linear model is used to estimate the future content hit rate. For the case where the model noise is zero-mean, a ridge regression based online algorithm with positive perturbation is proposed. Regret analysis indicates that the proposed algorithm asymptotically approaches the optimal caching strategy in the long run. When the noise structure is unknown, an $H_{\infty}$ filter based online algorithm is further proposed that takes a prescribed threshold as input, which guarantees prediction accuracy even under the worst-case noise process. Both online algorithms require no training phase and hence are robust to time-varying user demands. The underlying causes of the estimation errors of both algorithms are numerically analyzed. Moreover, extensive experiments on a real-world dataset are conducted to validate the applicability of the proposed algorithms. It is demonstrated that these algorithms can be applied to scenarios with different noise features and are able to make adaptive caching decisions, achieving a content hit rate comparable to that of the hindsight optimal strategy.
2109.06511
Yizhar Or
Oren Wiezel, Suresh Ramasamy, Nathan Justus, Yizhar Or and Ross Hatton
Geometric analysis of gaits and optimal control for three-link kinematic swimmers
accepted to Automatica, 2023
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Many robotic systems locomote using gaits - periodic changes of internal shape whose mechanical interaction with the robot's environment generates characteristic net displacements. Prominent examples with two shape variables are the low-Reynolds-number three-link "Purcell swimmer", whose inputs are its two joint angles, and the "ideal fluid" swimmer. Gait analysis of these systems allows for intelligent decisions about the swimmer's locomotive properties, increasing the potential for robotic autonomy. In this work, we present a comparative analysis of gait optimization using two different methods. The first method is the variational approach of Pontryagin's maximum principle (PMP) from optimal control theory. We apply PMP to several variants of three-link swimmers, with and without bounds on the joint angles. The second method is a differential-geometric analysis of the gaits based on the curvature (total Lie bracket) of the local connection for three-link swimmers. Using optimized body-motion coordinates, contour plots of the curvature in shape space give a visualization that enables identifying distance-optimal gaits as zero level sets. Combining and comparing the results of the two methods enables a better understanding of changes in the existence, shape and topology of distance-optimal gait trajectories, depending on the swimmers' parameters.
[ { "created": "Tue, 14 Sep 2021 08:15:51 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 15:44:07 GMT", "version": "v2" } ]
2023-08-25
[ [ "Wiezel", "Oren", "" ], [ "Ramasamy", "Suresh", "" ], [ "Justus", "Nathan", "" ], [ "Or", "Yizhar", "" ], [ "Hatton", "Ross", "" ] ]
Many robotic systems locomote using gaits - periodic changes of internal shape whose mechanical interaction with the robot's environment generates characteristic net displacements. Prominent examples with two shape variables are the low-Reynolds-number three-link "Purcell swimmer", whose inputs are its two joint angles, and the "ideal fluid" swimmer. Gait analysis of these systems allows for intelligent decisions about the swimmer's locomotive properties, increasing the potential for robotic autonomy. In this work, we present a comparative analysis of gait optimization using two different methods. The first method is the variational approach of Pontryagin's maximum principle (PMP) from optimal control theory. We apply PMP to several variants of three-link swimmers, with and without bounds on the joint angles. The second method is a differential-geometric analysis of the gaits based on the curvature (total Lie bracket) of the local connection for three-link swimmers. Using optimized body-motion coordinates, contour plots of the curvature in shape space give a visualization that enables identifying distance-optimal gaits as zero level sets. Combining and comparing the results of the two methods enables a better understanding of changes in the existence, shape and topology of distance-optimal gait trajectories, depending on the swimmers' parameters.
2011.05007
Quynh Ngoc Thi Do
Quynh Do, Judith Gaspers, Tobias Roding, Melanie Bradford
To What Degree Can Language Borders Be Blurred In BERT-based Multilingual Spoken Language Understanding?
COLING 2020
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the question of to what degree a BERT-based multilingual Spoken Language Understanding (SLU) model can transfer knowledge across languages. Through experiments, we show that, although it works substantially well even across distant language groups, there is still a gap to the ideal multilingual performance. In addition, we propose a novel BERT-based adversarial model architecture to learn language-shared and language-specific representations for multilingual SLU. Our experimental results show that the proposed model is capable of narrowing the gap to the ideal multilingual performance.
[ { "created": "Tue, 10 Nov 2020 09:59:24 GMT", "version": "v1" } ]
2020-11-11
[ [ "Do", "Quynh", "" ], [ "Gaspers", "Judith", "" ], [ "Roding", "Tobias", "" ], [ "Bradford", "Melanie", "" ] ]
This paper addresses the question of to what degree a BERT-based multilingual Spoken Language Understanding (SLU) model can transfer knowledge across languages. Through experiments, we show that, although it works substantially well even across distant language groups, there is still a gap to the ideal multilingual performance. In addition, we propose a novel BERT-based adversarial model architecture to learn language-shared and language-specific representations for multilingual SLU. Our experimental results show that the proposed model is capable of narrowing the gap to the ideal multilingual performance.
1407.7508
Zhenqiu Liu
Zhenqiu Liu and Gang Li
Efficient Regularized Regression for Variable Selection with L0 Penalty
26 pages and 3 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variable (feature, gene, model; terms we use interchangeably) selection for regression with high-dimensional big data has found many applications in bioinformatics, computational biology, image processing, and engineering. One appealing approach is L0 regularized regression, which penalizes the number of nonzero features in the model directly. L0 is known as the most essential sparsity measure and has nice theoretical properties, while the popular L1 regularization is only the best convex relaxation of L0. Therefore, it is natural to expect L0 regularized regression to perform better than LASSO. However, it is well known that L0 optimization is NP-hard and computationally challenging. Instead of solving the L0 problem directly, most publications so far have tried to solve an approximation problem that closely resembles L0 regularization. In this paper, we propose an efficient EM algorithm (L0EM) that directly solves the L0 optimization problem. L0EM is efficient with high-dimensional data. It also provides a natural solution to all Lp, p in [0,2], problems. The regularization parameter can be determined either through cross-validation or via AIC and BIC. Theoretical properties of the L0-regularized estimator are given under mild conditions that permit the number of variables to be much larger than the sample size. We demonstrate our methods through simulation and high-dimensional genomic data. The results indicate that L0 performs better than LASSO, and that L0 with AIC or BIC performs similarly to computationally intensive cross-validation. The proposed algorithms are efficient in identifying the non-zero variables with less bias and in selecting biologically important genes and pathways in high-dimensional big data.
[ { "created": "Mon, 28 Jul 2014 19:28:26 GMT", "version": "v1" } ]
2014-07-29
[ [ "Liu", "Zhenqiu", "" ], [ "Li", "Gang", "" ] ]
Variable (feature, gene, model; terms we use interchangeably) selection for regression with high-dimensional big data has found many applications in bioinformatics, computational biology, image processing, and engineering. One appealing approach is L0 regularized regression, which penalizes the number of nonzero features in the model directly. L0 is known as the most essential sparsity measure and has nice theoretical properties, while the popular L1 regularization is only the best convex relaxation of L0. Therefore, it is natural to expect L0 regularized regression to perform better than LASSO. However, it is well known that L0 optimization is NP-hard and computationally challenging. Instead of solving the L0 problem directly, most publications so far have tried to solve an approximation problem that closely resembles L0 regularization. In this paper, we propose an efficient EM algorithm (L0EM) that directly solves the L0 optimization problem. L0EM is efficient with high-dimensional data. It also provides a natural solution to all Lp, p in [0,2], problems. The regularization parameter can be determined either through cross-validation or via AIC and BIC. Theoretical properties of the L0-regularized estimator are given under mild conditions that permit the number of variables to be much larger than the sample size. We demonstrate our methods through simulation and high-dimensional genomic data. The results indicate that L0 performs better than LASSO, and that L0 with AIC or BIC performs similarly to computationally intensive cross-validation. The proposed algorithms are efficient in identifying the non-zero variables with less bias and in selecting biologically important genes and pathways in high-dimensional big data.
2304.00180
Zihao Wang
Zihao Wang, Eugene Agichtein and Jinho Choi
FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems
The 13th International Workshop on Spoken Dialogue Systems Technology
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Response ranking in dialogues plays a crucial role in retrieval-based conversational systems. In a multi-turn dialogue, to capture the gist of a conversation, contextual information serves as essential knowledge to achieve this goal. In this paper, we present a flexible neural framework that can integrate contextual information from multiple channels. Specifically for the current task, our approach is to provide two information channels in parallel, Fusing Conversation history and domain knowledge extracted from Candidate provenance (FCC), where candidate responses are curated, as contextual information to improve the performance of multi-turn dialogue response ranking. The proposed approach can be generalized as a module to incorporate miscellaneous contextual features for other context-oriented tasks. We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks. Our experimental results show that our framework significantly outperforms the previous state-of-the-art models, improving Recall@1 by 7% and MAP by 4%. Furthermore, we conduct ablation studies to evaluate the contributions of each information channel, and of the framework components, to the overall ranking performance, providing additional insights and directions for further improvements.
[ { "created": "Fri, 31 Mar 2023 23:58:28 GMT", "version": "v1" } ]
2023-04-04
[ [ "Wang", "Zihao", "" ], [ "Agichtein", "Eugene", "" ], [ "Choi", "Jinho", "" ] ]
Response ranking in dialogues plays a crucial role in retrieval-based conversational systems. In a multi-turn dialogue, contextual information serves as essential knowledge for capturing the gist of the conversation. In this paper, we present a flexible neural framework that can integrate contextual information from multiple channels. Specifically, for the current task, our approach provides two information channels in parallel, Fusing the Conversation history and domain knowledge extracted from the Candidate provenance (FCC), where candidate responses are curated, as contextual information to improve the performance of multi-turn dialogue response ranking. The proposed approach can be generalized as a module to incorporate miscellaneous contextual features for other context-oriented tasks. We evaluate our model on the MSDialog dataset, which is widely used for evaluating conversational response ranking tasks. Our experimental results show that our framework significantly outperforms the previous state-of-the-art models, improving Recall@1 by 7% and MAP by 4%. Furthermore, we conduct ablation studies to evaluate the contributions of each information channel, and of the framework components, to the overall ranking performance, providing additional insights and directions for further improvements.
2103.13389
Vadim Sushko
Vadim Sushko, Dan Zhang, Juergen Gall, Anna Khoreva
Generating Novel Scene Compositions from Single Images and Videos
Accepted for publication in Computer Vision and Image Understanding: https://www.sciencedirect.com/science/article/pii/S1077314223002680. Code repository: https://github.com/boschresearch/one-shot-synthesis
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a large dataset for training, generative adversarial networks (GANs) can achieve remarkable performance for the image synthesis task. However, training GANs in extremely low data regimes remains a challenge, as overfitting often occurs, leading to memorization or training divergence. In this work, we introduce SIV-GAN, an unconditional generative model that can generate new scene compositions from a single training image or a single video clip. We propose a two-branch discriminator architecture, with content and layout branches designed to judge internal content and scene layout realism separately from each other. This discriminator design enables synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single image GANs, our model generates more diverse, higher quality images, while not being restricted to a single image setting. We further introduce a new challenging task of learning from a few frames of a single video. In this training setup the training images are highly similar to each other, which makes it difficult for prior GAN models to achieve a synthesis of both high quality and diversity.
[ { "created": "Wed, 24 Mar 2021 17:59:07 GMT", "version": "v1" }, { "created": "Tue, 19 Oct 2021 10:55:52 GMT", "version": "v2" }, { "created": "Thu, 17 Mar 2022 16:03:00 GMT", "version": "v3" }, { "created": "Sun, 16 Jul 2023 04:42:07 GMT", "version": "v4" }, { "created": "Wed, 13 Dec 2023 13:44:40 GMT", "version": "v5" } ]
2023-12-14
[ [ "Sushko", "Vadim", "" ], [ "Zhang", "Dan", "" ], [ "Gall", "Juergen", "" ], [ "Khoreva", "Anna", "" ] ]
Given a large dataset for training, generative adversarial networks (GANs) can achieve remarkable performance for the image synthesis task. However, training GANs in extremely low data regimes remains a challenge, as overfitting often occurs, leading to memorization or training divergence. In this work, we introduce SIV-GAN, an unconditional generative model that can generate new scene compositions from a single training image or a single video clip. We propose a two-branch discriminator architecture, with content and layout branches designed to judge internal content and scene layout realism separately from each other. This discriminator design enables synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single image GANs, our model generates more diverse, higher quality images, while not being restricted to a single image setting. We further introduce a new challenging task of learning from a few frames of a single video. In this training setup the training images are highly similar to each other, which makes it difficult for prior GAN models to achieve a synthesis of both high quality and diversity.
2211.12820
Kunal Relia
Kunal Relia
Fairly Allocating Utility in Constrained Multiwinner Elections
13 pages (2-column), 3 figures
null
null
null
cs.GT cs.AI cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Fairness in multiwinner elections is studied in varying contexts. For instance, diversity of candidates and representation of voters are both separately termed as being fair. A common denominator to ensure fairness across all such contexts is the use of constraints. However, across these contexts, the candidates selected to satisfy the given constraints may systematically lead to unfair outcomes for historically disadvantaged voter populations, as the cost of fairness may be borne unequally. Hence, we develop a model to select candidates that satisfy the constraints fairly across voter populations. To do so, the model maps the constrained multiwinner election problem to a problem of fairly allocating indivisible goods. We propose three variants of the model, namely, global, localized, and inter-sectional. Next, we analyze the model's computational complexity, and we present an empirical analysis of the utility traded off across various settings of our model for the three variants, discussing the impact of Simpson's paradox using synthetic datasets and a dataset of voting at the United Nations. Finally, we discuss the implications of our work for AI and machine learning, especially for studies that use constraints to guarantee fairness.
[ { "created": "Wed, 23 Nov 2022 10:04:26 GMT", "version": "v1" } ]
2022-11-24
[ [ "Relia", "Kunal", "" ] ]
Fairness in multiwinner elections is studied in varying contexts. For instance, diversity of candidates and representation of voters are both separately termed as being fair. A common denominator to ensure fairness across all such contexts is the use of constraints. However, across these contexts, the candidates selected to satisfy the given constraints may systematically lead to unfair outcomes for historically disadvantaged voter populations, as the cost of fairness may be borne unequally. Hence, we develop a model to select candidates that satisfy the constraints fairly across voter populations. To do so, the model maps the constrained multiwinner election problem to a problem of fairly allocating indivisible goods. We propose three variants of the model, namely, global, localized, and inter-sectional. Next, we analyze the model's computational complexity, and we present an empirical analysis of the utility traded off across various settings of our model for the three variants, discussing the impact of Simpson's paradox using synthetic datasets and a dataset of voting at the United Nations. Finally, we discuss the implications of our work for AI and machine learning, especially for studies that use constraints to guarantee fairness.
1304.2581
Debasish Chatterjee
Debasish Chatterjee and John Lygeros
Stability and performance of stochastic predictive control
19 pages. Minor corrections and updated references
IEEE Transactions on Automatic Control, Vol. 60, No. 2, pp. 509-514, 2015
10.1109/TAC.2014.2335274
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article is concerned with stability and performance of controlled stochastic processes under receding horizon policies. We carry out a systematic study of methods to guarantee stability under receding horizon policies via appropriate selections of cost functions in the underlying finite-horizon optimal control problem. We also obtain quantitative bounds on the performance of the system under receding horizon policies as measured by the long-run expected average cost. The results are illustrated with the help of several simple examples.
[ { "created": "Tue, 9 Apr 2013 13:29:49 GMT", "version": "v1" }, { "created": "Fri, 19 Apr 2013 09:27:03 GMT", "version": "v2" } ]
2017-11-27
[ [ "Chatterjee", "Debasish", "" ], [ "Lygeros", "John", "" ] ]
This article is concerned with stability and performance of controlled stochastic processes under receding horizon policies. We carry out a systematic study of methods to guarantee stability under receding horizon policies via appropriate selections of cost functions in the underlying finite-horizon optimal control problem. We also obtain quantitative bounds on the performance of the system under receding horizon policies as measured by the long-run expected average cost. The results are illustrated with the help of several simple examples.
2407.17787
Fali Wang
Fali Wang, Tianxiang Zhao, Junjie Xu, Suhang Wang
HC-GST: Heterophily-aware Distribution Consistency based Graph Self-training
accepted by CIKM 2024
null
10.1145/3627673.3679622
null
cs.SI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph self-training (GST), which selects and assigns pseudo-labels to unlabeled nodes, is popular for tackling label sparsity in graphs. However, recent studies on homophilic graphs show that GST methods can introduce and amplify distribution shift between training and test nodes, as they tend to assign pseudo-labels to nodes they are good at. As GNNs typically perform better on homophilic nodes, there could be potential shifts towards homophilic pseudo-nodes, which is underexplored. Our preliminary experiments on heterophilic graphs verify that these methods can cause shifts in homophily ratio distributions, leading to \textit{training bias} that improves performance on homophilic nodes while degrading it on heterophilic ones. Therefore, we study a novel problem of reducing homophily ratio distribution shifts during self-training on heterophilic graphs. A key challenge is the accurate calculation of homophily ratios and their distributions without extensive labeled data. To tackle these challenges, we propose a novel Heterophily-aware Distribution Consistency-based Graph Self-Training (HC-GST) framework, which estimates homophily ratios using soft labels and optimizes a selection vector to align pseudo-nodes with the global homophily ratio distribution. Extensive experiments on both homophilic and heterophilic graphs show that HC-GST effectively reduces training bias and enhances self-training performance.
[ { "created": "Thu, 25 Jul 2024 05:38:06 GMT", "version": "v1" } ]
2024-07-26
[ [ "Wang", "Fali", "" ], [ "Zhao", "Tianxiang", "" ], [ "Xu", "Junjie", "" ], [ "Wang", "Suhang", "" ] ]
Graph self-training (GST), which selects and assigns pseudo-labels to unlabeled nodes, is popular for tackling label sparsity in graphs. However, recent studies on homophilic graphs show that GST methods can introduce and amplify distribution shift between training and test nodes, as they tend to assign pseudo-labels to nodes they are good at. As GNNs typically perform better on homophilic nodes, there could be potential shifts towards homophilic pseudo-nodes, which is underexplored. Our preliminary experiments on heterophilic graphs verify that these methods can cause shifts in homophily ratio distributions, leading to \textit{training bias} that improves performance on homophilic nodes while degrading it on heterophilic ones. Therefore, we study a novel problem of reducing homophily ratio distribution shifts during self-training on heterophilic graphs. A key challenge is the accurate calculation of homophily ratios and their distributions without extensive labeled data. To tackle these challenges, we propose a novel Heterophily-aware Distribution Consistency-based Graph Self-Training (HC-GST) framework, which estimates homophily ratios using soft labels and optimizes a selection vector to align pseudo-nodes with the global homophily ratio distribution. Extensive experiments on both homophilic and heterophilic graphs show that HC-GST effectively reduces training bias and enhances self-training performance.
1505.00110
Peter Hall
Hongping Cai and Qi Wu and Tadeo Corradi and Peter Hall
The Cross-Depiction Problem: Computer Vision Algorithms for Recognising Objects in Artwork and in Photographs
12 pages, 6 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It is a potentially significant yet under-researched problem. Emulating the remarkable human ability to recognise objects in an astonishingly wide variety of depictive forms is likely to advance both the foundations and the applications of Computer Vision. In this paper we benchmark classification, domain adaptation, and deep learning methods, demonstrating that none perform consistently well in the cross-depiction problem. Given the current interest in deep learning, it is notable that such methods exhibit the same behaviour as all but one of the other methods: they show a significant fall in performance over inhomogeneous databases compared to their peak performance, which is always over data comprising photographs only. Rather, we find that methods with strong models of spatial relations between parts tend to be more robust, and therefore conclude that such information is important in modelling object classes regardless of appearance details.
[ { "created": "Fri, 1 May 2015 07:38:52 GMT", "version": "v1" } ]
2015-05-04
[ [ "Cai", "Hongping", "" ], [ "Wu", "Qi", "" ], [ "Corradi", "Tadeo", "" ], [ "Hall", "Peter", "" ] ]
The cross-depiction problem is that of recognising visual objects regardless of whether they are photographed, painted, drawn, etc. It is a potentially significant yet under-researched problem. Emulating the remarkable human ability to recognise objects in an astonishingly wide variety of depictive forms is likely to advance both the foundations and the applications of Computer Vision. In this paper we benchmark classification, domain adaptation, and deep learning methods, demonstrating that none perform consistently well in the cross-depiction problem. Given the current interest in deep learning, it is notable that such methods exhibit the same behaviour as all but one of the other methods: they show a significant fall in performance over inhomogeneous databases compared to their peak performance, which is always over data comprising photographs only. Rather, we find that methods with strong models of spatial relations between parts tend to be more robust, and therefore conclude that such information is important in modelling object classes regardless of appearance details.
2310.10653
Fikret Basic
Fikret Basic, Martin Gaertner, Christian Steger
Secure and Trustworthy NFC-based Sensor Readout for Battery Packs in Battery Management Systems
arXiv admin note: text overlap with arXiv:2308.09366
IEEE Journal of Radio Frequency Identification 2022, vol. 6, pages 637-648
10.1109/JRFID.2022.3170381
null
cs.CR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless Battery Management Systems (BMS) are increasingly being considered for modern applications. The ever-increasing complexity and production costs of BMS modules and wired connections have resulted in a need for new ideas and approaches. Despite this growing trend, there is a lack of generic solutions focused on battery cells' sensor readout, where wireless communication allows for a more flexible and cost-efficient sensor installation in battery packs. Many wireless technologies, such as those that use the 2.4 GHz frequency band, suffer from interference and other limitations. In this article, we present an alternative approach to communication in BMS that relies on the use of Near Field Communication (NFC) technology for battery sensor readouts. As an answer to the rising concern over counterfeit battery packs, we consider an authentication scheme for battery pack validation. We further consider security measures for the processed and stored BMS status data. To show that a general BMS application can make use of our design, we implement a BMS demonstrator using the targeted components. We further test the demonstrator at the technical and functional levels, evaluating its performance and energy usage and analysing a security threat model.
[ { "created": "Thu, 31 Aug 2023 22:55:21 GMT", "version": "v1" } ]
2023-10-18
[ [ "Basic", "Fikret", "" ], [ "Gaertner", "Martin", "" ], [ "Steger", "Christian", "" ] ]
Wireless Battery Management Systems (BMS) are increasingly being considered for modern applications. The ever-increasing complexity and production costs of BMS modules and wired connections have resulted in a need for new ideas and approaches. Despite this growing trend, there is a lack of generic solutions focused on battery cells' sensor readout, where wireless communication allows for a more flexible and cost-efficient sensor installation in battery packs. Many wireless technologies, such as those that use the 2.4 GHz frequency band, suffer from interference and other limitations. In this article, we present an alternative approach to communication in BMS that relies on the use of Near Field Communication (NFC) technology for battery sensor readouts. As an answer to the rising concern over counterfeit battery packs, we consider an authentication scheme for battery pack validation. We further consider security measures for the processed and stored BMS status data. To show that a general BMS application can make use of our design, we implement a BMS demonstrator using the targeted components. We further test the demonstrator at the technical and functional levels, evaluating its performance and energy usage and analysing a security threat model.
2308.13217
Masoud Mokhtari
Masoud Mokhtari, Neda Ahmadi, Teresa S. M. Tsang, Purang Abolmaesumi, Renjie Liao
GEMTrans: A General, Echocardiography-based, Multi-Level Transformer Framework for Cardiovascular Diagnosis
To be published in MLMI 2023
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Echocardiography (echo) is an ultrasound imaging modality that is widely used for various cardiovascular diagnosis tasks. Due to inter-observer variability in echo-based diagnosis, which arises from the variability in echo image acquisition and the interpretation of echo images based on clinical experience, vision-based machine learning (ML) methods have gained popularity to act as secondary layers of verification. For such safety-critical applications, it is essential for any proposed ML method to present a level of explainability along with good accuracy. In addition, such methods must be able to process several echo videos obtained from various heart views and the interactions among them to properly produce predictions for a variety of cardiovascular measurements or interpretation tasks. Prior work lacks explainability or is limited in scope by focusing on a single cardiovascular task. To remedy this, we propose a General, Echo-based, Multi-Level Transformer (GEMTrans) framework that provides explainability, while simultaneously enabling multi-video training where the interplay among echo image patches in the same frame, all frames in the same video, and inter-video relationships are captured based on a downstream task. We show the flexibility of our framework by considering two critical tasks including ejection fraction (EF) and aortic stenosis (AS) severity detection. Our model achieves mean absolute errors of 4.15 and 4.84 for single and dual-video EF estimation and an accuracy of 96.5% for AS detection, while providing informative task-specific attention maps and prototypical explainability.
[ { "created": "Fri, 25 Aug 2023 07:30:18 GMT", "version": "v1" } ]
2023-08-28
[ [ "Mokhtari", "Masoud", "" ], [ "Ahmadi", "Neda", "" ], [ "Tsang", "Teresa S. M.", "" ], [ "Abolmaesumi", "Purang", "" ], [ "Liao", "Renjie", "" ] ]
Echocardiography (echo) is an ultrasound imaging modality that is widely used for various cardiovascular diagnosis tasks. Due to inter-observer variability in echo-based diagnosis, which arises from the variability in echo image acquisition and the interpretation of echo images based on clinical experience, vision-based machine learning (ML) methods have gained popularity to act as secondary layers of verification. For such safety-critical applications, it is essential for any proposed ML method to present a level of explainability along with good accuracy. In addition, such methods must be able to process several echo videos obtained from various heart views and the interactions among them to properly produce predictions for a variety of cardiovascular measurements or interpretation tasks. Prior work lacks explainability or is limited in scope by focusing on a single cardiovascular task. To remedy this, we propose a General, Echo-based, Multi-Level Transformer (GEMTrans) framework that provides explainability, while simultaneously enabling multi-video training where the interplay among echo image patches in the same frame, all frames in the same video, and inter-video relationships are captured based on a downstream task. We show the flexibility of our framework by considering two critical tasks including ejection fraction (EF) and aortic stenosis (AS) severity detection. Our model achieves mean absolute errors of 4.15 and 4.84 for single and dual-video EF estimation and an accuracy of 96.5% for AS detection, while providing informative task-specific attention maps and prototypical explainability.
1908.04911
Nicolas Christianson
Nicolas H. Christianson, Ann Sizemore Blevins, Danielle S. Bassett
Architecture and evolution of semantic networks in mathematics texts
22 pages, 5 figures
null
10.1098/rspa.2019.0741
null
cs.CL physics.soc-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge is a network of interconnected concepts. Yet, precisely how the topological structure of knowledge constrains its acquisition remains unknown, hampering the development of learning enhancement strategies. Here we study the topological structure of semantic networks reflecting mathematical concepts and their relations in college-level linear algebra texts. We hypothesize that these networks will exhibit structural order, reflecting the logical sequence of topics that ensures accessibility. We find that the networks exhibit strong core-periphery architecture, where a dense core of concepts presented early is complemented with a sparse periphery presented evenly throughout the exposition; the latter is composed of many small modules each reflecting more narrow domains. Using tools from applied topology, we find that the expositional evolution of the semantic networks produces and subsequently fills knowledge gaps, and that the density of these gaps tracks negatively with community ratings of each textbook. Broadly, our study lays the groundwork for future efforts developing optimal design principles for textbook exposition and teaching in a classroom setting.
[ { "created": "Wed, 14 Aug 2019 01:38:07 GMT", "version": "v1" }, { "created": "Tue, 24 Sep 2019 22:10:22 GMT", "version": "v2" } ]
2021-03-17
[ [ "Christianson", "Nicolas H.", "" ], [ "Blevins", "Ann Sizemore", "" ], [ "Bassett", "Danielle S.", "" ] ]
Knowledge is a network of interconnected concepts. Yet, precisely how the topological structure of knowledge constrains its acquisition remains unknown, hampering the development of learning enhancement strategies. Here we study the topological structure of semantic networks reflecting mathematical concepts and their relations in college-level linear algebra texts. We hypothesize that these networks will exhibit structural order, reflecting the logical sequence of topics that ensures accessibility. We find that the networks exhibit strong core-periphery architecture, where a dense core of concepts presented early is complemented with a sparse periphery presented evenly throughout the exposition; the latter is composed of many small modules each reflecting more narrow domains. Using tools from applied topology, we find that the expositional evolution of the semantic networks produces and subsequently fills knowledge gaps, and that the density of these gaps tracks negatively with community ratings of each textbook. Broadly, our study lays the groundwork for future efforts developing optimal design principles for textbook exposition and teaching in a classroom setting.