id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
1803.10067
Bernd Burgstaller
Johann Blieberger and Bernd Burgstaller
Safe Non-blocking Synchronization in Ada 202x
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mutual-exclusion property of locks stands in the way to scalability of parallel programs on many-core architectures. Locks do not allow progress guarantees, because a task may fail inside a critical section and keep holding a lock that blocks other tasks from accessing shared data. With non-blocking synchronization, the drawbacks of locks are avoided by synchronizing access to shared data by atomic read-modify-write operations. To incorporate non-blocking synchronization in Ada~202x, programmers must be able to reason about the behavior and performance of tasks in the absence of protected objects and rendezvous. We therefore extend Ada's memory model by synchronized types, which support the expression of memory ordering operations at a sufficient level of detail. To mitigate the complexity associated with non-blocking synchronization, we propose concurrent objects as a novel high-level language construct. Entities of a concurrent object execute in parallel, due to a fine-grained, optimistic synchronization mechanism. Synchronization is framed by the semantics of concurrent entry execution. The programmer is only required to label shared data accesses in the code of concurrent entries. Labels constitute memory-ordering operations expressed through attributes. To the best of our knowledge, this is the first approach to provide a non-blocking synchronization construct as a first-class citizen of a high-level programming language. We illustrate the use of concurrent objects by several examples.
[ { "created": "Tue, 27 Mar 2018 13:31:19 GMT", "version": "v1" }, { "created": "Mon, 18 Jun 2018 22:20:12 GMT", "version": "v2" } ]
2018-06-20
[ [ "Blieberger", "Johann", "" ], [ "Burgstaller", "Bernd", "" ] ]
The mutual-exclusion property of locks stands in the way to scalability of parallel programs on many-core architectures. Locks do not allow progress guarantees, because a task may fail inside a critical section and keep holding a lock that blocks other tasks from accessing shared data. With non-blocking synchronization, the drawbacks of locks are avoided by synchronizing access to shared data by atomic read-modify-write operations. To incorporate non-blocking synchronization in Ada~202x, programmers must be able to reason about the behavior and performance of tasks in the absence of protected objects and rendezvous. We therefore extend Ada's memory model by synchronized types, which support the expression of memory ordering operations at a sufficient level of detail. To mitigate the complexity associated with non-blocking synchronization, we propose concurrent objects as a novel high-level language construct. Entities of a concurrent object execute in parallel, due to a fine-grained, optimistic synchronization mechanism. Synchronization is framed by the semantics of concurrent entry execution. The programmer is only required to label shared data accesses in the code of concurrent entries. Labels constitute memory-ordering operations expressed through attributes. To the best of our knowledge, this is the first approach to provide a non-blocking synchronization construct as a first-class citizen of a high-level programming language. We illustrate the use of concurrent objects by several examples.
2109.02514
Andrea Araldo
Mathieu Simon, Alessandro Spallina, Loic Dubocquet, Andrea Araldo
Parsimonious Edge Computing to Reduce Microservice Resource Usage
null
null
null
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud Computing (CC) is the most prevalent paradigm under which services are provided over the Internet. The most relevant feature for its success is its capability to promptly scale service based on user demand. When scaling, the main objective is to maximize as much as possible service performance. Moreover, resources in the Cloud are usually so abundant, that they can be assumed infinite from the service point of view: an application provider can have as many servers it wills, as long it pays for it. This model has some limitations. First, energy efficiency is not among the first criteria for scaling decisions, which has raised concerns about the environmental effects of today's wild computations in the Cloud. Moreover, it is not viable for Edge Computing (EC), a paradigm in which computational resources are distributed up to the very edge of the network, i.e., co-located with base stations or access points. In edge nodes, resources are limited, which imposes different parsimonious scaling strategies to be adopted. In this work, we design a scaling strategy aimed to instantiate, parsimoniously, a number of microservices sufficient to guarantee a certain Quality of Service (QoS) target. We implement such a strategy in a Kubernetes/Docker environment. The strategy is based on a simple Proportional-Integrative-Derivative (PID) controller. In this paper we describe the system design and a preliminary performance evaluation.
[ { "created": "Mon, 6 Sep 2021 14:47:36 GMT", "version": "v1" } ]
2021-09-07
[ [ "Simon", "Mathieu", "" ], [ "Spallina", "Alessandro", "" ], [ "Dubocquet", "Loic", "" ], [ "Araldo", "Andrea", "" ] ]
Cloud Computing (CC) is the most prevalent paradigm under which services are provided over the Internet. The most relevant feature for its success is its capability to promptly scale service based on user demand. When scaling, the main objective is to maximize as much as possible service performance. Moreover, resources in the Cloud are usually so abundant, that they can be assumed infinite from the service point of view: an application provider can have as many servers it wills, as long it pays for it. This model has some limitations. First, energy efficiency is not among the first criteria for scaling decisions, which has raised concerns about the environmental effects of today's wild computations in the Cloud. Moreover, it is not viable for Edge Computing (EC), a paradigm in which computational resources are distributed up to the very edge of the network, i.e., co-located with base stations or access points. In edge nodes, resources are limited, which imposes different parsimonious scaling strategies to be adopted. In this work, we design a scaling strategy aimed to instantiate, parsimoniously, a number of microservices sufficient to guarantee a certain Quality of Service (QoS) target. We implement such a strategy in a Kubernetes/Docker environment. The strategy is based on a simple Proportional-Integrative-Derivative (PID) controller. In this paper we describe the system design and a preliminary performance evaluation.
2407.11950
Jiaxi Zeng
Jiaxi Zeng, Chengtang Yao, Yuwei Wu, Yunde Jia
Temporally Consistent Stereo Matching
ECCV 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Stereo matching provides depth estimation from binocular images for downstream applications. These applications mostly take video streams as input and require temporally consistent depth maps. However, existing methods mainly focus on the estimation at the single-frame level. This commonly leads to temporally inconsistent results, especially in ill-posed regions. In this paper, we aim to leverage temporal information to improve the temporal consistency, accuracy, and efficiency of stereo matching. To achieve this, we formulate video stereo matching as a process of temporal disparity completion followed by continuous iterative refinements. Specifically, we first project the disparity of the previous timestamp to the current viewpoint, obtaining a semi-dense disparity map. Then, we complete this map through a disparity completion module to obtain a well-initialized disparity map. The state features from the current completion module and from the past refinement are fused together, providing a temporally coherent state for subsequent refinement. Based on this coherent state, we introduce a dual-space refinement module to iteratively refine the initialized result in both disparity and disparity gradient spaces, improving estimations in ill-posed regions. Extensive experiments demonstrate that our method effectively alleviates temporal inconsistency while enhancing both accuracy and efficiency.
[ { "created": "Tue, 16 Jul 2024 17:44:34 GMT", "version": "v1" } ]
2024-07-17
[ [ "Zeng", "Jiaxi", "" ], [ "Yao", "Chengtang", "" ], [ "Wu", "Yuwei", "" ], [ "Jia", "Yunde", "" ] ]
Stereo matching provides depth estimation from binocular images for downstream applications. These applications mostly take video streams as input and require temporally consistent depth maps. However, existing methods mainly focus on the estimation at the single-frame level. This commonly leads to temporally inconsistent results, especially in ill-posed regions. In this paper, we aim to leverage temporal information to improve the temporal consistency, accuracy, and efficiency of stereo matching. To achieve this, we formulate video stereo matching as a process of temporal disparity completion followed by continuous iterative refinements. Specifically, we first project the disparity of the previous timestamp to the current viewpoint, obtaining a semi-dense disparity map. Then, we complete this map through a disparity completion module to obtain a well-initialized disparity map. The state features from the current completion module and from the past refinement are fused together, providing a temporally coherent state for subsequent refinement. Based on this coherent state, we introduce a dual-space refinement module to iteratively refine the initialized result in both disparity and disparity gradient spaces, improving estimations in ill-posed regions. Extensive experiments demonstrate that our method effectively alleviates temporal inconsistency while enhancing both accuracy and efficiency.
1308.0056
Kaiwen Zhang
Kaiwen Zhang, Hans-Arno Jacobsen
SDN-like: The Next Generation of Pub/Sub
null
null
null
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software-Defined Networking (SDN) has raised the boundaries of cloud computing by offering unparalleled levels of control and flexibility to system administrators over their virtualized environments. To properly embrace this new era of SDN-driven network architectures, the research community must not only consider the impact of SDN over the protocol stack, but also on its overlying networked applications. In this big ideas paper, we study the impact of SDN on the design of future message-oriented middleware, specifically pub/sub systems. We argue that key concepts put forth by SDN can be applied in a meaningful fashion to the next generation of pub/sub systems. First, pub/sub can adopt a logically centralized controller model for maintenance, monitoring, and control of the overlay network. We establish a parallel with existing work on centralized pub/sub routing and discuss how the logically centralized controller model can be implemented in a distributed manner. Second, we investigate the separation of the control and data plane, which is integral to SDN, which can be adopted to raise the level of decoupling in pub/sub. We introduce a new model of pub/sub which separates the traditional publisher and subscriber roles into flow regulators and producer/consumers of data. We then present use cases that benefit from this approach and study the impact of decoupling for performance.
[ { "created": "Wed, 31 Jul 2013 22:37:33 GMT", "version": "v1" } ]
2013-08-02
[ [ "Zhang", "Kaiwen", "" ], [ "Jacobsen", "Hans-Arno", "" ] ]
Software-Defined Networking (SDN) has raised the boundaries of cloud computing by offering unparalleled levels of control and flexibility to system administrators over their virtualized environments. To properly embrace this new era of SDN-driven network architectures, the research community must not only consider the impact of SDN over the protocol stack, but also on its overlying networked applications. In this big ideas paper, we study the impact of SDN on the design of future message-oriented middleware, specifically pub/sub systems. We argue that key concepts put forth by SDN can be applied in a meaningful fashion to the next generation of pub/sub systems. First, pub/sub can adopt a logically centralized controller model for maintenance, monitoring, and control of the overlay network. We establish a parallel with existing work on centralized pub/sub routing and discuss how the logically centralized controller model can be implemented in a distributed manner. Second, we investigate the separation of the control and data plane, which is integral to SDN, which can be adopted to raise the level of decoupling in pub/sub. We introduce a new model of pub/sub which separates the traditional publisher and subscriber roles into flow regulators and producer/consumers of data. We then present use cases that benefit from this approach and study the impact of decoupling for performance.
2106.05657
Shashank Kotyan
Shashank Kotyan and Danilo Vasconcellos Vargas
Deep neural network loses attention to adversarial images
Accepted in Workshop on Artificial Intelligence Safety (AISafety 2021), IJCAI-2021
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Adversarial algorithms have shown to be effective against neural networks for a variety of tasks. Some adversarial algorithms perturb all the pixels in the image minimally for the image classification task in image classification. In contrast, some algorithms perturb few pixels strongly. However, very little information is available regarding why these adversarial samples so diverse from each other exist. Recently, Vargas et al. showed that the existence of these adversarial samples might be due to conflicting saliency within the neural network. We test this hypothesis of conflicting saliency by analysing the Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM) of original and few different types of adversarial samples. We also analyse how different adversarial samples distort the attention of the neural network compared to original samples. We show that in the case of Pixel Attack, perturbed pixels either calls the network attention to themselves or divert the attention from them. Simultaneously, the Projected Gradient Descent Attack perturbs pixels so that intermediate layers inside the neural network lose attention for the correct class. We also show that both attacks affect the saliency map and activation maps differently. Thus, shedding light on why some defences successful against some attacks remain vulnerable against other attacks. We hope that this analysis will improve understanding of the existence and the effect of adversarial samples and enable the community to develop more robust neural networks.
[ { "created": "Thu, 10 Jun 2021 11:06:17 GMT", "version": "v1" } ]
2021-06-11
[ [ "Kotyan", "Shashank", "" ], [ "Vargas", "Danilo Vasconcellos", "" ] ]
Adversarial algorithms have shown to be effective against neural networks for a variety of tasks. Some adversarial algorithms perturb all the pixels in the image minimally for the image classification task in image classification. In contrast, some algorithms perturb few pixels strongly. However, very little information is available regarding why these adversarial samples so diverse from each other exist. Recently, Vargas et al. showed that the existence of these adversarial samples might be due to conflicting saliency within the neural network. We test this hypothesis of conflicting saliency by analysing the Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM) of original and few different types of adversarial samples. We also analyse how different adversarial samples distort the attention of the neural network compared to original samples. We show that in the case of Pixel Attack, perturbed pixels either calls the network attention to themselves or divert the attention from them. Simultaneously, the Projected Gradient Descent Attack perturbs pixels so that intermediate layers inside the neural network lose attention for the correct class. We also show that both attacks affect the saliency map and activation maps differently. Thus, shedding light on why some defences successful against some attacks remain vulnerable against other attacks. We hope that this analysis will improve understanding of the existence and the effect of adversarial samples and enable the community to develop more robust neural networks.
1011.3580
Yuanzhang Xiao
Yuanzhang Xiao, William R. Zame, and Mihaela van der Schaar
Technology Choices and Pricing Policies in Public and Private Wireless Networks
14 pages, 6 figures
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the provision of a wireless network by a monopolistic provider who may be either benevolent (seeking to maximize social welfare) or selfish (seeking to maximize provider profit). The paper addresses questions that do not seem to have been studied before in the engineering literature on wireless networks: Under what circumstances is it feasible for a provider, either benevolent or selfish, to operate a network in such a way as to cover costs? How is the optimal behavior of a benevolent provider different from the optimal behavior of a selfish provider, and how does this difference affect social welfare? And, most importantly, how does the medium access control (MAC) technology influence the answers to these questions? To address these questions, we build a general model, and provide analysis and simulations for simplified but typical scenarios; the focus in these scenarios is on the contrast between the outcomes obtained under carrier-sensing multiple access (CSMA) and outcomes obtained under time-division multiple access (TDMA). Simulation results demonstrate that differences in MAC technology can have a significant effect on social welfare, on provider profit, and even on the (financial) feasibility of a wireless network.
[ { "created": "Tue, 16 Nov 2010 04:15:37 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2010 07:12:22 GMT", "version": "v2" }, { "created": "Fri, 14 Sep 2012 21:06:56 GMT", "version": "v3" } ]
2012-09-18
[ [ "Xiao", "Yuanzhang", "" ], [ "Zame", "William R.", "" ], [ "van der Schaar", "Mihaela", "" ] ]
This paper studies the provision of a wireless network by a monopolistic provider who may be either benevolent (seeking to maximize social welfare) or selfish (seeking to maximize provider profit). The paper addresses questions that do not seem to have been studied before in the engineering literature on wireless networks: Under what circumstances is it feasible for a provider, either benevolent or selfish, to operate a network in such a way as to cover costs? How is the optimal behavior of a benevolent provider different from the optimal behavior of a selfish provider, and how does this difference affect social welfare? And, most importantly, how does the medium access control (MAC) technology influence the answers to these questions? To address these questions, we build a general model, and provide analysis and simulations for simplified but typical scenarios; the focus in these scenarios is on the contrast between the outcomes obtained under carrier-sensing multiple access (CSMA) and outcomes obtained under time-division multiple access (TDMA). Simulation results demonstrate that differences in MAC technology can have a significant effect on social welfare, on provider profit, and even on the (financial) feasibility of a wireless network.
2105.06543
Wei Xie
Hua Zheng, Wei Xie, Ilya O. Ryzhov, Dongming Xie
Policy Optimization in Dynamic Bayesian Network Hybrid Models of Biomanufacturing Processes
36 pages, 6 figures
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Biopharmaceutical manufacturing is a rapidly growing industry with impact in virtually all branches of medicines. Biomanufacturing processes require close monitoring and control, in the presence of complex bioprocess dynamics with many interdependent factors, as well as extremely limited data due to the high cost of experiments as well as the novelty of personalized bio-drugs. We develop a novel model-based reinforcement learning framework that can achieve human-level control in low-data environments. The model uses a dynamic Bayesian network to capture causal interdependencies between factors and predict how the effects of different inputs propagate through the pathways of the bioprocess mechanisms. This enables the design of process control policies that are both interpretable and robust against model risk. We present a computationally efficient, provably convergence stochastic gradient method for optimizing such policies. Validation is conducted on a realistic application with a multi-dimensional, continuous state variable.
[ { "created": "Thu, 13 May 2021 20:39:02 GMT", "version": "v1" }, { "created": "Wed, 9 Mar 2022 16:59:17 GMT", "version": "v2" }, { "created": "Sun, 24 Jul 2022 17:31:43 GMT", "version": "v3" } ]
2022-07-26
[ [ "Zheng", "Hua", "" ], [ "Xie", "Wei", "" ], [ "Ryzhov", "Ilya O.", "" ], [ "Xie", "Dongming", "" ] ]
Biopharmaceutical manufacturing is a rapidly growing industry with impact in virtually all branches of medicines. Biomanufacturing processes require close monitoring and control, in the presence of complex bioprocess dynamics with many interdependent factors, as well as extremely limited data due to the high cost of experiments as well as the novelty of personalized bio-drugs. We develop a novel model-based reinforcement learning framework that can achieve human-level control in low-data environments. The model uses a dynamic Bayesian network to capture causal interdependencies between factors and predict how the effects of different inputs propagate through the pathways of the bioprocess mechanisms. This enables the design of process control policies that are both interpretable and robust against model risk. We present a computationally efficient, provably convergence stochastic gradient method for optimizing such policies. Validation is conducted on a realistic application with a multi-dimensional, continuous state variable.
2305.18135
Zongwei Wu
Steven Tel, Zongwei Wu, Yulun Zhang, Barth\'el\'emy Heyrman, C\'edric Demonceaux, Radu Timofte, Dominique Ginhac
Alignment-free HDR Deghosting with Semantics Consistent Transformer
Accepted to ICCV 2023. Version 2: Corrections are made to the conference proceedings to address issues with the production of our benchmark input. We have now updated Table 3 and Figure 6 to reflect these changes
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
High dynamic range (HDR) imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output. The essence is to leverage the contextual information, including both dynamic and static semantics, for better image generation. Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion. However, there is no research on jointly leveraging the dynamic and static context in a simultaneous manner. To delve into this problem, we propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules in the network. The spatial attention aims to deal with the intra-image correlation to model the dynamic motion, while the channel attention enables the inter-image intertwining to enhance the semantic consistency across frames. Aside from this, we introduce a novel realistic HDR dataset with more variations in foreground objects, environmental factors, and larger motions. Extensive comparisons on both conventional datasets and ours validate the effectiveness of our method, achieving the best trade-off on the performance and the computational cost.
[ { "created": "Mon, 29 May 2023 15:03:23 GMT", "version": "v1" }, { "created": "Thu, 28 Sep 2023 17:34:34 GMT", "version": "v2" } ]
2023-09-29
[ [ "Tel", "Steven", "" ], [ "Wu", "Zongwei", "" ], [ "Zhang", "Yulun", "" ], [ "Heyrman", "Barthélémy", "" ], [ "Demonceaux", "Cédric", "" ], [ "Timofte", "Radu", "" ], [ "Ginhac", "Dominique", "" ] ]
High dynamic range (HDR) imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output. The essence is to leverage the contextual information, including both dynamic and static semantics, for better image generation. Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion. However, there is no research on jointly leveraging the dynamic and static context in a simultaneous manner. To delve into this problem, we propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules in the network. The spatial attention aims to deal with the intra-image correlation to model the dynamic motion, while the channel attention enables the inter-image intertwining to enhance the semantic consistency across frames. Aside from this, we introduce a novel realistic HDR dataset with more variations in foreground objects, environmental factors, and larger motions. Extensive comparisons on both conventional datasets and ours validate the effectiveness of our method, achieving the best trade-off on the performance and the computational cost.
1303.4892
Raphael kena Poss
Raphael Poss
On whether and how D-RISC and Microgrids can be kept relevant (self-assessment report)
45 pages, 5 figures, 2 tables
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report lays flat my personal views on D-RISC and Microgrids as of March 2013. It reflects the opinions and insights that I have gained from working on this project during the period 2008-2013. This report is structed in two parts: deconstruction and reconstruction. In the deconstruction phase, I review what I believe are the fundamental motivation and goals of the D-RISC/Microgrids enterprise, and identify what I judge are shortcomings: that the project did not deliver on its expectations, that fundamental questions are left unanswered, and that its original motivation may not even be relevant in scientific research any more in this day and age. In the reconstruction phase, I start by identifying the merits of the current D-RISC/Microgrids technology and know-how taken at face value, re-motivate its existence from a different angle, and suggest new, relevant research questions that could justify continued scientific investment.
[ { "created": "Wed, 20 Mar 2013 10:17:54 GMT", "version": "v1" } ]
2013-03-21
[ [ "Poss", "Raphael", "" ] ]
This report lays flat my personal views on D-RISC and Microgrids as of March 2013. It reflects the opinions and insights that I have gained from working on this project during the period 2008-2013. This report is structed in two parts: deconstruction and reconstruction. In the deconstruction phase, I review what I believe are the fundamental motivation and goals of the D-RISC/Microgrids enterprise, and identify what I judge are shortcomings: that the project did not deliver on its expectations, that fundamental questions are left unanswered, and that its original motivation may not even be relevant in scientific research any more in this day and age. In the reconstruction phase, I start by identifying the merits of the current D-RISC/Microgrids technology and know-how taken at face value, re-motivate its existence from a different angle, and suggest new, relevant research questions that could justify continued scientific investment.
2205.02435
Song Wang
Song Wang, Yushun Dong, Xiao Huang, Chen Chen, Jundong Li
FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs
IJCAI-ECAI 2022
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class. To tackle the bottleneck of label scarcity, recent works propose to incorporate few-shot learning frameworks for fast adaptations to graph classes with limited labeled graphs. Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set. However, existing methods generally ignore task correlations among meta-training tasks while treating them independently. Nevertheless, such task correlations can advance the model generalization to the target task for better classification performance. On the other hand, it remains non-trivial to utilize task correlations due to the complex components in a large number of meta-training tasks. To deal with this, we propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph at different granularities. Then we further design a loss-based sampling strategy to select tasks with more correlated classes. Moreover, a task-specific classifier is proposed to utilize the learned task correlations for few-shot classification. Extensive experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
[ { "created": "Thu, 5 May 2022 04:28:32 GMT", "version": "v1" }, { "created": "Sat, 7 May 2022 01:51:12 GMT", "version": "v2" } ]
2022-05-10
[ [ "Wang", "Song", "" ], [ "Dong", "Yushun", "" ], [ "Huang", "Xiao", "" ], [ "Chen", "Chen", "" ], [ "Li", "Jundong", "" ] ]
Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class. To tackle the bottleneck of label scarcity, recent works propose to incorporate few-shot learning frameworks for fast adaptations to graph classes with limited labeled graphs. Specifically, these works propose to accumulate meta-knowledge across diverse meta-training tasks, and then generalize such meta-knowledge to the target task with a disjoint label set. However, existing methods generally ignore task correlations among meta-training tasks while treating them independently. Nevertheless, such task correlations can advance the model generalization to the target task for better classification performance. On the other hand, it remains non-trivial to utilize task correlations due to the complex components in a large number of meta-training tasks. To deal with this, we propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph at different granularities. Then we further design a loss-based sampling strategy to select tasks with more correlated classes. Moreover, a task-specific classifier is proposed to utilize the learned task correlations for few-shot classification. Extensive experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
2012.02509
Venugopal Mani
Behzad Shahrasbi, Venugopal Mani, Apoorv Reddy Arrabothu, Deepthi Sharma, Kannan Achan, Sushant Kumar
On Detecting Data Pollution Attacks On Recommender Systems Using Sequential GANs
8 pages, 4 Figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Recommender systems are an essential part of any e-commerce platform. Recommendations are typically generated by aggregating large amounts of user data. A malicious actor may be motivated to sway the output of such recommender systems by injecting malicious datapoints to leverage the system for financial gain. In this work, we propose a semi-supervised attack detection algorithm to identify the malicious datapoints. We do this by leveraging a portion of the dataset that has a lower chance of being polluted to learn the distribution of genuine datapoints. Our proposed approach modifies the Generative Adversarial Network architecture to take into account the contextual information from user activity. This allows the model to distinguish legitimate datapoints from the injected ones.
[ { "created": "Fri, 4 Dec 2020 10:31:28 GMT", "version": "v1" } ]
2020-12-07
[ [ "Shahrasbi", "Behzad", "" ], [ "Mani", "Venugopal", "" ], [ "Arrabothu", "Apoorv Reddy", "" ], [ "Sharma", "Deepthi", "" ], [ "Achan", "Kannan", "" ], [ "Kumar", "Sushant", "" ] ]
Recommender systems are an essential part of any e-commerce platform. Recommendations are typically generated by aggregating large amounts of user data. A malicious actor may be motivated to sway the output of such recommender systems by injecting malicious datapoints to leverage the system for financial gain. In this work, we propose a semi-supervised attack detection algorithm to identify the malicious datapoints. We do this by leveraging a portion of the dataset that has a lower chance of being polluted to learn the distribution of genuine datapoints. Our proposed approach modifies the Generative Adversarial Network architecture to take into account the contextual information from user activity. This allows the model to distinguish legitimate datapoints from the injected ones.
2404.17251
Samuel Cerezo
Samuel Cerezo, Javier Civera
Camera Motion Estimation from RGB-D-Inertial Scene Flow
Accepted to CVPR2024 Workshop on Visual Odometry and Computer Vision Applications
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce a novel formulation for camera motion estimation that integrates RGB-D images and inertial data through scene flow. Our goal is to accurately estimate the camera motion in a rigid 3D environment, along with the state of the inertial measurement unit (IMU). Our proposed method offers the flexibility to operate as a multi-frame optimization or to marginalize older data, thus effectively utilizing past measurements. To assess the performance of our method, we conducted evaluations using both synthetic data from the ICL-NUIM dataset and real data sequences from the OpenLORIS-Scene dataset. Our results show that the fusion of these two sensors enhances the accuracy of camera motion estimation when compared to using only visual data.
[ { "created": "Fri, 26 Apr 2024 08:42:59 GMT", "version": "v1" } ]
2024-04-29
[ [ "Cerezo", "Samuel", "" ], [ "Civera", "Javier", "" ] ]
In this paper, we introduce a novel formulation for camera motion estimation that integrates RGB-D images and inertial data through scene flow. Our goal is to accurately estimate the camera motion in a rigid 3D environment, along with the state of the inertial measurement unit (IMU). Our proposed method offers the flexibility to operate as a multi-frame optimization or to marginalize older data, thus effectively utilizing past measurements. To assess the performance of our method, we conducted evaluations using both synthetic data from the ICL-NUIM dataset and real data sequences from the OpenLORIS-Scene dataset. Our results show that the fusion of these two sensors enhances the accuracy of camera motion estimation when compared to using only visual data.
1102.0250
S Gorantla
Siva Gorantla, Todd Coleman
Information-Theoretic Viewpoints on Optimal Causal Coding-Decoding Problems
submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider an interacting two-agent sequential decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. Motivated by a desire to foster links between control and information theory, we augment the standard formulation by considering general alphabets and a cost function operating on current and previous symbols. Using dynamic programming, we provide a structural result whereby an optimal scheme exists that operates on appropriate sufficient statistics. We emphasize an example where the decoder alphabet lies in a space of beliefs on the source alphabet, and the additive cost function is a log likelihood ratio pertaining to sequential information gain. We also consider the inverse optimal control problem, where a fixed encoder/decoder pair satisfying statistical conditions is shown to be optimal for some cost function, using probabilistic matching. We provide examples of the applicability of this framework to communication with feedback, hidden Markov models and the nonlinear filter, decentralized control, brain-machine interfaces, and queuing theory.
[ { "created": "Tue, 1 Feb 2011 19:01:42 GMT", "version": "v1" } ]
2015-03-18
[ [ "Gorantla", "Siva", "" ], [ "Coleman", "Todd", "" ] ]
In this paper we consider an interacting two-agent sequential decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. Motivated by a desire to foster links between control and information theory, we augment the standard formulation by considering general alphabets and a cost function operating on current and previous symbols. Using dynamic programming, we provide a structural result whereby an optimal scheme exists that operates on appropriate sufficient statistics. We emphasize an example where the decoder alphabet lies in a space of beliefs on the source alphabet, and the additive cost function is a log likelihood ratio pertaining to sequential information gain. We also consider the inverse optimal control problem, where a fixed encoder/decoder pair satisfying statistical conditions is shown to be optimal for some cost function, using probabilistic matching. We provide examples of the applicability of this framework to communication with feedback, hidden Markov models and the nonlinear filter, decentralized control, brain-machine interfaces, and queuing theory.
1401.1100
Klaus Jaffe Dr
Klaus Jaffe, Astrid Florez, Cristina M Gomes, Daniel Rodriguez, Carla Achury
On the biological and cultural evolution of shame: Using internet search tools to weight values in many cultures
Submitted for publication
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shame has clear biological roots and its precise form of expression affects social cohesion and cultural characteristics. Here we explore the relative importance between shame and guilt by using Google Translate to produce translation for the words shame, guilt, pain, embarrassment and fear to the 64 languages covered. We also explore the meanings of these concepts among the Yanomami, a horticulturist hunter-gatherer tribe in the Orinoquia. Results show that societies previously described as 'guilt societies' have more words for guilt than for shame, but the large majority, including the societies previously described as 'shame societies', have more words for shame than for guilt. Results are consistent with evolutionary models of shame which predict a wide scatter in the relative importance between guilt and shame, suggesting that cultural evolution of shame has continued the work of biological evolution, and that neither provides a strong adaptive advantage to either shame or guilt. We propose that the study of shame will improve our understanding of the interaction between biological and cultural evolution in the evolution of cognition and emotions.
[ { "created": "Fri, 3 Jan 2014 15:34:28 GMT", "version": "v1" }, { "created": "Mon, 3 Feb 2014 21:57:06 GMT", "version": "v2" } ]
2014-02-05
[ [ "Jaffe", "Klaus", "" ], [ "Florez", "Astrid", "" ], [ "Gomes", "Cristina M", "" ], [ "Rodriguez", "Daniel", "" ], [ "Achury", "Carla", "" ] ]
Shame has clear biological roots and its precise form of expression affects social cohesion and cultural characteristics. Here we explore the relative importance between shame and guilt by using Google Translate to produce translation for the words shame, guilt, pain, embarrassment and fear to the 64 languages covered. We also explore the meanings of these concepts among the Yanomami, a horticulturist hunter-gatherer tribe in the Orinoquia. Results show that societies previously described as 'guilt societies' have more words for guilt than for shame, but the large majority, including the societies previously described as 'shame societies', have more words for shame than for guilt. Results are consistent with evolutionary models of shame which predict a wide scatter in the relative importance between guilt and shame, suggesting that cultural evolution of shame has continued the work of biological evolution, and that neither provides a strong adaptive advantage to either shame or guilt. We propose that the study of shame will improve our understanding of the interaction between biological and cultural evolution in the evolution of cognition and emotions.
1701.06181
Xinping Yi
Xinping Yi and Giuseppe Caire
The Optimality of Partial Clique Covering for Index Coding
A technical error has been detected in the proof of Theorem 1, so the optimality of partial clique covering remains open
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial clique covering is one of the most basic coding schemes for index coding problems, generalizing clique and cycle covering on the side information digraph and further reducing the achievable broadcast rate. In this paper, we start with partition multicast, a special case of partial clique covering with cover number 1, and show that partition multicast achieves the optimal broadcast rate of the multiple-unicast index coding if and only if the side information digraph is partially acyclic. A digraph is said to be partially acyclic if its sub-digraph induced by the vertex with maximum in-degree and its incoming neighbors in the complementary digraph is acyclic. We further extend to the general partial clique covering, offering sufficient conditions of its optimality and sub-optimality with the aid of strong connectivity decomposition. In addition, for some digraph classes, we also prove that the optimal broadcast rate can be approximated by partial clique covering (as well as by other basic schemes) within either a constant factor, or a multiplicative factor of $O(\frac{n}{\log n})$, or $O(n^\epsilon)$ for some $\epsilon \in (0,1)$.
[ { "created": "Sun, 22 Jan 2017 16:20:06 GMT", "version": "v1" }, { "created": "Mon, 28 May 2018 17:34:45 GMT", "version": "v2" } ]
2018-05-29
[ [ "Yi", "Xinping", "" ], [ "Caire", "Giuseppe", "" ] ]
Partial clique covering is one of the most basic coding schemes for index coding problems, generalizing clique and cycle covering on the side information digraph and further reducing the achievable broadcast rate. In this paper, we start with partition multicast, a special case of partial clique covering with cover number 1, and show that partition multicast achieves the optimal broadcast rate of the multiple-unicast index coding if and only if the side information digraph is partially acyclic. A digraph is said to be partially acyclic if its sub-digraph induced by the vertex with maximum in-degree and its incoming neighbors in the complementary digraph is acyclic. We further extend to the general partial clique covering, offering sufficient conditions of its optimality and sub-optimality with the aid of strong connectivity decomposition. In addition, for some digraph classes, we also prove that the optimal broadcast rate can be approximated by partial clique covering (as well as by other basic schemes) within either a constant factor, or a multiplicative factor of $O(\frac{n}{\log n})$, or $O(n^\epsilon)$ for some $\epsilon \in (0,1)$.
1904.02755
Soham Ghosh
Soham Ghosh, Anuva Agarwal, Zarana Parekh, Alexander Hauptmann
ExCL: Extractive Clip Localization Using Natural Language Descriptions
Accepted at NAACL 2019, Short Paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of retrieving clips within videos based on a given natural language query requires cross-modal reasoning over multiple frames. Prior approaches such as sliding window classifiers are inefficient, while text-clip similarity driven ranking-based approaches such as segment proposal networks are far more complicated. In order to select the most relevant video clip corresponding to the given text description, we propose a novel extractive approach that predicts the start and end frames by leveraging cross-modal interactions between the text and video - this removes the need to retrieve and re-rank multiple proposal segments. Using recurrent networks we encode the two modalities into a joint representation which is then used in different variants of start-end frame predictor networks. Through extensive experimentation and ablative analysis, we demonstrate that our simple and elegant approach significantly outperforms state of the art on two datasets and has comparable performance on a third.
[ { "created": "Thu, 4 Apr 2019 19:17:04 GMT", "version": "v1" } ]
2019-04-08
[ [ "Ghosh", "Soham", "" ], [ "Agarwal", "Anuva", "" ], [ "Parekh", "Zarana", "" ], [ "Hauptmann", "Alexander", "" ] ]
The task of retrieving clips within videos based on a given natural language query requires cross-modal reasoning over multiple frames. Prior approaches such as sliding window classifiers are inefficient, while text-clip similarity driven ranking-based approaches such as segment proposal networks are far more complicated. In order to select the most relevant video clip corresponding to the given text description, we propose a novel extractive approach that predicts the start and end frames by leveraging cross-modal interactions between the text and video - this removes the need to retrieve and re-rank multiple proposal segments. Using recurrent networks we encode the two modalities into a joint representation which is then used in different variants of start-end frame predictor networks. Through extensive experimentation and ablative analysis, we demonstrate that our simple and elegant approach significantly outperforms state of the art on two datasets and has comparable performance on a third.
1202.1877
Md. Tariq Aziz Tariq
Md. Tariq Aziz (1), Mohammad Saiful Islam (1), Md. Nazmul Islam khan (2) and Adrian Popescu (1) ((1) Blekinge Institute of Technology, Karlskrona, Sweden (2) Presidency University, Dhaka, Bangladesh)
Effect of Packet Delay Variation on Video-Voice over DiffServ-MPLS in IPv4-IPv6 Networks
21 Pages, 8 Figures; January 2012, Volume 3, Number 1 (IJDPS)
null
null
null
cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last years, we have witnessed a rapid deployment of real-time applications on the Internet as well as many research works about Quality of Service (QoS), in particular IPv4 (Internet Protocol version 4). The inevitable exhaustion of the remaining IPv4 address pool has become progressively evident. As the evolution of Internet Protocol (IP) continues, the deployment of IPv6 QoS is underway. Today, there is limited experience in the deployment of QoS for IPv6 traffic in MPLS backbone networks in conjunction with DiffServ (Differentiated Services) support. DiffServ itself does not have the ability to control the traffic which has been taken for end-to-end path while a number of links of the path are congested. In contrast, MPLS Traffic Engineering (TE) is accomplished to control the traffic and can set up end-to-end routing path before data has been forwarded. From the evolution of IPv4 QoS solutions, we know that the integration of DiffServ and MPLS TE satisfies the guaranteed QoS requirement for real-time applications. This paper presents a QoS performance study of real-time applications such as voice and video conferencing in terms of Packet Delay Variation (PDV) over DiffServ with or without MPLS TE in IPv4/IPv6 networks using Optimized Network Engineering Tool (OPNET). We also study the interaction of Expedited Forwarding (EF), Assured Forwarding (AF) traffic aggregation, link congestion, as well as the effect of performance metric such as PDV. The effectiveness of DiffServ and MPLS TE integration in IPv4/IPv6 network is illustrated and analyzed. This paper shows that IPv6 experiences more PDV than their IPv4 counterparts.
[ { "created": "Thu, 9 Feb 2012 02:47:14 GMT", "version": "v1" }, { "created": "Tue, 20 Mar 2012 15:00:14 GMT", "version": "v2" } ]
2012-03-21
[ [ "Aziz", "Md. Tariq", "" ], [ "Islam", "Mohammad Saiful", "" ], [ "khan", "Md. Nazmul Islam", "" ], [ "Popescu", "Adrian", "" ] ]
Over the last years, we have witnessed a rapid deployment of real-time applications on the Internet as well as many research works about Quality of Service (QoS), in particular IPv4 (Internet Protocol version 4). The inevitable exhaustion of the remaining IPv4 address pool has become progressively evident. As the evolution of Internet Protocol (IP) continues, the deployment of IPv6 QoS is underway. Today, there is limited experience in the deployment of QoS for IPv6 traffic in MPLS backbone networks in conjunction with DiffServ (Differentiated Services) support. DiffServ itself does not have the ability to control the traffic which has been taken for end-to-end path while a number of links of the path are congested. In contrast, MPLS Traffic Engineering (TE) is accomplished to control the traffic and can set up end-to-end routing path before data has been forwarded. From the evolution of IPv4 QoS solutions, we know that the integration of DiffServ and MPLS TE satisfies the guaranteed QoS requirement for real-time applications. This paper presents a QoS performance study of real-time applications such as voice and video conferencing in terms of Packet Delay Variation (PDV) over DiffServ with or without MPLS TE in IPv4/IPv6 networks using Optimized Network Engineering Tool (OPNET). We also study the interaction of Expedited Forwarding (EF), Assured Forwarding (AF) traffic aggregation, link congestion, as well as the effect of performance metric such as PDV. The effectiveness of DiffServ and MPLS TE integration in IPv4/IPv6 network is illustrated and analyzed. This paper shows that IPv6 experiences more PDV than their IPv4 counterparts.
1708.07942
Minh Nguyen
Minh Nguyen, Sanjay Purushotham, Hien To, Cyrus Shahabi
m-TSNE: A Framework for Visualizing High-Dimensional Multivariate Time Series
VAHC2016 Workshop on Visual Analytics in Healthcare in conjunction with AMIA 2016
null
null
null
cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate time series (MTS) have become increasingly common in healthcare domains where human vital signs and laboratory results are collected for predictive diagnosis. Recently, there have been increasing efforts to visualize healthcare MTS data based on star charts or parallel coordinates. However, such techniques might not be ideal for visualizing a large MTS dataset, since it is difficult to obtain insights or interpretations due to the inherent high dimensionality of MTS. In this paper, we propose 'm-TSNE': a simple and novel framework to visualize high-dimensional MTS data by projecting them into a low-dimensional (2-D or 3-D) space while capturing the underlying data properties. Our framework is easy to use and provides interpretable insights for healthcare professionals to understand MTS data. We evaluate our visualization framework on two real-world datasets and demonstrate that the results of our m-TSNE show patterns that are easy to understand while the other methods' visualization may have limitations in interpretability.
[ { "created": "Sat, 26 Aug 2017 07:21:58 GMT", "version": "v1" } ]
2017-08-29
[ [ "Nguyen", "Minh", "" ], [ "Purushotham", "Sanjay", "" ], [ "To", "Hien", "" ], [ "Shahabi", "Cyrus", "" ] ]
Multivariate time series (MTS) have become increasingly common in healthcare domains where human vital signs and laboratory results are collected for predictive diagnosis. Recently, there have been increasing efforts to visualize healthcare MTS data based on star charts or parallel coordinates. However, such techniques might not be ideal for visualizing a large MTS dataset, since it is difficult to obtain insights or interpretations due to the inherent high dimensionality of MTS. In this paper, we propose 'm-TSNE': a simple and novel framework to visualize high-dimensional MTS data by projecting them into a low-dimensional (2-D or 3-D) space while capturing the underlying data properties. Our framework is easy to use and provides interpretable insights for healthcare professionals to understand MTS data. We evaluate our visualization framework on two real-world datasets and demonstrate that the results of our m-TSNE show patterns that are easy to understand while the other methods' visualization may have limitations in interpretability.
2403.19345
Kangming Xu
Kangming Xu, Huiming Zhou, Haotian Zheng, Mingwei Zhu, Qi Xin
Intelligent Classification and Personalized Recommendation of E-commerce Products Based on Machine Learning
null
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid evolution of the Internet and the exponential proliferation of information, users encounter information overload and the conundrum of choice. Personalized recommendation systems play a pivotal role in alleviating this burden by aiding users in filtering and selecting information tailored to their preferences and requirements. Such systems not only enhance user experience and satisfaction but also furnish opportunities for businesses and platforms to augment user engagement, sales, and advertising efficacy.This paper undertakes a comparative analysis between the operational mechanisms of traditional e-commerce commodity classification systems and personalized recommendation systems. It delineates the significance and application of personalized recommendation systems across e-commerce, content information, and media domains. Furthermore, it delves into the challenges confronting personalized recommendation systems in e-commerce, including data privacy, algorithmic bias, scalability, and the cold start problem. Strategies to address these challenges are elucidated.Subsequently, the paper outlines a personalized recommendation system leveraging the BERT model and nearest neighbor algorithm, specifically tailored to address the exigencies of the eBay e-commerce platform. The efficacy of this recommendation system is substantiated through manual evaluation, and a practical application operational guide and structured output recommendation results are furnished to ensure the system's operability and scalability.
[ { "created": "Thu, 28 Mar 2024 12:02:45 GMT", "version": "v1" } ]
2024-03-29
[ [ "Xu", "Kangming", "" ], [ "Zhou", "Huiming", "" ], [ "Zheng", "Haotian", "" ], [ "Zhu", "Mingwei", "" ], [ "Xin", "Qi", "" ] ]
With the rapid evolution of the Internet and the exponential proliferation of information, users encounter information overload and the conundrum of choice. Personalized recommendation systems play a pivotal role in alleviating this burden by aiding users in filtering and selecting information tailored to their preferences and requirements. Such systems not only enhance user experience and satisfaction but also furnish opportunities for businesses and platforms to augment user engagement, sales, and advertising efficacy.This paper undertakes a comparative analysis between the operational mechanisms of traditional e-commerce commodity classification systems and personalized recommendation systems. It delineates the significance and application of personalized recommendation systems across e-commerce, content information, and media domains. Furthermore, it delves into the challenges confronting personalized recommendation systems in e-commerce, including data privacy, algorithmic bias, scalability, and the cold start problem. Strategies to address these challenges are elucidated.Subsequently, the paper outlines a personalized recommendation system leveraging the BERT model and nearest neighbor algorithm, specifically tailored to address the exigencies of the eBay e-commerce platform. The efficacy of this recommendation system is substantiated through manual evaluation, and a practical application operational guide and structured output recommendation results are furnished to ensure the system's operability and scalability.
2211.15022
Ernan Li
Ernan Li, Fandong Meng and Jie Zhou
Summer: WeChat Neural Machine Translation Systems for the WMT22 Biomedical Translation Task
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper introduces WeChat's participation in WMT 2022 shared biomedical translation task on Chinese to English. Our systems are based on the Transformer, and use several different Transformer structures to improve the quality of translation. In our experiments, we employ data filtering, data generation, several variants of Transformer, fine-tuning and model ensemble. Our Chinese$\to$English system, named Summer, achieves the highest BLEU score among all submissions.
[ { "created": "Mon, 28 Nov 2022 03:10:50 GMT", "version": "v1" } ]
2022-11-29
[ [ "Li", "Ernan", "" ], [ "Meng", "Fandong", "" ], [ "Zhou", "Jie", "" ] ]
This paper introduces WeChat's participation in WMT 2022 shared biomedical translation task on Chinese to English. Our systems are based on the Transformer, and use several different Transformer structures to improve the quality of translation. In our experiments, we employ data filtering, data generation, several variants of Transformer, fine-tuning and model ensemble. Our Chinese$\to$English system, named Summer, achieves the highest BLEU score among all submissions.
1405.1857
Debasish Chatterjee
Atreyee Kundu, Niranjan Balachandran, and Debasish Chatterjee
Deterministic and probabilistic algorithms for stabilizing discrete-time switched linear systems
11 pages, 2 figures
null
10.3934/mcrf.2019009
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we study algorithmic synthesis of the class of stabilizing switching signals for discrete-time switched linear systems proposed in [12]. A weighted digraph is associated in a natural way to a switched system, and the switching signal is expressed as an infinite walk on this weighted digraph. We employ graph-theoretic tools and discuss different algorithms for designing walks whose corresponding switching signals satisfy the stabilizing switching conditions proposed in [12]. We also address the issue of how likely/generic it is for a family of systems to admit stabilizing switching signals, and under mild assumptions give sufficient conditions for the same. Our solutions have both deterministic and probabilistic flavours.
[ { "created": "Thu, 8 May 2014 09:48:00 GMT", "version": "v1" }, { "created": "Wed, 17 Sep 2014 06:10:46 GMT", "version": "v2" } ]
2019-05-27
[ [ "Kundu", "Atreyee", "" ], [ "Balachandran", "Niranjan", "" ], [ "Chatterjee", "Debasish", "" ] ]
In this article we study algorithmic synthesis of the class of stabilizing switching signals for discrete-time switched linear systems proposed in [12]. A weighted digraph is associated in a natural way to a switched system, and the switching signal is expressed as an infinite walk on this weighted digraph. We employ graph-theoretic tools and discuss different algorithms for designing walks whose corresponding switching signals satisfy the stabilizing switching conditions proposed in [12]. We also address the issue of how likely/generic it is for a family of systems to admit stabilizing switching signals, and under mild assumptions give sufficient conditions for the same. Our solutions have both deterministic and probabilistic flavours.
2108.09666
Dahyun Kang
Dahyun Kang, Heeseung Kwon, Juhong Min, Minsu Cho
Relational Embedding for Few-Shot Classification
Accepted at ICCV 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to address the problem of few-shot classification by meta-learning "what to observe" and "where to attend" in a relational perspective. Our method leverages relational patterns within and between images via self-correlational representation (SCR) and cross-correlational attention (CCA). Within each image, the SCR module transforms a base feature map into a self-correlation tensor and learns to extract structural patterns from the tensor. Between the images, the CCA module computes cross-correlation between two image representations and learns to produce co-attention between them. Our Relational Embedding Network (RENet) combines the two relational modules to learn relational embedding in an end-to-end manner. In experimental evaluation, it achieves consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmarks of miniImageNet, tieredImageNet, CUB-200-2011, and CIFAR-FS.
[ { "created": "Sun, 22 Aug 2021 08:44:55 GMT", "version": "v1" } ]
2021-08-24
[ [ "Kang", "Dahyun", "" ], [ "Kwon", "Heeseung", "" ], [ "Min", "Juhong", "" ], [ "Cho", "Minsu", "" ] ]
We propose to address the problem of few-shot classification by meta-learning "what to observe" and "where to attend" in a relational perspective. Our method leverages relational patterns within and between images via self-correlational representation (SCR) and cross-correlational attention (CCA). Within each image, the SCR module transforms a base feature map into a self-correlation tensor and learns to extract structural patterns from the tensor. Between the images, the CCA module computes cross-correlation between two image representations and learns to produce co-attention between them. Our Relational Embedding Network (RENet) combines the two relational modules to learn relational embedding in an end-to-end manner. In experimental evaluation, it achieves consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmarks of miniImageNet, tieredImageNet, CUB-200-2011, and CIFAR-FS.
2310.10573
Chris Hays
Cynthia Dwork, Chris Hays, Jon Kleinberg, Manish Raghavan
Content Moderation and the Formation of Online Communities: A Theoretical Framework
46 pages, 10 figures
null
null
null
cs.DS cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the impact of content moderation policies in online communities. In our theoretical model, a platform chooses a content moderation policy and individuals choose whether or not to participate in the community according to the fraction of user content that aligns with their preferences. The effects of content moderation, at first blush, might seem obvious: it restricts speech on a platform. However, when user participation decisions are taken into account, its effects can be more subtle $\unicode{x2013}$ and counter-intuitive. For example, our model can straightforwardly demonstrate how moderation policies may increase participation and diversify content available on the platform. In our analysis, we explore a rich set of interconnected phenomena related to content moderation in online communities. We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities. Building on this, we explore how resource-limited or ideological platforms might set policies, how communities are affected by differing levels of personalization, and competition between platforms. Our model provides a vocabulary and mathematically tractable framework for analyzing platform decisions about content moderation.
[ { "created": "Mon, 16 Oct 2023 16:49:44 GMT", "version": "v1" } ]
2023-10-17
[ [ "Dwork", "Cynthia", "" ], [ "Hays", "Chris", "" ], [ "Kleinberg", "Jon", "" ], [ "Raghavan", "Manish", "" ] ]
We study the impact of content moderation policies in online communities. In our theoretical model, a platform chooses a content moderation policy and individuals choose whether or not to participate in the community according to the fraction of user content that aligns with their preferences. The effects of content moderation, at first blush, might seem obvious: it restricts speech on a platform. However, when user participation decisions are taken into account, its effects can be more subtle $\unicode{x2013}$ and counter-intuitive. For example, our model can straightforwardly demonstrate how moderation policies may increase participation and diversify content available on the platform. In our analysis, we explore a rich set of interconnected phenomena related to content moderation in online communities. We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities. Building on this, we explore how resource-limited or ideological platforms might set policies, how communities are affected by differing levels of personalization, and competition between platforms. Our model provides a vocabulary and mathematically tractable framework for analyzing platform decisions about content moderation.
2003.07097
Bahram Kalhor
Bahram Kalhor, Alireza Nikravanshalmani
Correlation between Content and Traffic of the Universities Website
16 pagges
International Journal of Information Science and Management Vol. 13, No. 2, 2015, 61-76
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this study is to analyse the correlation between content and traffic of 21,485 academic websites (universities and research institutes). The achieved result is used as an indicator which shows the performance of the websites for attracting more visitors. This inspires a best practice for developing new websites or promoting the traffic of the existing websites. At the first step, content of the site is divided into three major items which are: Size, Papers and Rich Files. Then, the Spearman correlation between traffic of the websites and these items are calculated for each country and for the world, respectively. At the next step, countries are ranked based on their correlations, also a new indicator is proposed from combining these three correlations of the countries. Results show that in most countries, correlation between traffic of the websites and Papers is less than correlations between traffic of the websites and Rich Files and Size.
[ { "created": "Mon, 16 Mar 2020 10:17:27 GMT", "version": "v1" } ]
2020-03-17
[ [ "Kalhor", "Bahram", "" ], [ "Nikravanshalmani", "Alireza", "" ] ]
The purpose of this study is to analyse the correlation between content and traffic of 21,485 academic websites (universities and research institutes). The achieved result is used as an indicator which shows the performance of the websites for attracting more visitors. This inspires a best practice for developing new websites or promoting the traffic of the existing websites. At the first step, content of the site is divided into three major items which are: Size, Papers and Rich Files. Then, the Spearman correlation between traffic of the websites and these items are calculated for each country and for the world, respectively. At the next step, countries are ranked based on their correlations, also a new indicator is proposed from combining these three correlations of the countries. Results show that in most countries, correlation between traffic of the websites and Papers is less than correlations between traffic of the websites and Rich Files and Size.
1608.02327
Petr Jancar
Petr Jancar
Deciding structural liveness of Petri nets
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place/transition Petri nets are a standard model for a class of distributed systems whose reachability spaces might be infinite. One of well-studied topics is the verification of safety and liveness properties in this model; despite the extensive research effort, some basic problems remain open, which is exemplified by the open complexity status of the reachability problem. The liveness problems are known to be closely related to the reachability problem, and many structural properties of nets that are related to liveness have been studied. Somewhat surprisingly, the decidability status of the problem if a net is structurally live, i.e. if there is an initial marking for which it is live, has remained open, as also a recent paper (Best and Esparza, 2016) emphasizes. Here we show that the structural liveness problem for Petri nets is decidable. A crucial ingredient of the proof is the result by Leroux (LiCS 2013) showing that we can compute a finite (Presburger) description of the reachability set for a marked Petri net if this set is semilinear.
[ { "created": "Mon, 8 Aug 2016 06:05:59 GMT", "version": "v1" } ]
2016-08-09
[ [ "Jancar", "Petr", "" ] ]
Place/transition Petri nets are a standard model for a class of distributed systems whose reachability spaces might be infinite. One of well-studied topics is the verification of safety and liveness properties in this model; despite the extensive research effort, some basic problems remain open, which is exemplified by the open complexity status of the reachability problem. The liveness problems are known to be closely related to the reachability problem, and many structural properties of nets that are related to liveness have been studied. Somewhat surprisingly, the decidability status of the problem if a net is structurally live, i.e. if there is an initial marking for which it is live, has remained open, as also a recent paper (Best and Esparza, 2016) emphasizes. Here we show that the structural liveness problem for Petri nets is decidable. A crucial ingredient of the proof is the result by Leroux (LiCS 2013) showing that we can compute a finite (Presburger) description of the reachability set for a marked Petri net if this set is semilinear.
1906.09524
Jian Wang
Yi-Fei PU, Jian Wang
Fractional-order Backpropagation Neural Networks: Modified Fractional-order Steepest Descent Method for Family of Backpropagation Neural Networks
null
null
null
null
cs.NE cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper offers a novel mathematical approach, the modified Fractional-order Steepest Descent Method (FSDM) for training BackPropagation Neural Networks (BPNNs); this differs from the majority of the previous approaches and as such. A promising mathematical method, fractional calculus, has the potential to assume a prominent role in the applications of neural networks and cybernetics because of its inherent strengths such as long-term memory, nonlocality, and weak singularity. Therefore, to improve the optimization performance of classic first-order BPNNs, in this paper we study whether it could be possible to modified FSDM and generalize classic first-order BPNNs to modified FSDM based Fractional-order Backpropagation Neural Networks (FBPNNs). Motivated by this inspiration, this paper proposes a state-of-the-art application of fractional calculus to implement a modified FSDM based FBPNN whose reverse incremental search is in the negative directions of the approximate fractional-order partial derivatives of the square error. At first, the theoretical concept of a modified FSDM based FBPNN is described mathematically. Then, the mathematical proof of the fractional-order global optimal convergence, an assumption of the structure, and the fractional-order multi-scale global optimization of a modified FSDM based FBPNN are analysed in detail. Finally, we perform comparative experiments and compare a modified FSDM based FBPNN with a classic first-order BPNN, i.e., an example function approximation, fractional-order multi-scale global optimization, and two comparative performances with real data. The more efficient optimal searching capability of the fractional-order multi-scale global optimization of a modified FSDM based FBPNN to determine the global optimal solution is the major advantage being superior to a classic first-order BPNN.
[ { "created": "Sun, 23 Jun 2019 00:30:23 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2019 08:05:57 GMT", "version": "v2" } ]
2019-07-11
[ [ "PU", "Yi-Fei", "" ], [ "Wang", "Jian", "" ] ]
This paper offers a novel mathematical approach, the modified Fractional-order Steepest Descent Method (FSDM) for training BackPropagation Neural Networks (BPNNs); this differs from the majority of the previous approaches and as such. A promising mathematical method, fractional calculus, has the potential to assume a prominent role in the applications of neural networks and cybernetics because of its inherent strengths such as long-term memory, nonlocality, and weak singularity. Therefore, to improve the optimization performance of classic first-order BPNNs, in this paper we study whether it could be possible to modified FSDM and generalize classic first-order BPNNs to modified FSDM based Fractional-order Backpropagation Neural Networks (FBPNNs). Motivated by this inspiration, this paper proposes a state-of-the-art application of fractional calculus to implement a modified FSDM based FBPNN whose reverse incremental search is in the negative directions of the approximate fractional-order partial derivatives of the square error. At first, the theoretical concept of a modified FSDM based FBPNN is described mathematically. Then, the mathematical proof of the fractional-order global optimal convergence, an assumption of the structure, and the fractional-order multi-scale global optimization of a modified FSDM based FBPNN are analysed in detail. Finally, we perform comparative experiments and compare a modified FSDM based FBPNN with a classic first-order BPNN, i.e., an example function approximation, fractional-order multi-scale global optimization, and two comparative performances with real data. The more efficient optimal searching capability of the fractional-order multi-scale global optimization of a modified FSDM based FBPNN to determine the global optimal solution is the major advantage being superior to a classic first-order BPNN.
2103.03390
Nikola Zubi\'c
Nikola Zubi\'c, Pietro Li\`o
An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering
21 page, 13 figures, 6 tables, to appear as a full paper with oral contribution in AIAI 2021
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Differentiable rendering is a very successful technique that applies to a Single-View 3D Reconstruction. Current renderers use losses based on pixels between a rendered image of some 3D reconstructed object and ground-truth images from given matched viewpoints to optimise parameters of the 3D shape. These models require a rendering step, along with visibility handling and evaluation of the shading model. The main goal of this paper is to demonstrate that we can avoid these steps and still get reconstruction results as other state-of-the-art models that are equal or even better than existing category-specific reconstruction methods. First, we use the same CNN architecture for the prediction of a point cloud shape and pose prediction like the one used by Insafutdinov & Dosovitskiy. Secondly, we propose the novel effective loss function that evaluates how well the projections of reconstructed 3D point clouds cover the ground truth object's silhouette. Then we use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh. Finally, we perform a GAN-based texture mapping on a particular 3D mesh and produce a textured 3D mesh from a single 2D image. We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all the other supervised and unsupervised methods and 3D representations, all in terms of performance, accuracy, and training time.
[ { "created": "Fri, 5 Mar 2021 00:02:18 GMT", "version": "v1" }, { "created": "Fri, 30 Apr 2021 09:47:39 GMT", "version": "v2" } ]
2021-05-03
[ [ "Zubić", "Nikola", "" ], [ "Liò", "Pietro", "" ] ]
Differentiable rendering is a very successful technique that applies to a Single-View 3D Reconstruction. Current renderers use losses based on pixels between a rendered image of some 3D reconstructed object and ground-truth images from given matched viewpoints to optimise parameters of the 3D shape. These models require a rendering step, along with visibility handling and evaluation of the shading model. The main goal of this paper is to demonstrate that we can avoid these steps and still get reconstruction results as other state-of-the-art models that are equal or even better than existing category-specific reconstruction methods. First, we use the same CNN architecture for the prediction of a point cloud shape and pose prediction like the one used by Insafutdinov & Dosovitskiy. Secondly, we propose the novel effective loss function that evaluates how well the projections of reconstructed 3D point clouds cover the ground truth object's silhouette. Then we use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh. Finally, we perform a GAN-based texture mapping on a particular 3D mesh and produce a textured 3D mesh from a single 2D image. We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all the other supervised and unsupervised methods and 3D representations, all in terms of performance, accuracy, and training time.
2107.10538
Wenwen Gong
Wenwen Gong, Huiping Wu, Xiaokang Wang, Xuyun Zhang, Yawei Wang, Yifei Chen, Mohammad R. Khosravi
Diversified and Compatible Web APIs Recommendation in IoT
15 pages, 11 figures
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the ever-increasing popularity of Service-oriented Architecture (SoA) and Internet of Things (IoT), a considerable number of enterprises or organizations are attempting to encapsulate their provided complex business services into various lightweight and accessible web APIs (application programming interfaces) with diverse functions. In this situation, a software developer can select a group of preferred web APIs from a massive number of candidates to create a complex mashup economically and quickly based on the keywords typed by the developer. However, traditional keyword-based web API search approaches often suffer from the following difficulties and challenges. First, they often focus more on the functional matching between the candidate web APIs and the mashup to be developed while neglecting the compatibility among different APIs, which probably returns a group of incompatible web APIs and further leads to a mashup development failure. Second, existing approaches often return a web API composition solution to the mashup developer for reference, which narrows the developer's API selection scope considerably and may reduce developer satisfaction heavily. In view of the above challenges and successful application of game theory in the IoT, based on the idea of game theory, we propose a compatible and diverse web APIs recommendation approach for mashup creations, named MCCOMP+DIV, to return multiple sets of diverse and compatible web APIs with higher success rate. Finally, we validate the effectiveness and efficiency of MCCOMP+DIV through a set of experiments based on a real-world web API dataset, i.e., the PW dataset crawled from ProgrammableWeb.com.
[ { "created": "Thu, 22 Jul 2021 09:32:31 GMT", "version": "v1" }, { "created": "Wed, 11 Aug 2021 23:23:42 GMT", "version": "v2" } ]
2021-08-13
[ [ "Gong", "Wenwen", "" ], [ "Wu", "Huiping", "" ], [ "Wang", "Xiaokang", "" ], [ "Zhang", "Xuyun", "" ], [ "Wang", "Yawei", "" ], [ "Chen", "Yifei", "" ], [ "Khosravi", "Mohammad R.", "" ] ]
With the ever-increasing popularity of Service-oriented Architecture (SoA) and Internet of Things (IoT), a considerable number of enterprises or organizations are attempting to encapsulate their provided complex business services into various lightweight and accessible web APIs (application programming interfaces) with diverse functions. In this situation, a software developer can select a group of preferred web APIs from a massive number of candidates to create a complex mashup economically and quickly based on the keywords typed by the developer. However, traditional keyword-based web API search approaches often suffer from the following difficulties and challenges. First, they often focus more on the functional matching between the candidate web APIs and the mashup to be developed while neglecting the compatibility among different APIs, which probably returns a group of incompatible web APIs and further leads to a mashup development failure. Second, existing approaches often return a web API composition solution to the mashup developer for reference, which narrows the developer's API selection scope considerably and may reduce developer satisfaction heavily. In view of the above challenges and successful application of game theory in the IoT, based on the idea of game theory, we propose a compatible and diverse web APIs recommendation approach for mashup creations, named MCCOMP+DIV, to return multiple sets of diverse and compatible web APIs with higher success rate. Finally, we validate the effectiveness and efficiency of MCCOMP+DIV through a set of experiments based on a real-world web API dataset, i.e., the PW dataset crawled from ProgrammableWeb.com.
1712.08819
Alexander Panchenko
Chris Biemann, Stefano Faralli, Alexander Panchenko, Simone Paolo Ponzetto
A Framework for Enriching Lexical Semantic Resources with Distributional Semantics
Accepted for publication in the journal of Natural Language Engineering, 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach to combining distributional semantic representations induced from text corpora with manually constructed lexical-semantic networks. While both kinds of semantic resources are available with high lexical coverage, our aligned resource combines the domain specificity and availability of contextual information from distributional models with the conciseness and high quality of manually crafted lexical networks. We start with a distributional representation of induced senses of vocabulary terms, which are accompanied with rich context information given by related lexical items. We then automatically disambiguate such representations to obtain a full-fledged proto-conceptualization, i.e. a typed graph of induced word senses. In a final step, this proto-conceptualization is aligned to a lexical ontology, resulting in a hybrid aligned resource. Moreover, unmapped induced senses are associated with a semantic type in order to connect them to the core resource. Manual evaluations against ground-truth judgments for different stages of our method as well as an extrinsic evaluation on a knowledge-based Word Sense Disambiguation benchmark all indicate the high quality of the new hybrid resource. Additionally, we show the benefits of enriching top-down lexical knowledge resources with bottom-up distributional information from text for addressing high-end knowledge acquisition tasks such as cleaning hypernym graphs and learning taxonomies from scratch.
[ { "created": "Sat, 23 Dec 2017 18:46:58 GMT", "version": "v1" } ]
2017-12-27
[ [ "Biemann", "Chris", "" ], [ "Faralli", "Stefano", "" ], [ "Panchenko", "Alexander", "" ], [ "Ponzetto", "Simone Paolo", "" ] ]
We present an approach to combining distributional semantic representations induced from text corpora with manually constructed lexical-semantic networks. While both kinds of semantic resources are available with high lexical coverage, our aligned resource combines the domain specificity and availability of contextual information from distributional models with the conciseness and high quality of manually crafted lexical networks. We start with a distributional representation of induced senses of vocabulary terms, which are accompanied with rich context information given by related lexical items. We then automatically disambiguate such representations to obtain a full-fledged proto-conceptualization, i.e. a typed graph of induced word senses. In a final step, this proto-conceptualization is aligned to a lexical ontology, resulting in a hybrid aligned resource. Moreover, unmapped induced senses are associated with a semantic type in order to connect them to the core resource. Manual evaluations against ground-truth judgments for different stages of our method as well as an extrinsic evaluation on a knowledge-based Word Sense Disambiguation benchmark all indicate the high quality of the new hybrid resource. Additionally, we show the benefits of enriching top-down lexical knowledge resources with bottom-up distributional information from text for addressing high-end knowledge acquisition tasks such as cleaning hypernym graphs and learning taxonomies from scratch.
2110.03536
Zhao Ren
Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl
Prototype Learning for Interpretable Respiratory Sound Analysis
Technical report of the paper accepted by IEEE ICASSP 2022
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remote screening of respiratory diseases has been widely studied as a non-invasive and early instrument for diagnosis purposes, especially in the pandemic. The respiratory sound classification task has been realized with numerous deep neural network (DNN) models due to their superior performance. However, in the high-stake medical domain where decisions can have significant consequences, it is desirable to develop interpretable models; thus, providing understandable reasons for physicians and patients. To address the issue, we propose a prototype learning framework, that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms the state-of-the-art approaches on the largest public respiratory sound database.
[ { "created": "Thu, 7 Oct 2021 14:59:01 GMT", "version": "v1" }, { "created": "Mon, 10 Jan 2022 08:59:50 GMT", "version": "v2" }, { "created": "Wed, 2 Feb 2022 09:31:40 GMT", "version": "v3" }, { "created": "Mon, 7 Feb 2022 09:55:15 GMT", "version": "v4" } ]
2022-02-08
[ [ "Ren", "Zhao", "" ], [ "Nguyen", "Thanh Tam", "" ], [ "Nejdl", "Wolfgang", "" ] ]
Remote screening of respiratory diseases has been widely studied as a non-invasive and early instrument for diagnosis purposes, especially in the pandemic. The respiratory sound classification task has been realized with numerous deep neural network (DNN) models due to their superior performance. However, in the high-stake medical domain where decisions can have significant consequences, it is desirable to develop interpretable models; thus, providing understandable reasons for physicians and patients. To address the issue, we propose a prototype learning framework, that jointly generates exemplar samples for explanation and integrates these samples into a layer of DNNs. The experimental results indicate that our method outperforms the state-of-the-art approaches on the largest public respiratory sound database.
2401.12000
Leonardo Alexandre
Leonardo Alexandre and Rafael S. Costa and Rui Henriques
Integrating Statistical Significance and Discriminative Power in Pattern Discovery
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pattern discovery plays a central role in both descriptive and predictive tasks across multiple domains. Actionable patterns must meet rigorous statistical significance criteria and, in the presence of target variables, further uphold discriminative power. Our work addresses the underexplored area of guiding pattern discovery by integrating statistical significance and discriminative power criteria into state-of-the-art algorithms while preserving pattern quality. We also address how pattern quality thresholds, imposed by some algorithms, can be rectified to accommodate these additional criteria. To test the proposed methodology, we select the triclustering task as the guiding pattern discovery case and extend well-known greedy and multi-objective optimization triclustering algorithms, $\delta$-Trimax and TriGen, that use various pattern quality criteria, such as Mean Squared Residual (MSR), Least Squared Lines (LSL), and Multi Slope Measure (MSL). Results from three case studies show the role of the proposed methodology in discovering patterns with pronounced improvements of discriminative power and statistical significance without quality deterioration, highlighting its importance in supervisedly guiding the search. Although the proposed methodology is motivated over multivariate time series data, it can be straightforwardly extended to pattern discovery tasks involving multivariate, N-way (N>3), transactional, and sequential data structures. Availability: The code is freely available at https://github.com/JupitersMight/MOF_Triclustering under the MIT license.
[ { "created": "Mon, 22 Jan 2024 14:51:01 GMT", "version": "v1" } ]
2024-01-23
[ [ "Alexandre", "Leonardo", "" ], [ "Costa", "Rafael S.", "" ], [ "Henriques", "Rui", "" ] ]
Pattern discovery plays a central role in both descriptive and predictive tasks across multiple domains. Actionable patterns must meet rigorous statistical significance criteria and, in the presence of target variables, further uphold discriminative power. Our work addresses the underexplored area of guiding pattern discovery by integrating statistical significance and discriminative power criteria into state-of-the-art algorithms while preserving pattern quality. We also address how pattern quality thresholds, imposed by some algorithms, can be rectified to accommodate these additional criteria. To test the proposed methodology, we select the triclustering task as the guiding pattern discovery case and extend well-known greedy and multi-objective optimization triclustering algorithms, $\delta$-Trimax and TriGen, that use various pattern quality criteria, such as Mean Squared Residual (MSR), Least Squared Lines (LSL), and Multi Slope Measure (MSL). Results from three case studies show the role of the proposed methodology in discovering patterns with pronounced improvements of discriminative power and statistical significance without quality deterioration, highlighting its importance in supervisedly guiding the search. Although the proposed methodology is motivated over multivariate time series data, it can be straightforwardly extended to pattern discovery tasks involving multivariate, N-way (N>3), transactional, and sequential data structures. Availability: The code is freely available at https://github.com/JupitersMight/MOF_Triclustering under the MIT license.
2011.02048
Xutai Ma
Xutai Ma, Juan Pino, Philipp Koehn
SimulMT to SimulST: Adapting Simultaneous Text Translation to End-to-End Simultaneous Speech Translation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous text translation and end-to-end speech translation have recently made great progress but little work has combined these tasks together. We investigate how to adapt simultaneous text translation methods such as wait-k and monotonic multihead attention to end-to-end simultaneous speech translation by introducing a pre-decision module. A detailed analysis is provided on the latency-quality trade-offs of combining fixed and flexible pre-decision with fixed and flexible policies. We also design a novel computation-aware latency metric, adapted from Average Lagging.
[ { "created": "Tue, 3 Nov 2020 22:47:58 GMT", "version": "v1" } ]
2020-11-05
[ [ "Ma", "Xutai", "" ], [ "Pino", "Juan", "" ], [ "Koehn", "Philipp", "" ] ]
Simultaneous text translation and end-to-end speech translation have recently made great progress but little work has combined these tasks together. We investigate how to adapt simultaneous text translation methods such as wait-k and monotonic multihead attention to end-to-end simultaneous speech translation by introducing a pre-decision module. A detailed analysis is provided on the latency-quality trade-offs of combining fixed and flexible pre-decision with fixed and flexible policies. We also design a novel computation-aware latency metric, adapted from Average Lagging.
1706.07506
Massimiliano Ruocco
Massimiliano Ruocco, Ole Steinar Lillest{\o}l Skrede, Helge Langseth
Inter-Session Modeling for Session-Based Recommendation
null
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, research has been done on applying Recurrent Neural Networks (RNNs) as recommender systems. Results have been promising, especially in the session-based setting where RNNs have been shown to outperform state-of-the-art models. In many of these experiments, the RNN could potentially improve the recommendations by utilizing information about the user's past sessions, in addition to its own interactions in the current session. A problem for session-based recommendation, is how to produce accurate recommendations at the start of a session, before the system has learned much about the user's current interests. We propose a novel approach that extends a RNN recommender to be able to process the user's recent sessions, in order to improve recommendations. This is done by using a second RNN to learn from recent sessions, and predict the user's interest in the current session. By feeding this information to the original RNN, it is able to improve its recommendations. Our experiments on two different datasets show that the proposed approach can significantly improve recommendations throughout the sessions, compared to a single RNN working only on the current session. The proposed model especially improves recommendations at the start of sessions, and is therefore able to deal with the cold start problem within sessions.
[ { "created": "Thu, 22 Jun 2017 22:17:00 GMT", "version": "v1" } ]
2017-06-26
[ [ "Ruocco", "Massimiliano", "" ], [ "Skrede", "Ole Steinar Lillestøl", "" ], [ "Langseth", "Helge", "" ] ]
In recent years, research has been done on applying Recurrent Neural Networks (RNNs) as recommender systems. Results have been promising, especially in the session-based setting where RNNs have been shown to outperform state-of-the-art models. In many of these experiments, the RNN could potentially improve the recommendations by utilizing information about the user's past sessions, in addition to its own interactions in the current session. A problem for session-based recommendation, is how to produce accurate recommendations at the start of a session, before the system has learned much about the user's current interests. We propose a novel approach that extends a RNN recommender to be able to process the user's recent sessions, in order to improve recommendations. This is done by using a second RNN to learn from recent sessions, and predict the user's interest in the current session. By feeding this information to the original RNN, it is able to improve its recommendations. Our experiments on two different datasets show that the proposed approach can significantly improve recommendations throughout the sessions, compared to a single RNN working only on the current session. The proposed model especially improves recommendations at the start of sessions, and is therefore able to deal with the cold start problem within sessions.
2309.08362
Minyar Sassi Hidri
Rania Mkhinini Gahar, Olfa Arfaoui, Minyar Sassi Hidri
Towards Big Data Modeling and Management Systems: From DBMS to BDMS
6 pages, 9 Figures
2023 IEEE International Conference on Advanced Systems and Emergent Technologies (IC_ASET)
10.1109/IC_ASET58101.2023.10151190
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
To succeed in a Big Data strategy, you have to arm yourself with a wide range of data skills and best practices. This strategy can result in an impressive asset that can streamline operational costs, reduce time to market, and enable the creation of new products. However, several Big Data challenges may take place in enterprises when it comes to moving initiatives of boardroom discussions to effective practices. From a broader perspective, we take on this paper two very important challenges, namely modeling, and management. The main context here is to highlight the importance of understanding data modeling and knowing how to process complex data while supporting the characteristics of each model.
[ { "created": "Fri, 15 Sep 2023 12:40:51 GMT", "version": "v1" } ]
2023-09-18
[ [ "Gahar", "Rania Mkhinini", "" ], [ "Arfaoui", "Olfa", "" ], [ "Hidri", "Minyar Sassi", "" ] ]
To succeed in a Big Data strategy, you have to arm yourself with a wide range of data skills and best practices. This strategy can result in an impressive asset that can streamline operational costs, reduce time to market, and enable the creation of new products. However, several Big Data challenges may take place in enterprises when it comes to moving initiatives of boardroom discussions to effective practices. From a broader perspective, we take on this paper two very important challenges, namely modeling, and management. The main context here is to highlight the importance of understanding data modeling and knowing how to process complex data while supporting the characteristics of each model.
1408.1506
Roope Vehkalahti
Roope Vehkalahti, Laura Luzzi and Jean-Claude Belfiore
Shifted inverse determinant sums and new bounds for the DMT of space-time lattice codes
To appear in Proc. 2014 IEEE Int. Symp. Inform. Theory (ISIT), Hawaii, USA, 2014
null
10.1109/ISIT.2014.6875250
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers shifted inverse determinant sums arising from the union bound of the pairwise error probability for space-time codes in multiple-antenna fading channels. Previous work by Vehkalahti et al. focused on the approximation of these sums for low multiplexing gains, providing a complete classification of the inverse determinant sums as a function of constellation size for the most well-known algebraic space-time codes. This work aims at building a general framework for the study of the shifted sums for all multiplexing gains. New bounds obtained using dyadic summing techniques suggest that the behavior of the shifted sums does characterize many properties of a lattice code such as the diversity-multiplexing gain trade-off, both under maximum-likelihood decoding and infinite lattice naive decoding. Moreover, these bounds allow to characterize the signal-to-noise ratio thresholds corresponding to different diversity gains.
[ { "created": "Thu, 7 Aug 2014 07:57:46 GMT", "version": "v1" } ]
2016-11-15
[ [ "Vehkalahti", "Roope", "" ], [ "Luzzi", "Laura", "" ], [ "Belfiore", "Jean-Claude", "" ] ]
This paper considers shifted inverse determinant sums arising from the union bound of the pairwise error probability for space-time codes in multiple-antenna fading channels. Previous work by Vehkalahti et al. focused on the approximation of these sums for low multiplexing gains, providing a complete classification of the inverse determinant sums as a function of constellation size for the most well-known algebraic space-time codes. This work aims at building a general framework for the study of the shifted sums for all multiplexing gains. New bounds obtained using dyadic summing techniques suggest that the behavior of the shifted sums does characterize many properties of a lattice code such as the diversity-multiplexing gain trade-off, both under maximum-likelihood decoding and infinite lattice naive decoding. Moreover, these bounds allow to characterize the signal-to-noise ratio thresholds corresponding to different diversity gains.
2111.00092
Abhin Shah
Abhin Shah, Wei-Ning Chen, Johannes Balle, Peter Kairouz, Lucas Theis
Optimal Compression of Locally Differentially Private Mechanisms
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressing the output of \epsilon-locally differentially private (LDP) randomizers naively leads to suboptimal utility. In this work, we demonstrate the benefits of using schemes that jointly compress and privatize the data using shared randomness. In particular, we investigate a family of schemes based on Minimal Random Coding (Havasi et al., 2019) and prove that they offer optimal privacy-accuracy-communication tradeoffs. Our theoretical and empirical findings show that our approach can compress PrivUnit (Bhowmick et al., 2018) and Subset Selection (Ye et al., 2018), the best known LDP algorithms for mean and frequency estimation, to to the order of \epsilon-bits of communication while preserving their privacy and accuracy guarantees.
[ { "created": "Fri, 29 Oct 2021 21:36:34 GMT", "version": "v1" }, { "created": "Sat, 26 Feb 2022 17:56:55 GMT", "version": "v2" } ]
2022-03-01
[ [ "Shah", "Abhin", "" ], [ "Chen", "Wei-Ning", "" ], [ "Balle", "Johannes", "" ], [ "Kairouz", "Peter", "" ], [ "Theis", "Lucas", "" ] ]
Compressing the output of \epsilon-locally differentially private (LDP) randomizers naively leads to suboptimal utility. In this work, we demonstrate the benefits of using schemes that jointly compress and privatize the data using shared randomness. In particular, we investigate a family of schemes based on Minimal Random Coding (Havasi et al., 2019) and prove that they offer optimal privacy-accuracy-communication tradeoffs. Our theoretical and empirical findings show that our approach can compress PrivUnit (Bhowmick et al., 2018) and Subset Selection (Ye et al., 2018), the best known LDP algorithms for mean and frequency estimation, to to the order of \epsilon-bits of communication while preserving their privacy and accuracy guarantees.
2112.12579
Yancong Lin
Yancong Lin, Silvia-Laura Pintea, Jan van Gemert
NeRD++: Improved 3D-mirror symmetry learning from a single image
BMVC 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Many objects are naturally symmetric, and this symmetry can be exploited to infer unseen 3D properties from a single 2D image. Recently, NeRD is proposed for accurate 3D mirror plane estimation from a single image. Despite the unprecedented accuracy, it relies on large annotated datasets for training and suffers from slow inference. Here we aim to improve its data and compute efficiency. We do away with the computationally expensive 4D feature volumes and instead explicitly compute the feature correlation of the pixel correspondences across depth, thus creating a compact 3D volume. We also design multi-stage spherical convolutions to identify the optimal mirror plane on the hemisphere, whose inductive bias offers gains in data-efficiency. Experiments on both synthetic and real-world datasets show the benefit of our proposed changes for improved data efficiency and inference speed.
[ { "created": "Thu, 23 Dec 2021 14:37:52 GMT", "version": "v1" }, { "created": "Fri, 7 Oct 2022 08:34:42 GMT", "version": "v2" } ]
2022-10-10
[ [ "Lin", "Yancong", "" ], [ "Pintea", "Silvia-Laura", "" ], [ "van Gemert", "Jan", "" ] ]
Many objects are naturally symmetric, and this symmetry can be exploited to infer unseen 3D properties from a single 2D image. Recently, NeRD is proposed for accurate 3D mirror plane estimation from a single image. Despite the unprecedented accuracy, it relies on large annotated datasets for training and suffers from slow inference. Here we aim to improve its data and compute efficiency. We do away with the computationally expensive 4D feature volumes and instead explicitly compute the feature correlation of the pixel correspondences across depth, thus creating a compact 3D volume. We also design multi-stage spherical convolutions to identify the optimal mirror plane on the hemisphere, whose inductive bias offers gains in data-efficiency. Experiments on both synthetic and real-world datasets show the benefit of our proposed changes for improved data efficiency and inference speed.
0802.3992
Effrosyni Kokiopoulou
Effrosyni Kokiopoulou and Pascal Frossard
Polynomial Filtering for Fast Convergence in Distributed Consensus
submitted to IEEE Transactions on Signal Processing
null
10.1109/TSP.2008.2006147
LTS-2008-005
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past few years, the problem of distributed consensus has received a lot of attention, particularly in the framework of ad hoc sensor networks. Most methods proposed in the literature address the consensus averaging problem by distributed linear iterative algorithms, with asymptotic convergence of the consensus solution. The convergence rate of such distributed algorithms typically depends on the network topology and the weights given to the edges between neighboring sensors, as described by the network matrix. In this paper, we propose to accelerate the convergence rate for given network matrices by the use of polynomial filtering algorithms. The main idea of the proposed methodology is to apply a polynomial filter on the network matrix that will shape its spectrum in order to increase the convergence rate. Such an algorithm is equivalent to periodic updates in each of the sensors by aggregating a few of its previous estimates. We formulate the computation of the coefficients of the optimal polynomial as a semi-definite program that can be efficiently and globally solved for both static and dynamic network topologies. We finally provide simulation results that demonstrate the effectiveness of the proposed solutions in accelerating the convergence of distributed consensus averaging problems.
[ { "created": "Wed, 27 Feb 2008 11:35:02 GMT", "version": "v1" } ]
2009-11-13
[ [ "Kokiopoulou", "Effrosyni", "" ], [ "Frossard", "Pascal", "" ] ]
In the past few years, the problem of distributed consensus has received a lot of attention, particularly in the framework of ad hoc sensor networks. Most methods proposed in the literature address the consensus averaging problem by distributed linear iterative algorithms, with asymptotic convergence of the consensus solution. The convergence rate of such distributed algorithms typically depends on the network topology and the weights given to the edges between neighboring sensors, as described by the network matrix. In this paper, we propose to accelerate the convergence rate for given network matrices by the use of polynomial filtering algorithms. The main idea of the proposed methodology is to apply a polynomial filter on the network matrix that will shape its spectrum in order to increase the convergence rate. Such an algorithm is equivalent to periodic updates in each of the sensors by aggregating a few of its previous estimates. We formulate the computation of the coefficients of the optimal polynomial as a semi-definite program that can be efficiently and globally solved for both static and dynamic network topologies. We finally provide simulation results that demonstrate the effectiveness of the proposed solutions in accelerating the convergence of distributed consensus averaging problems.
1604.00216
George Barmpalias Dr
George Barmpalias and Andrew Lewis-Pye
Differences of halting probabilities
null
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The halting probabilities of universal prefix-free machines are universal for the class of reals with computably enumerable left cut (also known as left-c.e. reals), and coincide with the Martin-Loef random elements of this class. We study the differences of Martin-Loef random left-c.e. reals and show that for each pair of such reals a, b there exists a unique number r > 0 such that qa - b is a 1-random left-c.e. real for each positive rational q > r and a 1-random right-c.e. real for each positive rational q < r. Based on this result we develop a theory of differences of halting probabilities, which answers a number of questions about Martin-Loef random left-c.e. reals, including one of the few remaining open problems from the list of open questions in algorithmic randomness by Miller and Nies in 2006. The halting probability of a prefix-free machine M restricted to a set X is the probability that the machine halts and outputs an element of X. These numbers Omega_M(X) were studied by a number of authors in the last decade as a way to obtain concrete highly random numbers. When X is the complement of a computably enumerable set, the number Omega_M(X) is the difference of two halting probabilities. Becher, Figueira, Grigorieff, and Miller asked whether Omega_U(X) is Martin-Loef random when U is universal and X is the complement of a computably enumerable set. This problem has resisted numerous attempts in the last decade. We apply our theory of differences of halting probabilities to give a positive answer, and show that Omega_U(X) is a Martin-Loef random left-c.e. real whenever X is nonempty and the complement of a computably enumerable set.
[ { "created": "Fri, 1 Apr 2016 12:19:31 GMT", "version": "v1" }, { "created": "Fri, 19 May 2017 03:45:18 GMT", "version": "v2" } ]
2017-05-22
[ [ "Barmpalias", "George", "" ], [ "Lewis-Pye", "Andrew", "" ] ]
The halting probabilities of universal prefix-free machines are universal for the class of reals with computably enumerable left cut (also known as left-c.e. reals), and coincide with the Martin-Loef random elements of this class. We study the differences of Martin-Loef random left-c.e. reals and show that for each pair of such reals a, b there exists a unique number r > 0 such that qa - b is a 1-random left-c.e. real for each positive rational q > r and a 1-random right-c.e. real for each positive rational q < r. Based on this result we develop a theory of differences of halting probabilities, which answers a number of questions about Martin-Loef random left-c.e. reals, including one of the few remaining open problems from the list of open questions in algorithmic randomness by Miller and Nies in 2006. The halting probability of a prefix-free machine M restricted to a set X is the probability that the machine halts and outputs an element of X. These numbers Omega_M(X) were studied by a number of authors in the last decade as a way to obtain concrete highly random numbers. When X is the complement of a computably enumerable set, the number Omega_M(X) is the difference of two halting probabilities. Becher, Figueira, Grigorieff, and Miller asked whether Omega_U(X) is Martin-Loef random when U is universal and X is the complement of a computably enumerable set. This problem has resisted numerous attempts in the last decade. We apply our theory of differences of halting probabilities to give a positive answer, and show that Omega_U(X) is a Martin-Loef random left-c.e. real whenever X is nonempty and the complement of a computably enumerable set.
2208.02998
Chengliang Liu
Chengliang Liu, Zhihao Wu, Jie Wen, Chao Huang, Yong Xu
Localized Sparse Incomplete Multi-view Clustering
Published in IEEE Transactions on Multimedia (TMM). The code is available at Github https://github.com/justsmart/LSIMVC
null
10.1109/TMM.2022.3194332
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incomplete multi-view clustering, which aims to solve the clustering problem on the incomplete multi-view data with partial view missing, has received more and more attention in recent years. Although numerous methods have been developed, most of the methods either cannot flexibly handle the incomplete multi-view data with arbitrary missing views or do not consider the negative factor of information imbalance among views. Moreover, some methods do not fully explore the local structure of all incomplete views. To tackle these problems, this paper proposes a simple but effective method, named localized sparse incomplete multi-view clustering (LSIMVC). Different from the existing methods, LSIMVC intends to learn a sparse and structured consensus latent representation from the incomplete multi-view data by optimizing a sparse regularized and novel graph embedded multi-view matrix factorization model. Specifically, in such a novel model based on the matrix factorization, a l1 norm based sparse constraint is introduced to obtain the sparse low-dimensional individual representations and the sparse consensus representation. Moreover, a novel local graph embedding term is introduced to learn the structured consensus representation. Different from the existing works, our local graph embedding term aggregates the graph embedding task and consensus representation learning task into a concise term. Furthermore, to reduce the imbalance factor of incomplete multi-view learning, an adaptive weighted learning scheme is introduced to LSIMVC. Finally, an efficient optimization strategy is given to solve the optimization problem of our proposed model. Comprehensive experimental results performed on six incomplete multi-view databases verify that the performance of our LSIMVC is superior to the state-of-the-art IMC approaches. The code is available in https://github.com/justsmart/LSIMVC.
[ { "created": "Fri, 5 Aug 2022 05:48:28 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2022 16:03:19 GMT", "version": "v2" }, { "created": "Mon, 13 Mar 2023 13:25:27 GMT", "version": "v3" } ]
2023-03-14
[ [ "Liu", "Chengliang", "" ], [ "Wu", "Zhihao", "" ], [ "Wen", "Jie", "" ], [ "Huang", "Chao", "" ], [ "Xu", "Yong", "" ] ]
Incomplete multi-view clustering, which aims to solve the clustering problem on the incomplete multi-view data with partial view missing, has received more and more attention in recent years. Although numerous methods have been developed, most of the methods either cannot flexibly handle the incomplete multi-view data with arbitrary missing views or do not consider the negative factor of information imbalance among views. Moreover, some methods do not fully explore the local structure of all incomplete views. To tackle these problems, this paper proposes a simple but effective method, named localized sparse incomplete multi-view clustering (LSIMVC). Different from the existing methods, LSIMVC intends to learn a sparse and structured consensus latent representation from the incomplete multi-view data by optimizing a sparse regularized and novel graph embedded multi-view matrix factorization model. Specifically, in such a novel model based on the matrix factorization, a l1 norm based sparse constraint is introduced to obtain the sparse low-dimensional individual representations and the sparse consensus representation. Moreover, a novel local graph embedding term is introduced to learn the structured consensus representation. Different from the existing works, our local graph embedding term aggregates the graph embedding task and consensus representation learning task into a concise term. Furthermore, to reduce the imbalance factor of incomplete multi-view learning, an adaptive weighted learning scheme is introduced to LSIMVC. Finally, an efficient optimization strategy is given to solve the optimization problem of our proposed model. Comprehensive experimental results performed on six incomplete multi-view databases verify that the performance of our LSIMVC is superior to the state-of-the-art IMC approaches. The code is available in https://github.com/justsmart/LSIMVC.
2206.02993
Yuqing Kong
Yuqing Kong and Grant Schoenebeck
False Consensus, Information Theory, and Prediction Markets
To appear in ITCS 2023
null
null
null
cs.GT econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a setting where Bayesian agents with a common prior have private information related to an event's outcome and sequentially make public announcements relating to their information. Our main result shows that when agents' private information is independent conditioning on the event's outcome whenever agents have similar beliefs about the outcome, their information is aggregated. That is, there is no false consensus. Our main result has a short proof based on a natural information theoretic framework. A key ingredient of the framework is the equivalence between the sign of the ``interaction information'' and a super/sub-additive property of the value of people's information. This provides an intuitive interpretation and an interesting application of the interaction information, which measures the amount of information shared by three random variables. We illustrate the power of this information theoretic framework by reproving two additional results within it: 1) that agents quickly agree when announcing (summaries of) beliefs in round robin fashion [Aaronson 2005]; and 2) results from [Chen et al 2010] on when prediction market agents should release information to maximize their payment. We also interpret the information theoretic framework and the above results in prediction markets by proving that the expected reward of revealing information is the conditional mutual information of the information revealed.
[ { "created": "Tue, 7 Jun 2022 03:46:11 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2022 12:17:52 GMT", "version": "v2" } ]
2022-11-28
[ [ "Kong", "Yuqing", "" ], [ "Schoenebeck", "Grant", "" ] ]
We study a setting where Bayesian agents with a common prior have private information related to an event's outcome and sequentially make public announcements relating to their information. Our main result shows that when agents' private information is independent conditioning on the event's outcome whenever agents have similar beliefs about the outcome, their information is aggregated. That is, there is no false consensus. Our main result has a short proof based on a natural information theoretic framework. A key ingredient of the framework is the equivalence between the sign of the ``interaction information'' and a super/sub-additive property of the value of people's information. This provides an intuitive interpretation and an interesting application of the interaction information, which measures the amount of information shared by three random variables. We illustrate the power of this information theoretic framework by reproving two additional results within it: 1) that agents quickly agree when announcing (summaries of) beliefs in round robin fashion [Aaronson 2005]; and 2) results from [Chen et al 2010] on when prediction market agents should release information to maximize their payment. We also interpret the information theoretic framework and the above results in prediction markets by proving that the expected reward of revealing information is the conditional mutual information of the information revealed.
0710.4780
Jesus M. Almendros-Jimenez Dr.
J. M. Almendros-Jim\'enez and A. Becerra-Ter\'on and F. J. Enciso-Ba\~nos
Querying XML Documents in Logic Programming
null
null
null
null
cs.PL cs.DB
null
Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML. Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. XPath language is the result of an effort to provide address parts of an XML document. In support of this primary purpose, it becomes in a query language against an XML document. In this paper we present a proposal for the implementation of the XPath language in logic programming. With this aim we will describe the representation of XML documents by means of a logic program. Rules and facts can be used for representing the document schema and the XML document itself. In particular, we will present how to index XML documents in logic programs: rules are supposed to be stored in main memory, however facts are stored in secondary memory by using two kind of indexes: one for each XML tag, and other for each group of terminal items. In addition, we will study how to query by means of the XPath language against a logic program representing an XML document. It evolves the specialization of the logic program with regard to the XPath expression. Finally, we will also explain how to combine the indexing and the top-down evaluation of the logic program. To appear in Theory and Practice of Logic Programming (TPLP)"
[ { "created": "Thu, 25 Oct 2007 10:45:08 GMT", "version": "v1" } ]
2007-10-26
[ [ "Almendros-Jiménez", "J. M.", "" ], [ "Becerra-Terón", "A.", "" ], [ "Enciso-Baños", "F. J.", "" ] ]
Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML. Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. XPath language is the result of an effort to provide address parts of an XML document. In support of this primary purpose, it becomes in a query language against an XML document. In this paper we present a proposal for the implementation of the XPath language in logic programming. With this aim we will describe the representation of XML documents by means of a logic program. Rules and facts can be used for representing the document schema and the XML document itself. In particular, we will present how to index XML documents in logic programs: rules are supposed to be stored in main memory, however facts are stored in secondary memory by using two kind of indexes: one for each XML tag, and other for each group of terminal items. In addition, we will study how to query by means of the XPath language against a logic program representing an XML document. It evolves the specialization of the logic program with regard to the XPath expression. Finally, we will also explain how to combine the indexing and the top-down evaluation of the logic program. To appear in Theory and Practice of Logic Programming (TPLP)"
1805.08717
Minh Vo
Minh Vo, Ersin Yumer, Kalyan Sunkavalli, Sunil Hadap, Yaser Sheikh, and Srinivasa Narasimhan
Self-supervised Multi-view Person Association and Its Applications
Accepted to IEEE TPAMI
null
10.1109/TPAMI.2020.2974726
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable markerless motion tracking of people participating in a complex group activity from multiple moving cameras is challenging due to frequent occlusions, strong viewpoint and appearance variations, and asynchronous video streams. To solve this problem, reliable association of the same person across distant viewpoints and temporal instances is essential. We present a self-supervised framework to adapt a generic person appearance descriptor to the unlabeled videos by exploiting motion tracking, mutual exclusion constraints, and multi-view geometry. The adapted discriminative descriptor is used in a tracking-by-clustering formulation. We validate the effectiveness of our descriptor learning on WILDTRACK [14] and three new complex social scenes captured by multiple cameras with up to 60 people "in the wild". We report significant improvement in association accuracy (up to 18%) and stable and coherent 3D human skeleton tracking (5 to 10 times) over the baseline. Using the reconstructed 3D skeletons, we cut the input videos into a multi-angle video where the image of a specified person is shown from the best visible front-facing camera. Our algorithm detects inter-human occlusion to determine the camera switching moment while still maintaining the flow of the action well.
[ { "created": "Tue, 22 May 2018 16:25:26 GMT", "version": "v1" }, { "created": "Thu, 15 Nov 2018 21:39:20 GMT", "version": "v2" }, { "created": "Sat, 18 Apr 2020 06:16:40 GMT", "version": "v3" } ]
2020-04-21
[ [ "Vo", "Minh", "" ], [ "Yumer", "Ersin", "" ], [ "Sunkavalli", "Kalyan", "" ], [ "Hadap", "Sunil", "" ], [ "Sheikh", "Yaser", "" ], [ "Narasimhan", "Srinivasa", "" ] ]
Reliable markerless motion tracking of people participating in a complex group activity from multiple moving cameras is challenging due to frequent occlusions, strong viewpoint and appearance variations, and asynchronous video streams. To solve this problem, reliable association of the same person across distant viewpoints and temporal instances is essential. We present a self-supervised framework to adapt a generic person appearance descriptor to the unlabeled videos by exploiting motion tracking, mutual exclusion constraints, and multi-view geometry. The adapted discriminative descriptor is used in a tracking-by-clustering formulation. We validate the effectiveness of our descriptor learning on WILDTRACK [14] and three new complex social scenes captured by multiple cameras with up to 60 people "in the wild". We report significant improvement in association accuracy (up to 18%) and stable and coherent 3D human skeleton tracking (5 to 10 times) over the baseline. Using the reconstructed 3D skeletons, we cut the input videos into a multi-angle video where the image of a specified person is shown from the best visible front-facing camera. Our algorithm detects inter-human occlusion to determine the camera switching moment while still maintaining the flow of the action well.
2010.06959
Shimrit Shtern
Eyal Gur, Shoham Sabach, Shimrit Shtern
Alternating Minimization Based First-Order Method for the Wireless Sensor Network Localization Problem
null
null
10.1109/TSP.2020.3031695
null
cs.NI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an algorithm for the Wireless Sensor Network localization problem, which is based on the well-known algorithmic framework of Alternating Minimization. We start with a non-smooth and non-convex minimization, and transform it into an equivalent smooth and non-convex problem, which stands at the heart of our study. This paves the way to a new method which is globally convergent: not only does the sequence of objective function values converge, but the sequence of the location estimates also converges to a unique location that is a critical point of the corresponding (original) objective function. The proposed algorithm has a range of fully distributed to fully centralized implementations, which all have the property of global convergence. The algorithm is tested over several network configurations, and it is shown to produce more accurate solutions within a shorter time relative to existing methods.
[ { "created": "Wed, 14 Oct 2020 11:09:47 GMT", "version": "v1" } ]
2020-12-30
[ [ "Gur", "Eyal", "" ], [ "Sabach", "Shoham", "" ], [ "Shtern", "Shimrit", "" ] ]
We propose an algorithm for the Wireless Sensor Network localization problem, which is based on the well-known algorithmic framework of Alternating Minimization. We start with a non-smooth and non-convex minimization, and transform it into an equivalent smooth and non-convex problem, which stands at the heart of our study. This paves the way to a new method which is globally convergent: not only does the sequence of objective function values converge, but the sequence of the location estimates also converges to a unique location that is a critical point of the corresponding (original) objective function. The proposed algorithm has a range of fully distributed to fully centralized implementations, which all have the property of global convergence. The algorithm is tested over several network configurations, and it is shown to produce more accurate solutions within a shorter time relative to existing methods.
1704.03152
Xitong Yang
Xitong Yang, Palghat Ramesh, Radha Chitta, Sriganesh Madhvanath, Edgar A. Bernal and Jiebo Luo
Deep Multimodal Representation Learning from Temporal Data
To appear in CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video- and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.
[ { "created": "Tue, 11 Apr 2017 05:47:42 GMT", "version": "v1" } ]
2017-04-12
[ [ "Yang", "Xitong", "" ], [ "Ramesh", "Palghat", "" ], [ "Chitta", "Radha", "" ], [ "Madhvanath", "Sriganesh", "" ], [ "Bernal", "Edgar A.", "" ], [ "Luo", "Jiebo", "" ] ]
In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video- and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.
1907.11238
Szymon Drgas
Tomasz Grzywalski, Riccardo Belluzzo, Szymon Drgas, Agnieszka Cwalinska, Honorata Hafke-Dys
Interactive Lungs Auscultation with Reinforcement Learning Agent
null
null
null
null
cs.SD cs.AI cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To perform a precise auscultation for the purposes of examination of respiratory system normally requires the presence of an experienced doctor. With most recent advances in machine learning and artificial intelligence, automatic detection of pathological breath phenomena in sounds recorded with stethoscope becomes a reality. But to perform a full auscultation in home environment by layman is another matter, especially if the patient is a child. In this paper we propose a unique application of Reinforcement Learning for training an agent that interactively guides the end user throughout the auscultation procedure. We show that \textit{intelligent} selection of auscultation points by the agent reduces time of the examination fourfold without significant decrease in diagnosis accuracy compared to exhaustive auscultation.
[ { "created": "Thu, 25 Jul 2019 11:04:08 GMT", "version": "v1" } ]
2019-07-29
[ [ "Grzywalski", "Tomasz", "" ], [ "Belluzzo", "Riccardo", "" ], [ "Drgas", "Szymon", "" ], [ "Cwalinska", "Agnieszka", "" ], [ "Hafke-Dys", "Honorata", "" ] ]
To perform a precise auscultation for the purposes of examination of respiratory system normally requires the presence of an experienced doctor. With most recent advances in machine learning and artificial intelligence, automatic detection of pathological breath phenomena in sounds recorded with stethoscope becomes a reality. But to perform a full auscultation in home environment by layman is another matter, especially if the patient is a child. In this paper we propose a unique application of Reinforcement Learning for training an agent that interactively guides the end user throughout the auscultation procedure. We show that \textit{intelligent} selection of auscultation points by the agent reduces time of the examination fourfold without significant decrease in diagnosis accuracy compared to exhaustive auscultation.
1711.01703
Oliver Obst
Olivia Michael and Oliver Obst and Falk Schmidsberger and Frieder Stolzenburg
RoboCupSimData: A RoboCup soccer research dataset
6 pages; https://bitbucket.org/oliverobst/robocupsimdata
In Dirk Holz, Katie Genter, Maarouf Saad, and Oskar von Stryk, editors, RoboCup 2018: Robot Soccer World Cup XXII. RoboCup International Symposium, LNAI 11374, pages 230-237, Montr\'eal, Canada, 2019. Springer Nature Switzerland
10.1007/978-3-030-27544-0_19
null
cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RoboCup is an international scientific robot competition in which teams of multiple robots compete against each other. Its different leagues provide many sources of robotics data, that can be used for further analysis and application of machine learning. This paper describes a large dataset from games of some of the top teams (from 2016 and 2017) in RoboCup Soccer Simulation League (2D), where teams of 11 robots (agents) compete against each other. Overall, we used 10 different teams to play each other, resulting in 45 unique pairings. For each pairing, we ran 25 matches (of 10mins), leading to 1125 matches or more than 180 hours of game play. The generated CSV files are 17GB of data (zipped), or 229GB (unzipped). The dataset is unique in the sense that it contains both the ground truth data (global, complete, noise-free information of all objects on the field), as well as the noisy, local and incomplete percepts of each robot. These data are made available as CSV files, as well as in the original soccer simulator formats.
[ { "created": "Mon, 6 Nov 2017 03:09:38 GMT", "version": "v1" } ]
2020-02-12
[ [ "Michael", "Olivia", "" ], [ "Obst", "Oliver", "" ], [ "Schmidsberger", "Falk", "" ], [ "Stolzenburg", "Frieder", "" ] ]
RoboCup is an international scientific robot competition in which teams of multiple robots compete against each other. Its different leagues provide many sources of robotics data, that can be used for further analysis and application of machine learning. This paper describes a large dataset from games of some of the top teams (from 2016 and 2017) in RoboCup Soccer Simulation League (2D), where teams of 11 robots (agents) compete against each other. Overall, we used 10 different teams to play each other, resulting in 45 unique pairings. For each pairing, we ran 25 matches (of 10mins), leading to 1125 matches or more than 180 hours of game play. The generated CSV files are 17GB of data (zipped), or 229GB (unzipped). The dataset is unique in the sense that it contains both the ground truth data (global, complete, noise-free information of all objects on the field), as well as the noisy, local and incomplete percepts of each robot. These data are made available as CSV files, as well as in the original soccer simulator formats.
2007.07703
Evan Piermont
Evan Piermont and Peio Zuazo-Garin
Failures of Contingent Thinking
null
null
null
null
cs.AI econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we provide a theoretical framework to analyze an agent who misinterprets or misperceives the true decision problem she faces. We show that a wide range of behavior observed in experimental settings manifest as failures to perceive implications, in other words, to properly account for the logical relationships between various payoff relevant contingencies. We present a behavioral definition of perceived implication, thereby providing an elicitation technique, and show that an agent's account of implication identifies a subjective state-space that underlies her behavior. By analyzing this state-space, we characterize distinct benchmarks of logical sophistication that drive empirical phenomena. We disentangle static and dynamic rationality. Thus, our framework delivers both a methodology for assessing an agent's level of contingent thinking and a strategy for identifying her beliefs in the absence full rationality.
[ { "created": "Wed, 15 Jul 2020 14:21:16 GMT", "version": "v1" }, { "created": "Wed, 7 Dec 2022 11:50:10 GMT", "version": "v2" }, { "created": "Mon, 3 Jul 2023 12:15:09 GMT", "version": "v3" } ]
2023-07-04
[ [ "Piermont", "Evan", "" ], [ "Zuazo-Garin", "Peio", "" ] ]
In this paper, we provide a theoretical framework to analyze an agent who misinterprets or misperceives the true decision problem she faces. We show that a wide range of behavior observed in experimental settings manifest as failures to perceive implications, in other words, to properly account for the logical relationships between various payoff relevant contingencies. We present a behavioral definition of perceived implication, thereby providing an elicitation technique, and show that an agent's account of implication identifies a subjective state-space that underlies her behavior. By analyzing this state-space, we characterize distinct benchmarks of logical sophistication that drive empirical phenomena. We disentangle static and dynamic rationality. Thus, our framework delivers both a methodology for assessing an agent's level of contingent thinking and a strategy for identifying her beliefs in the absence full rationality.
2308.02933
Yifang Wang
Yifang Wang, Yifan Qian, Xiaoyu Qi, Nan Cao, Dashun Wang
InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology
null
null
null
null
cs.HC cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation.
[ { "created": "Sat, 5 Aug 2023 18:23:06 GMT", "version": "v1" }, { "created": "Tue, 8 Aug 2023 18:00:44 GMT", "version": "v2" } ]
2023-08-10
[ [ "Wang", "Yifang", "" ], [ "Qian", "Yifan", "" ], [ "Qi", "Xiaoyu", "" ], [ "Cao", "Nan", "" ], [ "Wang", "Dashun", "" ] ]
Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation.
2106.12144
Mikhail Galkin
Mikhail Galkin, Etienne Denis, Jiapeng Wu, William L. Hamilton
NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs
Accepted to ICLR 2022
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional representation learning algorithms for knowledge graphs (KG) map each entity to a unique embedding vector. Such a shallow lookup results in a linear growth of memory consumption for storing the embedding matrix and incurs high computational costs when working with real-world KGs. Drawing parallels with subword tokenization commonly used in NLP, we explore the landscape of more parameter-efficient node embedding strategies with possibly sublinear memory requirements. To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary. In NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types. Given such a fixed-size vocabulary, it is possible to bootstrap an encoding and embedding for any entity, including those unseen during training. Experiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks while retaining less than 10% of explicit nodes in a graph as anchors and often having 10x fewer parameters. To this end, we show that a NodePiece-enabled model outperforms existing shallow models on a large OGB WikiKG 2 graph having 70x fewer parameters.
[ { "created": "Wed, 23 Jun 2021 03:51:03 GMT", "version": "v1" }, { "created": "Tue, 1 Feb 2022 21:17:08 GMT", "version": "v2" } ]
2022-02-03
[ [ "Galkin", "Mikhail", "" ], [ "Denis", "Etienne", "" ], [ "Wu", "Jiapeng", "" ], [ "Hamilton", "William L.", "" ] ]
Conventional representation learning algorithms for knowledge graphs (KG) map each entity to a unique embedding vector. Such a shallow lookup results in a linear growth of memory consumption for storing the embedding matrix and incurs high computational costs when working with real-world KGs. Drawing parallels with subword tokenization commonly used in NLP, we explore the landscape of more parameter-efficient node embedding strategies with possibly sublinear memory requirements. To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary. In NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types. Given such a fixed-size vocabulary, it is possible to bootstrap an encoding and embedding for any entity, including those unseen during training. Experiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks while retaining less than 10% of explicit nodes in a graph as anchors and often having 10x fewer parameters. To this end, we show that a NodePiece-enabled model outperforms existing shallow models on a large OGB WikiKG 2 graph having 70x fewer parameters.
1904.12787
Yujia Li
Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, Pushmeet Kohli
Graph Matching Networks for Learning the Similarity of Graph Structured Objects
Accepted as a conference paper at ICML 2019
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.
[ { "created": "Mon, 29 Apr 2019 15:59:04 GMT", "version": "v1" }, { "created": "Sun, 12 May 2019 22:15:33 GMT", "version": "v2" } ]
2019-05-14
[ [ "Li", "Yujia", "" ], [ "Gu", "Chenjie", "" ], [ "Dullien", "Thomas", "" ], [ "Vinyals", "Oriol", "" ], [ "Kohli", "Pushmeet", "" ] ]
This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.
1406.5106
David Van Horn
J. Ian Johnson, Ilya Sergey, Christopher Earl, Matthew Might, David Van Horn
Pushdown flow analysis with abstract garbage collection
null
Journal of Functional Programming, Volume 24, Special Issue 2-3, May 2014, pp 218-283
10.1017/S0956796814000100 10.1017/S0956796814000100 10.1017/S0956796814000100 10.1017/S0956796814000100 10.1017/S0956796814000100
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection push the boundaries of what we can learn about programs statically. This work illuminates and poses solutions to theoretical and practical challenges that stand in the way of combining the power of these techniques. Pushdown flow analysis grants unbounded yet computable polyvariance to the analysis of return-flow in higher-order programs. Abstract garbage collection grants unbounded polyvariance to abstract addresses which become unreachable between invocations of the abstract contexts in which they were created. Pushdown analysis solves the problem of precisely analyzing recursion in higher-order languages; abstract garbage collection is essential in solving the "stickiness" problem. Alone, our benchmarks demonstrate that each method can reduce analysis times and boost precision by orders of magnitude. We combine these methods. The challenge in marrying these techniques is not subtle: computing the reachable control states of a pushdown system relies on limiting access during transition to the top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to compute a root set, just as concrete collection does. Conditional pushdown systems were developed for just such a conundrum, but existing methods are ill-suited for the dynamic nature of garbage collection. We show fully precise and approximate solutions to the feasible paths problem for pushdown garbage-collecting control-flow analysis. Experiments reveal synergistic interplay between garbage collection and pushdown techniques, and the fusion demonstrates "better-than-both-worlds" precision.
[ { "created": "Thu, 19 Jun 2014 16:51:12 GMT", "version": "v1" } ]
2014-06-20
[ [ "Johnson", "J. Ian", "" ], [ "Sergey", "Ilya", "" ], [ "Earl", "Christopher", "" ], [ "Might", "Matthew", "" ], [ "Van Horn", "David", "" ] ]
In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection push the boundaries of what we can learn about programs statically. This work illuminates and poses solutions to theoretical and practical challenges that stand in the way of combining the power of these techniques. Pushdown flow analysis grants unbounded yet computable polyvariance to the analysis of return-flow in higher-order programs. Abstract garbage collection grants unbounded polyvariance to abstract addresses which become unreachable between invocations of the abstract contexts in which they were created. Pushdown analysis solves the problem of precisely analyzing recursion in higher-order languages; abstract garbage collection is essential in solving the "stickiness" problem. Alone, our benchmarks demonstrate that each method can reduce analysis times and boost precision by orders of magnitude. We combine these methods. The challenge in marrying these techniques is not subtle: computing the reachable control states of a pushdown system relies on limiting access during transition to the top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to compute a root set, just as concrete collection does. Conditional pushdown systems were developed for just such a conundrum, but existing methods are ill-suited for the dynamic nature of garbage collection. We show fully precise and approximate solutions to the feasible paths problem for pushdown garbage-collecting control-flow analysis. Experiments reveal synergistic interplay between garbage collection and pushdown techniques, and the fusion demonstrates "better-than-both-worlds" precision.
2206.11589
Xiong Zhou
Xiong Zhou, Xianming Liu, Deming Zhai, Junjun Jiang, Xin Gao, Xiangyang Ji
Learning Towards the Largest Margins
ICLR 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins in well-established losses in order to enforce extra intra-class compactness and inter-class separability, which, however, were developed through heuristic means, as opposed to rigorous mathematical principles. In this work, we attempt to address this limitation by formulating the principled optimization objective as learning towards the largest margins. Specifically, we firstly define the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for the existing margin-based losses. Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it also provides new insights that can guide the design of new tools, including sample margin regularization and largest margin softmax loss for the class-balanced case, and zero-centroid regularization for the class-imbalanced case. Experimental results demonstrate the effectiveness of our strategy on a variety of tasks, including visual classification, imbalanced classification, person re-identification, and face verification.
[ { "created": "Thu, 23 Jun 2022 10:03:03 GMT", "version": "v1" } ]
2022-06-24
[ [ "Zhou", "Xiong", "" ], [ "Liu", "Xianming", "" ], [ "Zhai", "Deming", "" ], [ "Jiang", "Junjun", "" ], [ "Gao", "Xin", "" ], [ "Ji", "Xiangyang", "" ] ]
One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins in well-established losses in order to enforce extra intra-class compactness and inter-class separability, which, however, were developed through heuristic means, as opposed to rigorous mathematical principles. In this work, we attempt to address this limitation by formulating the principled optimization objective as learning towards the largest margins. Specifically, we firstly define the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for the existing margin-based losses. Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it also provides new insights that can guide the design of new tools, including sample margin regularization and largest margin softmax loss for the class-balanced case, and zero-centroid regularization for the class-imbalanced case. Experimental results demonstrate the effectiveness of our strategy on a variety of tasks, including visual classification, imbalanced classification, person re-identification, and face verification.
2112.14921
Ryoma Sato
Ryoma Sato
Retrieving Black-box Optimal Images from External Databases
WSDM 2022
null
null
null
cs.IR cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Suppose we have a black-box function (e.g., deep neural network) that takes an image as input and outputs a value that indicates preference. How can we retrieve optimal images with respect to this function from an external database on the Internet? Standard retrieval problems in the literature (e.g., item recommendations) assume that an algorithm has full access to the set of items. In other words, such algorithms are designed for service providers. In this paper, we consider the retrieval problem under different assumptions. Specifically, we consider how users with limited access to an image database can retrieve images using their own black-box functions. This formulation enables a flexible and finer-grained image search defined by each user. We assume the user can access the database through a search query with tight API limits. Therefore, a user needs to efficiently retrieve optimal images in terms of the number of queries. We propose an efficient retrieval algorithm Tiara for this problem. In the experiments, we confirm that our proposed method performs better than several baselines under various settings.
[ { "created": "Thu, 30 Dec 2021 04:22:15 GMT", "version": "v1" } ]
2022-01-03
[ [ "Sato", "Ryoma", "" ] ]
Suppose we have a black-box function (e.g., deep neural network) that takes an image as input and outputs a value that indicates preference. How can we retrieve optimal images with respect to this function from an external database on the Internet? Standard retrieval problems in the literature (e.g., item recommendations) assume that an algorithm has full access to the set of items. In other words, such algorithms are designed for service providers. In this paper, we consider the retrieval problem under different assumptions. Specifically, we consider how users with limited access to an image database can retrieve images using their own black-box functions. This formulation enables a flexible and finer-grained image search defined by each user. We assume the user can access the database through a search query with tight API limits. Therefore, a user needs to efficiently retrieve optimal images in terms of the number of queries. We propose an efficient retrieval algorithm Tiara for this problem. In the experiments, we confirm that our proposed method performs better than several baselines under various settings.
2209.01181
Felipe Xavier Costa
Felipe Xavier Costa, Rion Brattig Correia, Luis M. Rocha
The distance backbone of directed networks
Accepted at the 11th International Conference on Complex Networks and their Applications
null
10.1007/978-3-031-21131-7_11
null
cs.SI cs.DS physics.soc-ph
http://creativecommons.org/licenses/by-sa/4.0/
In weighted graphs the shortest path between two nodes is often reached through an indirect path, out of all possible connections, leading to structural redundancies which play key roles in the dynamics and evolution of complex networks. We have previously developed a parameter-free, algebraically-principled methodology to uncover such redundancy and reveal the distance backbone of weighted graphs, which has been shown to be important in transmission dynamics, inference of important paths, and quantifying the robustness of networks. However, the method was developed for undirected graphs. Here we expand this methodology to weighted directed graphs and study the redundancy and robustness found in nine networks ranging from social, biomedical, and technical systems. We found that similarly to undirected graphs, directed graphs in general also contain a large amount of redundancy, as measured by the size of their (directed) distance backbone. Our methodology adds an additional tool to the principled sparsification of complex networks and the measure of their robustness.
[ { "created": "Fri, 2 Sep 2022 17:23:37 GMT", "version": "v1" } ]
2023-06-14
[ [ "Costa", "Felipe Xavier", "" ], [ "Correia", "Rion Brattig", "" ], [ "Rocha", "Luis M.", "" ] ]
In weighted graphs the shortest path between two nodes is often reached through an indirect path, out of all possible connections, leading to structural redundancies which play key roles in the dynamics and evolution of complex networks. We have previously developed a parameter-free, algebraically-principled methodology to uncover such redundancy and reveal the distance backbone of weighted graphs, which has been shown to be important in transmission dynamics, inference of important paths, and quantifying the robustness of networks. However, the method was developed for undirected graphs. Here we expand this methodology to weighted directed graphs and study the redundancy and robustness found in nine networks ranging from social, biomedical, and technical systems. We found that similarly to undirected graphs, directed graphs in general also contain a large amount of redundancy, as measured by the size of their (directed) distance backbone. Our methodology adds an additional tool to the principled sparsification of complex networks and the measure of their robustness.
2207.00345
Felipe Meneguzzi
Maur\'icio Cec\'ilio Magnaguagno and Felipe Meneguzzi and Lavindra de Silva
HyperTensioN and Total-order Forward Decomposition optimizations
Preprint version of journal submission
null
null
null
cs.AI cs.MA cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical Task Networks (HTN) planners generate plans using a decomposition process with extra domain knowledge to guide search towards a planning task. While domain experts develop HTN descriptions, they may repeatedly describe the same preconditions, or methods that are rarely used or possible to be decomposed. By leveraging a three-stage compiler design we can easily support more language descriptions and preprocessing optimizations that when chained can greatly improve runtime efficiency in such domains. In this paper we evaluate such optimizations with the HyperTensioN HTN planner, used in the HTN IPC 2020.
[ { "created": "Fri, 1 Jul 2022 11:23:52 GMT", "version": "v1" } ]
2022-07-04
[ [ "Magnaguagno", "Maurício Cecílio", "" ], [ "Meneguzzi", "Felipe", "" ], [ "de Silva", "Lavindra", "" ] ]
Hierarchical Task Networks (HTN) planners generate plans using a decomposition process with extra domain knowledge to guide search towards a planning task. While domain experts develop HTN descriptions, they may repeatedly describe the same preconditions, or methods that are rarely used or possible to be decomposed. By leveraging a three-stage compiler design we can easily support more language descriptions and preprocessing optimizations that when chained can greatly improve runtime efficiency in such domains. In this paper we evaluate such optimizations with the HyperTensioN HTN planner, used in the HTN IPC 2020.
2111.05070
Piyush Srivastava
Vibhor Porwal, Piyush Srivastava, Gaurav Sinha
Universal Lower Bound for Learning Causal DAGs with Atomic Interventions
Extended version of AISTATS 2022 paper. Added results for multi-node interventions, and shortened title
null
null
null
cs.LG cs.AI cs.DM stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A well-studied challenge that arises in the structure learning problem of causal directed acyclic graphs (DAG) is that using observational data, one can only learn the graph up to a "Markov equivalence class" (MEC). The remaining undirected edges have to be oriented using interventions, which can be very expensive to perform in applications. Thus, the problem of minimizing the number of interventions needed to fully orient the MEC has received a lot of recent attention, and is also the focus of this work. Our first result is a new universal lower bound on the number of single-node interventions that any algorithm (whether active or passive) would need to perform in order to orient a given MEC. Our second result shows that this bound is, in fact, within a factor of two of the size of the smallest set of single-node interventions that can orient the MEC. Our lower bound is provably better than previously known lower bounds. Further, using simulations on synthetic graphs and by giving examples of special graph families, we show that our bound is often significantly better. To prove our lower bound, we develop the notion of clique-block shared-parents (CBSP) orderings, which are topological orderings of DAGs without v-structures and satisfy certain special properties. We also use the techniques developed here to extend our results to the setting of multi-node interventions.
[ { "created": "Tue, 9 Nov 2021 11:58:44 GMT", "version": "v1" }, { "created": "Tue, 23 Nov 2021 21:36:26 GMT", "version": "v2" }, { "created": "Mon, 31 Jan 2022 07:11:34 GMT", "version": "v3" }, { "created": "Thu, 19 May 2022 11:24:23 GMT", "version": "v4" } ]
2022-05-20
[ [ "Porwal", "Vibhor", "" ], [ "Srivastava", "Piyush", "" ], [ "Sinha", "Gaurav", "" ] ]
A well-studied challenge that arises in the structure learning problem of causal directed acyclic graphs (DAG) is that using observational data, one can only learn the graph up to a "Markov equivalence class" (MEC). The remaining undirected edges have to be oriented using interventions, which can be very expensive to perform in applications. Thus, the problem of minimizing the number of interventions needed to fully orient the MEC has received a lot of recent attention, and is also the focus of this work. Our first result is a new universal lower bound on the number of single-node interventions that any algorithm (whether active or passive) would need to perform in order to orient a given MEC. Our second result shows that this bound is, in fact, within a factor of two of the size of the smallest set of single-node interventions that can orient the MEC. Our lower bound is provably better than previously known lower bounds. Further, using simulations on synthetic graphs and by giving examples of special graph families, we show that our bound is often significantly better. To prove our lower bound, we develop the notion of clique-block shared-parents (CBSP) orderings, which are topological orderings of DAGs without v-structures and satisfy certain special properties. We also use the techniques developed here to extend our results to the setting of multi-node interventions.
2005.07493
Shubham Agarwal
Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, Verena Rieser
History for Visual Dialog: Do we really need it?
ACL'20
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Dialog involves "understanding" the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog history outperform models that don't, achieving state-of-the-art performance (72 % NDCG on val set). However, we also expose shortcomings of the crowd-sourcing dataset collection procedure by showing that history is indeed only required for a small amount of the data and that the current evaluation metric encourages generic replies. To that end, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG.
[ { "created": "Fri, 8 May 2020 14:58:09 GMT", "version": "v1" } ]
2020-05-18
[ [ "Agarwal", "Shubham", "" ], [ "Bui", "Trung", "" ], [ "Lee", "Joon-Young", "" ], [ "Konstas", "Ioannis", "" ], [ "Rieser", "Verena", "" ] ]
Visual Dialog involves "understanding" the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog history outperform models that don't, achieving state-of-the-art performance (72 % NDCG on val set). However, we also expose shortcomings of the crowd-sourcing dataset collection procedure by showing that history is indeed only required for a small amount of the data and that the current evaluation metric encourages generic replies. To that end, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG.
2107.05241
Vinod K Kurmi
Blessen George and Vinod K. Kurmi and Vinay P. Namboodiri
Prb-GAN: A Probabilistic Framework for GAN Modelling
null
null
null
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Generative adversarial networks (GANs) are very popular to generate realistic images, but they often suffer from the training instability issues and the phenomenon of mode loss. In order to attain greater diversity in GAN synthesized data, it is critical to solving the problem of mode loss. Our work explores probabilistic approaches to GAN modelling that could allow us to tackle these issues. We present Prb-GANs, a new variation that uses dropout to create a distribution over the network parameters with the posterior learnt using variational inference. We describe theoretically and validate experimentally using simple and complex datasets the benefits of such an approach. We look into further improvements using the concept of uncertainty measures. Through a set of further modifications to the loss functions for each network of the GAN, we are able to get results that show the improvement of GAN performance. Our methods are extremely simple and require very little modification to existing GAN architecture.
[ { "created": "Mon, 12 Jul 2021 08:04:13 GMT", "version": "v1" } ]
2021-07-13
[ [ "George", "Blessen", "" ], [ "Kurmi", "Vinod K.", "" ], [ "Namboodiri", "Vinay P.", "" ] ]
Generative adversarial networks (GANs) are very popular to generate realistic images, but they often suffer from the training instability issues and the phenomenon of mode loss. In order to attain greater diversity in GAN synthesized data, it is critical to solving the problem of mode loss. Our work explores probabilistic approaches to GAN modelling that could allow us to tackle these issues. We present Prb-GANs, a new variation that uses dropout to create a distribution over the network parameters with the posterior learnt using variational inference. We describe theoretically and validate experimentally using simple and complex datasets the benefits of such an approach. We look into further improvements using the concept of uncertainty measures. Through a set of further modifications to the loss functions for each network of the GAN, we are able to get results that show the improvement of GAN performance. Our methods are extremely simple and require very little modification to existing GAN architecture.
1908.08997
Thomas Hartley
Thomas Hartley, Kirill Sidorov, Christopher Willis and David Marshall
Gradient Weighted Superpixels for Interpretability in CNNs
Presented at BMVC 2019: Workshop on Interpretable and Explainable Machine Vision, Cardiff, UK
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As Convolutional Neural Networks embed themselves into our everyday lives, the need for them to be interpretable increases. However, there is often a trade-off between methods that are efficient to compute but produce an explanation that is difficult to interpret, and those that are slow to compute but provide a more interpretable result. This is particularly challenging in problem spaces that require a large input volume, especially video which combines both spatial and temporal dimensions. In this work we introduce the idea of scoring superpixels through the use of gradient based pixel scoring techniques. We show qualitatively and quantitatively that this is able to approximate LIME, in a fraction of the time. We investigate our techniques using both image classification, and action recognition networks on large scale datasets (ImageNet and Kinetics-400 respectively).
[ { "created": "Fri, 16 Aug 2019 12:02:25 GMT", "version": "v1" } ]
2019-08-27
[ [ "Hartley", "Thomas", "" ], [ "Sidorov", "Kirill", "" ], [ "Willis", "Christopher", "" ], [ "Marshall", "David", "" ] ]
As Convolutional Neural Networks embed themselves into our everyday lives, the need for them to be interpretable increases. However, there is often a trade-off between methods that are efficient to compute but produce an explanation that is difficult to interpret, and those that are slow to compute but provide a more interpretable result. This is particularly challenging in problem spaces that require a large input volume, especially video which combines both spatial and temporal dimensions. In this work we introduce the idea of scoring superpixels through the use of gradient based pixel scoring techniques. We show qualitatively and quantitatively that this is able to approximate LIME, in a fraction of the time. We investigate our techniques using both image classification, and action recognition networks on large scale datasets (ImageNet and Kinetics-400 respectively).
2301.10531
Ananya Jana
Ananya Jana, Hrebesh Molly Subhash, Dimitris N. Metaxas
3D Tooth Mesh Segmentation with Simplified Mesh Cell Representation
accepted at IEEE ISBI 2023 International Symposium on Biomedical Imaging
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Manual tooth segmentation of 3D tooth meshes is tedious and there is variations among dentists. %Manual tooth annotation of 3D tooth meshes is a tedious task. Several deep learning based methods have been proposed to perform automatic tooth mesh segmentation. Many of the proposed tooth mesh segmentation algorithms summarize the mesh cell as - the cell center or barycenter, the normal at barycenter, the cell vertices and the normals at the cell vertices. Summarizing of the mesh cell/triangle in this manner imposes an implicit structural constraint and makes it difficult to work with multiple resolutions which is done in many point cloud based deep learning algorithms. We propose a novel segmentation method which utilizes only the barycenter and the normal at the barycenter information of the mesh cell and yet achieves competitive performance. We are the first to demonstrate that it is possible to relax the implicit structural constraint and yet achieve superior segmentation performance
[ { "created": "Wed, 25 Jan 2023 11:43:56 GMT", "version": "v1" } ]
2023-01-26
[ [ "Jana", "Ananya", "" ], [ "Subhash", "Hrebesh Molly", "" ], [ "Metaxas", "Dimitris N.", "" ] ]
Manual tooth segmentation of 3D tooth meshes is tedious and there is variations among dentists. %Manual tooth annotation of 3D tooth meshes is a tedious task. Several deep learning based methods have been proposed to perform automatic tooth mesh segmentation. Many of the proposed tooth mesh segmentation algorithms summarize the mesh cell as - the cell center or barycenter, the normal at barycenter, the cell vertices and the normals at the cell vertices. Summarizing of the mesh cell/triangle in this manner imposes an implicit structural constraint and makes it difficult to work with multiple resolutions which is done in many point cloud based deep learning algorithms. We propose a novel segmentation method which utilizes only the barycenter and the normal at the barycenter information of the mesh cell and yet achieves competitive performance. We are the first to demonstrate that it is possible to relax the implicit structural constraint and yet achieve superior segmentation performance
2309.01032
Huiyuan Chen
Huiyuan Chen, Kaixiong Zhou, Kwei-Herng Lai, Chin-Chia Michael Yeh, Yan Zheng, Xia Hu, Hao Yang
Hessian-aware Quantized Node Embeddings for Recommendation
null
null
10.1145/3604915.3608826
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, the process of searching and ranking from a large item corpus usually requires high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into the binary embedding space to reduce space requirements and accelerate inference. Also, they use the Straight-through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes the gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete representations of users/items that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantized module for compressing full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference speeds compared to vanilla GNNs. To address the gradient mismatch problem in STE, we further consider the quantized errors and its second-order derivatives for better stability. The experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
[ { "created": "Sat, 2 Sep 2023 22:34:26 GMT", "version": "v1" } ]
2023-09-06
[ [ "Chen", "Huiyuan", "" ], [ "Zhou", "Kaixiong", "" ], [ "Lai", "Kwei-Herng", "" ], [ "Yeh", "Chin-Chia Michael", "" ], [ "Zheng", "Yan", "" ], [ "Hu", "Xia", "" ], [ "Yang", "Hao", "" ] ]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, the process of searching and ranking from a large item corpus usually requires high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into the binary embedding space to reduce space requirements and accelerate inference. Also, they use the Straight-through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes the gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete representations of users/items that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantized module for compressing full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference speeds compared to vanilla GNNs. To address the gradient mismatch problem in STE, we further consider the quantized errors and its second-order derivatives for better stability. The experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
2307.09188
Tomas Bueno Momcilovic
Tomas Bueno Mom\v{c}ilovi\'c, Matthias Buchinger, Dian Balta
Need-driven decision-making and prototyping for DLT: Framework and web-based tool
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
In its 14 years, distributed ledger technology has attracted increasing attention, investments, enthusiasm, and user base. However, ongoing doubts about its usefulness and recent losses of trust in prominent cryptocurrencies have fueled deeply skeptical assessments. Multiple groups attempted to disentangle the technology from the associated hype and controversy by building workflows for rapid prototyping and informed decision-making, but their mostly isolated work leaves users only with fewer unclarities. To bridge the gaps between these contributions, we develop a holistic analytical framework and open-source web tool for making evidence-based decisions. Consisting of three stages - evaluation, elicitation, and design - the framework relies on input from the users' domain knowledge, maps their choices, and provides an output of needed technology bundles. We apply it to an example clinical use case to clarify the directions of our contribution charts for prototyping, hopefully driving the conversation towards ways to enhance further tools and approaches.
[ { "created": "Tue, 18 Jul 2023 12:19:47 GMT", "version": "v1" } ]
2023-07-19
[ [ "Momčilović", "Tomas Bueno", "" ], [ "Buchinger", "Matthias", "" ], [ "Balta", "Dian", "" ] ]
In its 14 years, distributed ledger technology has attracted increasing attention, investments, enthusiasm, and user base. However, ongoing doubts about its usefulness and recent losses of trust in prominent cryptocurrencies have fueled deeply skeptical assessments. Multiple groups attempted to disentangle the technology from the associated hype and controversy by building workflows for rapid prototyping and informed decision-making, but their mostly isolated work leaves users only with fewer unclarities. To bridge the gaps between these contributions, we develop a holistic analytical framework and open-source web tool for making evidence-based decisions. Consisting of three stages - evaluation, elicitation, and design - the framework relies on input from the users' domain knowledge, maps their choices, and provides an output of needed technology bundles. We apply it to an example clinical use case to clarify the directions of our contribution charts for prototyping, hopefully driving the conversation towards ways to enhance further tools and approaches.
2210.11719
Guo Weiyu
Weiyu Guo, Zhaoshuo Li, Yongkui Yang, Zheng Wang, Russell H. Taylor, Mathias Unberath, Alan Yuille, and Yingwei Li
Context-Enhanced Stereo Transformer
Accepted by ECCV2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stereo depth estimation is of great interest for computer vision research. However, existing methods struggles to generalize and predict reliably in hazardous regions, such as large uniform regions. To overcome these limitations, we propose Context Enhanced Path (CEP). CEP improves the generalization and robustness against common failure cases in existing solutions by capturing the long-range global information. We construct our stereo depth estimation model, Context Enhanced Stereo Transformer (CSTR), by plugging CEP into the state-of-the-art stereo depth estimation method Stereo Transformer. CSTR is examined on distinct public datasets, such as Scene Flow, Middlebury-2014, KITTI-2015, and MPI-Sintel. We find CSTR outperforms prior approaches by a large margin. For example, in the zero-shot synthetic-to-real setting, CSTR outperforms the best competing approaches on Middlebury-2014 dataset by 11%. Our extensive experiments demonstrate that the long-range information is critical for stereo matching task and CEP successfully captures such information.
[ { "created": "Fri, 21 Oct 2022 04:10:47 GMT", "version": "v1" } ]
2022-10-24
[ [ "Guo", "Weiyu", "" ], [ "Li", "Zhaoshuo", "" ], [ "Yang", "Yongkui", "" ], [ "Wang", "Zheng", "" ], [ "Taylor", "Russell H.", "" ], [ "Unberath", "Mathias", "" ], [ "Yuille", "Alan", "" ], [ "Li", "Yingwei", "" ] ]
Stereo depth estimation is of great interest for computer vision research. However, existing methods struggles to generalize and predict reliably in hazardous regions, such as large uniform regions. To overcome these limitations, we propose Context Enhanced Path (CEP). CEP improves the generalization and robustness against common failure cases in existing solutions by capturing the long-range global information. We construct our stereo depth estimation model, Context Enhanced Stereo Transformer (CSTR), by plugging CEP into the state-of-the-art stereo depth estimation method Stereo Transformer. CSTR is examined on distinct public datasets, such as Scene Flow, Middlebury-2014, KITTI-2015, and MPI-Sintel. We find CSTR outperforms prior approaches by a large margin. For example, in the zero-shot synthetic-to-real setting, CSTR outperforms the best competing approaches on Middlebury-2014 dataset by 11%. Our extensive experiments demonstrate that the long-range information is critical for stereo matching task and CEP successfully captures such information.
1906.01907
Hongyu Li
Hongyu Li, Fan Zhu, Junhua Qiu
Towards Document Image Quality Assessment: A Text Line Based Framework and A Synthetic Text Line Image Dataset
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the low quality of document images will greatly undermine the chances of success in automatic text recognition and analysis, it is necessary to assess the quality of document images uploaded in online business process, so as to reject those images of low quality. In this paper, we attempt to achieve document image quality assessment and our contributions are twofold. Firstly, since document image quality assessment is more interested in text, we propose a text line based framework to estimate document image quality, which is composed of three stages: text line detection, text line quality prediction, and overall quality assessment. Text line detection aims to find potential text lines with a detector. In the text line quality prediction stage, the quality score is computed for each text line with a CNN-based prediction model. The overall quality of document images is finally assessed with the ensemble of all text line quality. Secondly, to train the prediction model, a large-scale dataset, comprising 52,094 text line images, is synthesized with diverse attributes. For each text line image, a quality label is computed with a piece-wise function. To demonstrate the effectiveness of the proposed framework, comprehensive experiments are evaluated on two popular document image quality assessment benchmarks. Our framework significantly outperforms the state-of-the-art methods by large margins on the large and complicated dataset.
[ { "created": "Wed, 5 Jun 2019 09:40:34 GMT", "version": "v1" } ]
2019-06-06
[ [ "Li", "Hongyu", "" ], [ "Zhu", "Fan", "" ], [ "Qiu", "Junhua", "" ] ]
Since the low quality of document images will greatly undermine the chances of success in automatic text recognition and analysis, it is necessary to assess the quality of document images uploaded in online business process, so as to reject those images of low quality. In this paper, we attempt to achieve document image quality assessment and our contributions are twofold. Firstly, since document image quality assessment is more interested in text, we propose a text line based framework to estimate document image quality, which is composed of three stages: text line detection, text line quality prediction, and overall quality assessment. Text line detection aims to find potential text lines with a detector. In the text line quality prediction stage, the quality score is computed for each text line with a CNN-based prediction model. The overall quality of document images is finally assessed with the ensemble of all text line quality. Secondly, to train the prediction model, a large-scale dataset, comprising 52,094 text line images, is synthesized with diverse attributes. For each text line image, a quality label is computed with a piece-wise function. To demonstrate the effectiveness of the proposed framework, comprehensive experiments are evaluated on two popular document image quality assessment benchmarks. Our framework significantly outperforms the state-of-the-art methods by large margins on the large and complicated dataset.
2204.06601
Jeremy Tien
Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca D. Dragan, Daniel S. Brown
Causal Confusion and Reward Misidentification in Preference-Based Reward Learning
In the proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023). https://iclr.cc/virtual/2023/poster/10822
null
null
null
cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but has been shown anecdotally to be prone to spurious correlations and reward hacking behaviors. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, we focus on a systematic study of causal confusion and reward misidentification when learning from preferences. In particular, we perform a series of sensitivity and ablation analyses on several benchmark domains where rewards learned from preferences achieve minimal test error but fail to generalize to out-of-distribution states -- resulting in poor policy performance when optimized. We find that the presence of non-causal distractor features, noise in the stated preferences, and partial state observability can all exacerbate reward misidentification. We also identify a set of methods with which to interpret misidentified learned rewards. In general, we observe that optimizing misidentified rewards drives the policy off the reward's training distribution, resulting in high predicted (learned) rewards but low true rewards. These findings illuminate the susceptibility of preference learning to reward misidentification and causal confusion -- failure to consider even one of many factors can result in unexpected, undesirable behavior.
[ { "created": "Wed, 13 Apr 2022 18:41:41 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2022 01:52:35 GMT", "version": "v2" }, { "created": "Thu, 9 Mar 2023 02:45:48 GMT", "version": "v3" }, { "created": "Sat, 18 Mar 2023 20:44:45 GMT", "version": "v4" } ]
2023-03-21
[ [ "Tien", "Jeremy", "" ], [ "He", "Jerry Zhi-Yang", "" ], [ "Erickson", "Zackory", "" ], [ "Dragan", "Anca D.", "" ], [ "Brown", "Daniel S.", "" ] ]
Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but has been shown anecdotally to be prone to spurious correlations and reward hacking behaviors. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, we focus on a systematic study of causal confusion and reward misidentification when learning from preferences. In particular, we perform a series of sensitivity and ablation analyses on several benchmark domains where rewards learned from preferences achieve minimal test error but fail to generalize to out-of-distribution states -- resulting in poor policy performance when optimized. We find that the presence of non-causal distractor features, noise in the stated preferences, and partial state observability can all exacerbate reward misidentification. We also identify a set of methods with which to interpret misidentified learned rewards. In general, we observe that optimizing misidentified rewards drives the policy off the reward's training distribution, resulting in high predicted (learned) rewards but low true rewards. These findings illuminate the susceptibility of preference learning to reward misidentification and causal confusion -- failure to consider even one of many factors can result in unexpected, undesirable behavior.
1302.3596
Kim-Leng Poh
Kim-Leng Poh, Eric J. Horvitz
A Graph-Theoretic Analysis of Information Value
Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996)
null
null
UAI-P-1996-PG-427-435
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive qualitative relationships about the informational relevance of variables in graphical decision models based on a consideration of the topology of the models. Specifically, we identify dominance relations for the expected value of information on chance variables in terms of their position and relationships in influence diagrams. The qualitative relationships can be harnessed to generate nonnumerical procedures for ordering uncertain variables in a decision model by their informational relevance.
[ { "created": "Wed, 13 Feb 2013 14:15:55 GMT", "version": "v1" } ]
2013-02-18
[ [ "Poh", "Kim-Leng", "" ], [ "Horvitz", "Eric J.", "" ] ]
We derive qualitative relationships about the informational relevance of variables in graphical decision models based on a consideration of the topology of the models. Specifically, we identify dominance relations for the expected value of information on chance variables in terms of their position and relationships in influence diagrams. The qualitative relationships can be harnessed to generate nonnumerical procedures for ordering uncertain variables in a decision model by their informational relevance.
1202.3711
Tom Claassen
Tom Claassen, Tom Heskes
A Logical Characterization of Constraint-Based Causal Discovery
null
null
null
UAI-P-2011-PG-135-144
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach to constraint-based causal discovery, that takes the form of straightforward logical inference, applied to a list of simple, logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, as the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models.
[ { "created": "Tue, 14 Feb 2012 16:41:17 GMT", "version": "v1" } ]
2012-02-20
[ [ "Claassen", "Tom", "" ], [ "Heskes", "Tom", "" ] ]
We present a novel approach to constraint-based causal discovery, that takes the form of straightforward logical inference, applied to a list of simple, logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, as the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models.
2210.02109
Wilson Jallet
Wilson Jallet (WILLOW, LAAS-GEPETTO), Antoine Bambade (ENPC, WILLOW), Nicolas Mansard (LAAS-GEPETTO), Justin Carpentier (WILLOW)
ProxNLP: a primal-dual augmented Lagrangian solver for nonlinear programming in Robotics and beyond
Workshop paper at the 6th Legged Robots Workshop, at the IEEE International Conference on Robotics and Automation (ICRA) 2022
6th Legged Robots Workshop, May 2022, Philadelphia, Pennsylvania, United States
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical optimization is the workhorse behind several aspects of modern robotics and control. In these applications, the focus is on constrained optimization, and the ability to work on manifolds (such as the classical matrix Lie groups), along with a specific requirement for robustness and speed. In recent years, augmented Lagrangian methods have seen a resurgence due to their robustness and flexibility, their connections to (inexact) proximal-point methods, and their interoperability with Newton or semismooth Newton methods. In the sequel, we present primal-dual augmented Lagrangian method for inequality-constrained problems on manifolds, which we introduced in our recent work, as well as an efficient C++ implementation suitable for use in robotics applications and beyond.
[ { "created": "Wed, 5 Oct 2022 09:18:51 GMT", "version": "v1" } ]
2022-10-06
[ [ "Jallet", "Wilson", "", "WILLOW, LAAS-GEPETTO" ], [ "Bambade", "Antoine", "", "ENPC, WILLOW" ], [ "Mansard", "Nicolas", "", "LAAS-GEPETTO" ], [ "Carpentier", "Justin", "", "WILLOW" ] ]
Mathematical optimization is the workhorse behind several aspects of modern robotics and control. In these applications, the focus is on constrained optimization, and the ability to work on manifolds (such as the classical matrix Lie groups), along with a specific requirement for robustness and speed. In recent years, augmented Lagrangian methods have seen a resurgence due to their robustness and flexibility, their connections to (inexact) proximal-point methods, and their interoperability with Newton or semismooth Newton methods. In the sequel, we present primal-dual augmented Lagrangian method for inequality-constrained problems on manifolds, which we introduced in our recent work, as well as an efficient C++ implementation suitable for use in robotics applications and beyond.
2308.05600
Edouard Yvinec
Edouard Yvinec, Arnaud Dapogny and Kevin Bailly
NUPES : Non-Uniform Post-Training Quantization via Power Exponent Search
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network (DNN) deployment has been confined to larger hardware devices due to their expensive computational requirements. This challenge has recently reached another scale with the emergence of large language models (LLMs). In order to reduce both their memory footprint and latency, a promising technique is quantization. It consists in converting floating point representations to low bit-width fixed point representations, usually by assuming a uniform mapping onto a regular grid. This process, referred to in the literature as uniform quantization, may however be ill-suited as most DNN weights and activations follow a bell-shaped distribution. This is even worse on LLMs whose weight distributions are known to exhibit large, high impact, outlier values. In this work, we propose an improvement over the most commonly adopted way to tackle this limitation in deep learning models quantization, namely, non-uniform quantization. NUPES leverages automorphisms to preserve the scalar multiplications. Such transformations are derived from power functions. However, the optimization of the exponent parameter and weight values remains a challenging and novel problem which could not be solved with previous post training optimization techniques which only learn to round up or down weight values in order to preserve the predictive function. We circumvent this limitation with a new paradigm: learning new quantized weights over the entire quantized space. Similarly, we enable the optimization of the power exponent, i.e. the optimization of the quantization operator itself during training by alleviating all the numerical instabilities. The resulting predictive function is compatible with integer-only low-bit inference. We show the ability of the method to achieve state-of-the-art compression rates in both, data-free and data-driven configurations.
[ { "created": "Thu, 10 Aug 2023 14:19:58 GMT", "version": "v1" } ]
2023-08-11
[ [ "Yvinec", "Edouard", "" ], [ "Dapogny", "Arnaud", "" ], [ "Bailly", "Kevin", "" ] ]
Deep neural network (DNN) deployment has been confined to larger hardware devices due to their expensive computational requirements. This challenge has recently reached another scale with the emergence of large language models (LLMs). In order to reduce both their memory footprint and latency, a promising technique is quantization. It consists in converting floating point representations to low bit-width fixed point representations, usually by assuming a uniform mapping onto a regular grid. This process, referred to in the literature as uniform quantization, may however be ill-suited as most DNN weights and activations follow a bell-shaped distribution. This is even worse on LLMs whose weight distributions are known to exhibit large, high impact, outlier values. In this work, we propose an improvement over the most commonly adopted way to tackle this limitation in deep learning models quantization, namely, non-uniform quantization. NUPES leverages automorphisms to preserve the scalar multiplications. Such transformations are derived from power functions. However, the optimization of the exponent parameter and weight values remains a challenging and novel problem which could not be solved with previous post training optimization techniques which only learn to round up or down weight values in order to preserve the predictive function. We circumvent this limitation with a new paradigm: learning new quantized weights over the entire quantized space. Similarly, we enable the optimization of the power exponent, i.e. the optimization of the quantization operator itself during training by alleviating all the numerical instabilities. The resulting predictive function is compatible with integer-only low-bit inference. We show the ability of the method to achieve state-of-the-art compression rates in both, data-free and data-driven configurations.
2201.11870
Payam Karisani
Payam Karisani
Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers
AAAI 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel multiple-source unsupervised model for text classification under domain shift. Our model exploits the update rates in document representations to dynamically integrate domain encoders. It also employs a probabilistic heuristic to infer the error rate in the target domain in order to pair source classifiers. Our heuristic exploits data transformation cost and the classifier accuracy in the target feature space. We have used real world scenarios of Domain Adaptation to evaluate the efficacy of our algorithm. We also used pretrained multi-layer transformers as the document encoder in the experiments to demonstrate whether the improvement achieved by domain adaptation models can be delivered by out-of-the-box language model pretraining. The experiments testify that our model is the top performing approach in this setting.
[ { "created": "Fri, 28 Jan 2022 00:50:01 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2022 15:41:38 GMT", "version": "v2" } ]
2022-03-22
[ [ "Karisani", "Payam", "" ] ]
We present a novel multiple-source unsupervised model for text classification under domain shift. Our model exploits the update rates in document representations to dynamically integrate domain encoders. It also employs a probabilistic heuristic to infer the error rate in the target domain in order to pair source classifiers. Our heuristic exploits data transformation cost and the classifier accuracy in the target feature space. We have used real world scenarios of Domain Adaptation to evaluate the efficacy of our algorithm. We also used pretrained multi-layer transformers as the document encoder in the experiments to demonstrate whether the improvement achieved by domain adaptation models can be delivered by out-of-the-box language model pretraining. The experiments testify that our model is the top performing approach in this setting.
2405.17262
Shengjie Liu
Shengjie Liu, Lu Zhang
Deep Feature Gaussian Processes for Single-Scene Aerosol Optical Depth Reconstruction
Accepted to IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
null
10.1109/LGRS.2024.3398689
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Remote sensing data provide a low-cost solution for large-scale monitoring of air pollution via the retrieval of aerosol optical depth (AOD), but is often limited by cloud contamination. Existing methods for AOD reconstruction rely on temporal information. However, for remote sensing data at high spatial resolution, multi-temporal observations are often unavailable. In this letter, we take advantage of deep representation learning from convolutional neural networks and propose Deep Feature Gaussian Processes (DFGP) for single-scene AOD reconstruction. By using deep learning, we transform the variables to a feature space with better explainable power. By using Gaussian processes, we explicitly consider the correlation between observed AOD and missing AOD in spatial and feature domains. Experiments on two AOD datasets with real-world cloud patterns showed that the proposed method outperformed deep CNN and random forest, achieving R$^2$ of 0.7431 on MODIS AOD and R$^2$ of 0.9211 on EMIT AOD, compared to deep CNN's R$^2$ of 0.6507 and R$^2$ of 0.8619. The proposed methods increased R$^2$ by over 0.35 compared to the popular random forest in AOD reconstruction. The data and code used in this study are available at \url{https://skrisliu.com/dfgp}.
[ { "created": "Mon, 27 May 2024 15:20:40 GMT", "version": "v1" } ]
2024-05-28
[ [ "Liu", "Shengjie", "" ], [ "Zhang", "Lu", "" ] ]
Remote sensing data provide a low-cost solution for large-scale monitoring of air pollution via the retrieval of aerosol optical depth (AOD), but is often limited by cloud contamination. Existing methods for AOD reconstruction rely on temporal information. However, for remote sensing data at high spatial resolution, multi-temporal observations are often unavailable. In this letter, we take advantage of deep representation learning from convolutional neural networks and propose Deep Feature Gaussian Processes (DFGP) for single-scene AOD reconstruction. By using deep learning, we transform the variables to a feature space with better explainable power. By using Gaussian processes, we explicitly consider the correlation between observed AOD and missing AOD in spatial and feature domains. Experiments on two AOD datasets with real-world cloud patterns showed that the proposed method outperformed deep CNN and random forest, achieving R$^2$ of 0.7431 on MODIS AOD and R$^2$ of 0.9211 on EMIT AOD, compared to deep CNN's R$^2$ of 0.6507 and R$^2$ of 0.8619. The proposed methods increased R$^2$ by over 0.35 compared to the popular random forest in AOD reconstruction. The data and code used in this study are available at \url{https://skrisliu.com/dfgp}.
2303.15652
Rashmi Ranjan Bhuyan
Rashmi Ranjan Bhuyan, Adel Javanmard, Sungchul Kim, Gourab Mukherjee, Ryan A. Rossi, Tong Yu, Handong Zhao
Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model
43 pages, 10 figures
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider dynamic pricing strategies in a streamed longitudinal data set-up where the objective is to maximize, over time, the cumulative profit across a large number of customer segments. We consider a dynamic model with the consumers' preferences as well as price sensitivity varying over time. Building on the well-known finding that consumers sharing similar characteristics act in similar ways, we consider a global shrinkage structure, which assumes that the consumers' preferences across the different segments can be well approximated by a spatial autoregressive (SAR) model. In such a streamed longitudinal set-up, we measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance. We propose a pricing policy based on penalized stochastic gradient descent (PSGD) and explicitly characterize its regret as functions of time, the temporal variability in the model parameters as well as the strength of the auto-correlation network structure spanning the varied customer segments. Our regret analysis results not only demonstrate asymptotic optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information as policies based on unshrunken models are highly sub-optimal in the aforementioned set-up. We conduct simulation experiments across a wide range of regimes as well as real-world networks based studies and report encouraging performance for our proposed method.
[ { "created": "Tue, 28 Mar 2023 00:23:23 GMT", "version": "v1" }, { "created": "Sat, 14 Oct 2023 00:53:41 GMT", "version": "v2" } ]
2023-10-17
[ [ "Bhuyan", "Rashmi Ranjan", "" ], [ "Javanmard", "Adel", "" ], [ "Kim", "Sungchul", "" ], [ "Mukherjee", "Gourab", "" ], [ "Rossi", "Ryan A.", "" ], [ "Yu", "Tong", "" ], [ "Zhao", "Handong", "" ] ]
We consider dynamic pricing strategies in a streamed longitudinal data set-up where the objective is to maximize, over time, the cumulative profit across a large number of customer segments. We consider a dynamic model with the consumers' preferences as well as price sensitivity varying over time. Building on the well-known finding that consumers sharing similar characteristics act in similar ways, we consider a global shrinkage structure, which assumes that the consumers' preferences across the different segments can be well approximated by a spatial autoregressive (SAR) model. In such a streamed longitudinal set-up, we measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance. We propose a pricing policy based on penalized stochastic gradient descent (PSGD) and explicitly characterize its regret as functions of time, the temporal variability in the model parameters as well as the strength of the auto-correlation network structure spanning the varied customer segments. Our regret analysis results not only demonstrate asymptotic optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information as policies based on unshrunken models are highly sub-optimal in the aforementioned set-up. We conduct simulation experiments across a wide range of regimes as well as real-world networks based studies and report encouraging performance for our proposed method.
2310.04782
Yuchen Yang
Yuchen Yang, Houqiang Li, Yanfeng Wang and Yu Wang
Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In recent years, large-scale language models (LLMs) have gained attention for their impressive text generation capabilities. However, these models often face the challenge of "hallucination," which undermines their reliability. In this study, we introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty. Human-defined methods for estimating uncertainty typically assume that "uncertainty is lower when the model's response is correct compared to when it is incorrect." However, setting a precise threshold to distinguish correctness is challenging. Therefore, we introduce uncertainty information as an intermediary variable that implicitly influences the model's behavior. Our innovative uncertainty-aware in-context learning framework involves fine-tuning the LLM using a calibration dataset. Our aim is to improve the model's responses by filtering out answers with high uncertainty while considering the model's knowledge limitations. We evaluate the model's knowledge by examining multiple responses to the same question for the presence of a correct answer. When the model lacks relevant knowledge, the response should indicate that the question cannot be answered. Conversely, when the model has relevant knowledge, the response should provide the correct answer. Extensive experiments confirm the effectiveness of our framework, leading to two key findings. First, the logit output values of the LLM partly reflect inherent uncertainty. Second, our model autonomously recognizes uncertainty, resulting in improved responses.
[ { "created": "Sat, 7 Oct 2023 12:06:53 GMT", "version": "v1" } ]
2023-10-10
[ [ "Yang", "Yuchen", "" ], [ "Li", "Houqiang", "" ], [ "Wang", "Yanfeng", "" ], [ "Wang", "Yu", "" ] ]
In recent years, large-scale language models (LLMs) have gained attention for their impressive text generation capabilities. However, these models often face the challenge of "hallucination," which undermines their reliability. In this study, we introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty. Human-defined methods for estimating uncertainty typically assume that "uncertainty is lower when the model's response is correct compared to when it is incorrect." However, setting a precise threshold to distinguish correctness is challenging. Therefore, we introduce uncertainty information as an intermediary variable that implicitly influences the model's behavior. Our innovative uncertainty-aware in-context learning framework involves fine-tuning the LLM using a calibration dataset. Our aim is to improve the model's responses by filtering out answers with high uncertainty while considering the model's knowledge limitations. We evaluate the model's knowledge by examining multiple responses to the same question for the presence of a correct answer. When the model lacks relevant knowledge, the response should indicate that the question cannot be answered. Conversely, when the model has relevant knowledge, the response should provide the correct answer. Extensive experiments confirm the effectiveness of our framework, leading to two key findings. First, the logit output values of the LLM partly reflect inherent uncertainty. Second, our model autonomously recognizes uncertainty, resulting in improved responses.
1809.03062
Julius Berner
Julius Berner, Philipp Grohs, Arnulf Jentzen
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations
null
SIAM Journal on Mathematics of Data Science 2(3), 2020, pp. 631-657
10.1137/19M125649X
null
cs.LG cs.NA math.NA stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of new classification and regression algorithms based on empirical risk minimization (ERM) over deep neural network hypothesis classes, coined deep learning, revolutionized the area of artificial intelligence, machine learning, and data analysis. In particular, these methods have been applied to the numerical solution of high-dimensional partial differential equations with great success. Recent simulations indicate that deep learning-based algorithms are capable of overcoming the curse of dimensionality for the numerical solution of Kolmogorov equations, which are widely used in models from engineering, finance, and the natural sciences. The present paper considers under which conditions ERM over a deep neural network hypothesis class approximates the solution of a $d$-dimensional Kolmogorov equation with affine drift and diffusion coefficients and typical initial values arising from problems in computational finance up to error $\varepsilon$. We establish that, with high probability over draws of training samples, such an approximation can be achieved with both the size of the hypothesis class and the number of training samples scaling only polynomially in $d$ and $\varepsilon^{-1}$. It can be concluded that ERM over deep neural network hypothesis classes overcomes the curse of dimensionality for the numerical solution of linear Kolmogorov equations with affine coefficients.
[ { "created": "Sun, 9 Sep 2018 23:50:37 GMT", "version": "v1" }, { "created": "Thu, 5 Dec 2019 15:33:20 GMT", "version": "v2" }, { "created": "Wed, 11 Nov 2020 12:46:12 GMT", "version": "v3" } ]
2020-11-12
[ [ "Berner", "Julius", "" ], [ "Grohs", "Philipp", "" ], [ "Jentzen", "Arnulf", "" ] ]
The development of new classification and regression algorithms based on empirical risk minimization (ERM) over deep neural network hypothesis classes, coined deep learning, revolutionized the area of artificial intelligence, machine learning, and data analysis. In particular, these methods have been applied to the numerical solution of high-dimensional partial differential equations with great success. Recent simulations indicate that deep learning-based algorithms are capable of overcoming the curse of dimensionality for the numerical solution of Kolmogorov equations, which are widely used in models from engineering, finance, and the natural sciences. The present paper considers under which conditions ERM over a deep neural network hypothesis class approximates the solution of a $d$-dimensional Kolmogorov equation with affine drift and diffusion coefficients and typical initial values arising from problems in computational finance up to error $\varepsilon$. We establish that, with high probability over draws of training samples, such an approximation can be achieved with both the size of the hypothesis class and the number of training samples scaling only polynomially in $d$ and $\varepsilon^{-1}$. It can be concluded that ERM over deep neural network hypothesis classes overcomes the curse of dimensionality for the numerical solution of linear Kolmogorov equations with affine coefficients.
2105.07391
Kazuya Ueki
Kazuya Ueki
Survey of Visual-Semantic Embedding Methods for Zero-Shot Image Retrieval
Accepted by 20th IEEE International Conference on Machine Learning and Applications (ICMLA2021)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual-semantic embedding is an interesting research topic because it is useful for various tasks, such as visual question answering (VQA), image-text retrieval, image captioning, and scene graph generation. In this paper, we focus on zero-shot image retrieval using sentences as queries and present a survey of the technological trends in this area. First, we provide a comprehensive overview of the history of the technology, starting with a discussion of the early studies of image-to-text matching and how the technology has evolved over time. In addition, a description of the datasets commonly used in experiments and a comparison of the evaluation results of each method are presented. We also introduce the implementation available on github for use in confirming the accuracy of experiments and for further improvement. We hope that this survey paper will encourage researchers to further develop their research on bridging images and languages.
[ { "created": "Sun, 16 May 2021 09:43:25 GMT", "version": "v1" }, { "created": "Tue, 28 Sep 2021 08:51:47 GMT", "version": "v2" } ]
2021-09-29
[ [ "Ueki", "Kazuya", "" ] ]
Visual-semantic embedding is an interesting research topic because it is useful for various tasks, such as visual question answering (VQA), image-text retrieval, image captioning, and scene graph generation. In this paper, we focus on zero-shot image retrieval using sentences as queries and present a survey of the technological trends in this area. First, we provide a comprehensive overview of the history of the technology, starting with a discussion of the early studies of image-to-text matching and how the technology has evolved over time. In addition, a description of the datasets commonly used in experiments and a comparison of the evaluation results of each method are presented. We also introduce the implementation available on github for use in confirming the accuracy of experiments and for further improvement. We hope that this survey paper will encourage researchers to further develop their research on bridging images and languages.
2207.09228
Shunta Maeda
Shunta Maeda
Image Super-Resolution with Deep Dictionary
ECCV 2022
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the first success of Dong et al., the deep-learning-based approach has become dominant in the field of single-image super-resolution. This replaces all the handcrafted image processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high/low-resolution dictionaries, the dictionaries in deep-learning-based methods are implicitly acquired as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance is degraded for images created differently from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), where a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of high-resolution dictionary makes the network more robust for out-of-domain test images while maintaining the performance of the in-domain test images.
[ { "created": "Tue, 19 Jul 2022 12:31:17 GMT", "version": "v1" } ]
2022-07-20
[ [ "Maeda", "Shunta", "" ] ]
Since the first success of Dong et al., the deep-learning-based approach has become dominant in the field of single-image super-resolution. This replaces all the handcrafted image processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high/low-resolution dictionaries, the dictionaries in deep-learning-based methods are implicitly acquired as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance is degraded for images created differently from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), where a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of high-resolution dictionary makes the network more robust for out-of-domain test images while maintaining the performance of the in-domain test images.
1807.05523
Dheryta Jaisinghani
Dheryta Jaisinghani, Vinayak Naik, Sanjit K. Kaul, Rajesh Balan, and Sumit Roy
Improving the Performance of WLANs by Reducing Unnecessary Active Scans
14 pages, TMC, NCC, 18 figures
null
null
null
cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of excessive and unnecessary active scans in heavily utilized WLANs during which low rate probe requests and responses are broadcast. These management frames severely impact the goodput. Our analysis of two production WLANs reveals that lesser number of non-overlapping channels in $2.4$ GHz makes it more prone to the effects of increased probe frames than $5$ GHz. We find that not only up to $90$% of probe responses carry redundant information but the probe traffic can be as high as $60$\% of the management traffic. Furthermore, active scanning severely impacts real-time applications at a client as it increases the latency by $91$ times. We present a detailed analysis of the impact of active scans on an individual client and the whole network. We discuss three ways to control the probe traffic in production WLANs -- access point configurations, network planning, and client modification. Our proposals for access point configuration are in line with current WLAN deployments, better network planning is device agnostic in nature, and client modification reduces the average number of probe requests per client by up to $50$% without hampering the ongoing WiFi connection.
[ { "created": "Sun, 15 Jul 2018 10:43:46 GMT", "version": "v1" } ]
2018-07-17
[ [ "Jaisinghani", "Dheryta", "" ], [ "Naik", "Vinayak", "" ], [ "Kaul", "Sanjit K.", "" ], [ "Balan", "Rajesh", "" ], [ "Roy", "Sumit", "" ] ]
We consider the problem of excessive and unnecessary active scans in heavily utilized WLANs during which low rate probe requests and responses are broadcast. These management frames severely impact the goodput. Our analysis of two production WLANs reveals that lesser number of non-overlapping channels in $2.4$ GHz makes it more prone to the effects of increased probe frames than $5$ GHz. We find that not only up to $90$% of probe responses carry redundant information but the probe traffic can be as high as $60$\% of the management traffic. Furthermore, active scanning severely impacts real-time applications at a client as it increases the latency by $91$ times. We present a detailed analysis of the impact of active scans on an individual client and the whole network. We discuss three ways to control the probe traffic in production WLANs -- access point configurations, network planning, and client modification. Our proposals for access point configuration are in line with current WLAN deployments, better network planning is device agnostic in nature, and client modification reduces the average number of probe requests per client by up to $50$% without hampering the ongoing WiFi connection.
2112.00933
Ju He
Ju He, Shuo Yang, Shaokang Yang, Adam Kortylewski, Xiaoding Yuan, Jie-Neng Chen, Shuai Liu, Cheng Yang, Qihang Yu, Alan Yuille
PartImageNet: A Large, High-Quality Dataset of Parts
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is natural to represent objects in terms of their parts. This has the potential to improve the performance of algorithms for object recognition and segmentation but can also help for downstream tasks like activity recognition. Research on part-based models, however, is hindered by the lack of datasets with per-pixel part annotations. This is partly due to the difficulty and high cost of annotating object parts so it has rarely been done except for humans (where there exists a big literature on part-based models). To help address this problem, we propose PartImageNet, a large, high-quality dataset with part segmentation annotations. It consists of $158$ classes from ImageNet with approximately $24,000$ images. PartImageNet is unique because it offers part-level annotations on a general set of classes including non-rigid, articulated objects, while having an order of magnitude larger size compared to existing part datasets (excluding datasets of humans). It can be utilized for many vision tasks including Object Segmentation, Semantic Part Segmentation, Few-shot Learning and Part Discovery. We conduct comprehensive experiments which study these tasks and set up a set of baselines. The dataset and scripts are released at https://github.com/TACJu/PartImageNet.
[ { "created": "Thu, 2 Dec 2021 02:12:03 GMT", "version": "v1" }, { "created": "Wed, 23 Mar 2022 06:13:10 GMT", "version": "v2" }, { "created": "Fri, 16 Dec 2022 19:18:33 GMT", "version": "v3" } ]
2022-12-20
[ [ "He", "Ju", "" ], [ "Yang", "Shuo", "" ], [ "Yang", "Shaokang", "" ], [ "Kortylewski", "Adam", "" ], [ "Yuan", "Xiaoding", "" ], [ "Chen", "Jie-Neng", "" ], [ "Liu", "Shuai", "" ], [ "Yang", "Cheng", "" ], [ "Yu", "Qihang", "" ], [ "Yuille", "Alan", "" ] ]
It is natural to represent objects in terms of their parts. This has the potential to improve the performance of algorithms for object recognition and segmentation but can also help for downstream tasks like activity recognition. Research on part-based models, however, is hindered by the lack of datasets with per-pixel part annotations. This is partly due to the difficulty and high cost of annotating object parts so it has rarely been done except for humans (where there exists a big literature on part-based models). To help address this problem, we propose PartImageNet, a large, high-quality dataset with part segmentation annotations. It consists of $158$ classes from ImageNet with approximately $24,000$ images. PartImageNet is unique because it offers part-level annotations on a general set of classes including non-rigid, articulated objects, while having an order of magnitude larger size compared to existing part datasets (excluding datasets of humans). It can be utilized for many vision tasks including Object Segmentation, Semantic Part Segmentation, Few-shot Learning and Part Discovery. We conduct comprehensive experiments which study these tasks and set up a set of baselines. The dataset and scripts are released at https://github.com/TACJu/PartImageNet.
1310.1507
Irene Guessarian
Patrick Cegielski, Serge Grigorieff, Irene Guessarian
Newton representation of functions over natural integers having integral difference ratios
null
null
null
null
cs.DM math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different questions lead to the same class of functions from natural integers to integers: those which have integral difference ratios, i.e. verifying $f(a)-f(b)\equiv0 \pmod {(a-b)}$ for all $a>b$. We characterize this class of functions via their representations as Newton series. This class, which obviously contains all polynomials with integral coefficients, also contains unexpected functions, for instance all functions $x\mapsto\lfloor e^{1/a}\;a^x\;x!\rfloor$, with $a\in\Z\setminus\{0,1\}$, and a function equal to $\lfloor e\;x!\rfloor$ except on 0. Finally, to study the complement class, we look at functions $\N\to\RR$ which are not uniformly close to any function having integral difference ratios.
[ { "created": "Sat, 5 Oct 2013 18:37:52 GMT", "version": "v1" } ]
2013-10-08
[ [ "Cegielski", "Patrick", "" ], [ "Grigorieff", "Serge", "" ], [ "Guessarian", "Irene", "" ] ]
Different questions lead to the same class of functions from natural integers to integers: those which have integral difference ratios, i.e. verifying $f(a)-f(b)\equiv0 \pmod {(a-b)}$ for all $a>b$. We characterize this class of functions via their representations as Newton series. This class, which obviously contains all polynomials with integral coefficients, also contains unexpected functions, for instance all functions $x\mapsto\lfloor e^{1/a}\;a^x\;x!\rfloor$, with $a\in\Z\setminus\{0,1\}$, and a function equal to $\lfloor e\;x!\rfloor$ except on 0. Finally, to study the complement class, we look at functions $\N\to\RR$ which are not uniformly close to any function having integral difference ratios.
2011.14660
Shuai Zhao
Shuai Zhao, Liguang Zhou, Wenxiao Wang, Deng Cai, Tin Lun Lam, Yangsheng Xu
Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training
accepted by T-IP 2022, code is at https://github.com/FreeformRobotics/Divide-and-Co-training
null
10.1109/TIP.2022.3201602
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The width of a neural network matters since increasing the width will necessarily increase the model capacity. However, the performance of a network does not improve linearly with the width and soon gets saturated. In this case, we argue that increasing the number of networks (ensemble) can achieve better accuracy-efficiency trade-offs than purely increasing the width. To prove it, one large network is divided into several small ones regarding its parameters and regularization components. Each of these small networks has a fraction of the original one's parameters. We then train these small networks together and make them see various views of the same data to increase their diversity. During this co-training process, networks can also learn from each other. As a result, small networks can achieve better ensemble performance than the large one with few or no extra parameters or FLOPs, \ie, achieving better accuracy-efficiency trade-offs. Small networks can also achieve faster inference speed than the large one by concurrent running. All of the above shows that the number of networks is a new dimension of model scaling. We validate our argument with 8 different neural architectures on common benchmarks through extensive experiments. The code is available at \url{https://github.com/FreeformRobotics/Divide-and-Co-training}.
[ { "created": "Mon, 30 Nov 2020 10:03:34 GMT", "version": "v1" }, { "created": "Tue, 29 Dec 2020 02:41:21 GMT", "version": "v2" }, { "created": "Sat, 20 Mar 2021 14:03:54 GMT", "version": "v3" }, { "created": "Tue, 6 Sep 2022 03:15:23 GMT", "version": "v4" } ]
2022-09-07
[ [ "Zhao", "Shuai", "" ], [ "Zhou", "Liguang", "" ], [ "Wang", "Wenxiao", "" ], [ "Cai", "Deng", "" ], [ "Lam", "Tin Lun", "" ], [ "Xu", "Yangsheng", "" ] ]
The width of a neural network matters since increasing the width will necessarily increase the model capacity. However, the performance of a network does not improve linearly with the width and soon gets saturated. In this case, we argue that increasing the number of networks (ensemble) can achieve better accuracy-efficiency trade-offs than purely increasing the width. To prove it, one large network is divided into several small ones regarding its parameters and regularization components. Each of these small networks has a fraction of the original one's parameters. We then train these small networks together and make them see various views of the same data to increase their diversity. During this co-training process, networks can also learn from each other. As a result, small networks can achieve better ensemble performance than the large one with few or no extra parameters or FLOPs, \ie, achieving better accuracy-efficiency trade-offs. Small networks can also achieve faster inference speed than the large one by concurrent running. All of the above shows that the number of networks is a new dimension of model scaling. We validate our argument with 8 different neural architectures on common benchmarks through extensive experiments. The code is available at \url{https://github.com/FreeformRobotics/Divide-and-Co-training}.
2301.03899
Rakesh Kumar
Truls Asheim, Boris Grot, Rakesh Kumar
A Storage-Effective BTB Organization for Servers
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many contemporary applications feature multi-megabyte instruction footprints that overwhelm the capacity of branch target buffers (BTB) and instruction caches (L1-I), causing frequent front-end stalls that inevitably hurt performance. BTB capacity is crucial for performance as a sufficiently large BTB enables the front-end to accurately resolve the upcoming execution path and steer instruction fetch appropriately. Moreover, it also enables highly effective fetch-directed instruction prefetching that can eliminate a large portion L1-I misses. For these reasons, commercial processors allocate vast amounts of storage capacity to BTBs. This work aims to reduce BTB storage requirements by optimizing the organization of BTB entries. Our key insight is that storing branch target offsets, instead of full or compressed targets, can drastically reduce BTB storage cost as the vast majority of dynamic branches have short offsets requiring just a handful of bits to encode. Based on this insight, we size the ways of a set associative BTB to hold different number of target offset bits such that each way stores offsets within a particular range. Doing so enables a dramatic reduction in storage for target addresses. Our final design, called BTB-X, uses an 8-way set associative BTB with differently sized ways that enables it to track about 2.24x more branches than a conventional BTB and 1.3x more branches than a storage-optimized state-of-the-art BTB organization, called PDede, with the same storage budget.
[ { "created": "Tue, 10 Jan 2023 10:52:19 GMT", "version": "v1" } ]
2023-01-11
[ [ "Asheim", "Truls", "" ], [ "Grot", "Boris", "" ], [ "Kumar", "Rakesh", "" ] ]
Many contemporary applications feature multi-megabyte instruction footprints that overwhelm the capacity of branch target buffers (BTB) and instruction caches (L1-I), causing frequent front-end stalls that inevitably hurt performance. BTB capacity is crucial for performance as a sufficiently large BTB enables the front-end to accurately resolve the upcoming execution path and steer instruction fetch appropriately. Moreover, it also enables highly effective fetch-directed instruction prefetching that can eliminate a large portion L1-I misses. For these reasons, commercial processors allocate vast amounts of storage capacity to BTBs. This work aims to reduce BTB storage requirements by optimizing the organization of BTB entries. Our key insight is that storing branch target offsets, instead of full or compressed targets, can drastically reduce BTB storage cost as the vast majority of dynamic branches have short offsets requiring just a handful of bits to encode. Based on this insight, we size the ways of a set associative BTB to hold different number of target offset bits such that each way stores offsets within a particular range. Doing so enables a dramatic reduction in storage for target addresses. Our final design, called BTB-X, uses an 8-way set associative BTB with differently sized ways that enables it to track about 2.24x more branches than a conventional BTB and 1.3x more branches than a storage-optimized state-of-the-art BTB organization, called PDede, with the same storage budget.
2102.02326
James Mou Ph.D.
James Mou, Jun Li
Effects of Number of Filters of Convolutional Layers on Speech Recognition Model Accuracy
8 pages, 9 figures, 3 tables, to be published in the Proc. of the 19th IEEE International Conference on Machine Learning and Applications, Page 971-978, 2020. DOI 10.1109/ICMLA51294.2020.00158. \c{opyright} 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising purposes
null
10.1109/ICMLA51294.2020.00158
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the progress of the End-to-End approach [1], this paper systematically studies the effects of Number of Filters of convolutional layers on the model prediction accuracy of CNN+RNN (Convolutional Neural Networks adding to Recurrent Neural Networks) for ASR Models (Automatic Speech Recognition). Experimental results show that only when the CNN Number of Filters exceeds a certain threshold value is adding CNN to RNN able to improve the performance of the CNN+RNN speech recognition model, otherwise some parameter ranges of CNN can render it useless to add the CNN to the RNN model. Our results show a strong dependency of word accuracy on the Number of Filters of convolutional layers. Based on the experimental results, the paper suggests a possible hypothesis of Sound-2-Vector Embedding (Convolutional Embedding) to explain the above observations. Based on this Embedding hypothesis and the optimization of parameters, the paper develops an End-to-End speech recognition system which has a high word accuracy but also has a light model-weight. The developed LVCSR (Large Vocabulary Continuous Speech Recognition) model has achieved quite a high word accuracy of 90.2% only by its Acoustic Model alone, without any assistance from intermediate phonetic representation and any Language Model. Its acoustic model contains only 4.4 million weight parameters, compared to the 35~68 million acoustic-model weight parameters in DeepSpeech2 [2] (one of the top state-of-the-art LVCSR models) which can achieve a word accuracy of 91.5%. The light-weighted model is good for improving the transcribing computing efficiency and also useful for mobile devices, Driverless Vehicles, etc. Our model weight is reduced to ~10% the size of DeepSpeech2, but our model accuracy remains close to that of DeepSpeech2. If combined with a Language Model, our LVCSR system is able to achieve 91.5% word accuracy.
[ { "created": "Wed, 3 Feb 2021 23:04:38 GMT", "version": "v1" } ]
2021-02-05
[ [ "Mou", "James", "" ], [ "Li", "Jun", "" ] ]
Inspired by the progress of the End-to-End approach [1], this paper systematically studies the effects of Number of Filters of convolutional layers on the model prediction accuracy of CNN+RNN (Convolutional Neural Networks adding to Recurrent Neural Networks) for ASR Models (Automatic Speech Recognition). Experimental results show that only when the CNN Number of Filters exceeds a certain threshold value is adding CNN to RNN able to improve the performance of the CNN+RNN speech recognition model, otherwise some parameter ranges of CNN can render it useless to add the CNN to the RNN model. Our results show a strong dependency of word accuracy on the Number of Filters of convolutional layers. Based on the experimental results, the paper suggests a possible hypothesis of Sound-2-Vector Embedding (Convolutional Embedding) to explain the above observations. Based on this Embedding hypothesis and the optimization of parameters, the paper develops an End-to-End speech recognition system which has a high word accuracy but also has a light model-weight. The developed LVCSR (Large Vocabulary Continuous Speech Recognition) model has achieved quite a high word accuracy of 90.2% only by its Acoustic Model alone, without any assistance from intermediate phonetic representation and any Language Model. Its acoustic model contains only 4.4 million weight parameters, compared to the 35~68 million acoustic-model weight parameters in DeepSpeech2 [2] (one of the top state-of-the-art LVCSR models) which can achieve a word accuracy of 91.5%. The light-weighted model is good for improving the transcribing computing efficiency and also useful for mobile devices, Driverless Vehicles, etc. Our model weight is reduced to ~10% the size of DeepSpeech2, but our model accuracy remains close to that of DeepSpeech2. If combined with a Language Model, our LVCSR system is able to achieve 91.5% word accuracy.
2302.07640
Guillem Bonafos
Guillem Bonafos, Pierre Pudlo, Jean-Marc Freyermuth, Thierry Legou, Jo\"el Fagot, Samuel Tron\c{c}on, Arnaud Rey
Detection and classification of vocal productions in large scale audio recordings
null
null
null
null
cs.SD cs.LG eess.AS stat.AP
http://creativecommons.org/licenses/by/4.0/
We propose an automatic data processing pipeline to extract vocal productions from large-scale natural audio recordings and classify these vocal productions. The pipeline is based on a deep neural network and adresses both issues simultaneously. Though a series of computationel steps (windowing, creation of a noise class, data augmentation, re-sampling, transfer learning, Bayesian optimisation), it automatically trains a neural network without requiring a large sample of labeled data and important computing resources. Our end-to-end methodology can handle noisy recordings made under different recording conditions. We test it on two different natural audio data sets, one from a group of Guinea baboons recorded from a primate research center and one from human babies recorded at home. The pipeline trains a model on 72 and 77 minutes of labeled audio recordings, with an accuracy of 94.58% and 99.76%. It is then used to process 443 and 174 hours of natural continuous recordings and it creates two new databases of 38.8 and 35.2 hours, respectively. We discuss the strengths and limitations of this approach that can be applied to any massive audio recording.
[ { "created": "Tue, 14 Feb 2023 14:07:09 GMT", "version": "v1" }, { "created": "Fri, 11 Aug 2023 17:50:41 GMT", "version": "v2" } ]
2023-08-14
[ [ "Bonafos", "Guillem", "" ], [ "Pudlo", "Pierre", "" ], [ "Freyermuth", "Jean-Marc", "" ], [ "Legou", "Thierry", "" ], [ "Fagot", "Joël", "" ], [ "Tronçon", "Samuel", "" ], [ "Rey", "Arnaud", "" ] ]
We propose an automatic data processing pipeline to extract vocal productions from large-scale natural audio recordings and classify these vocal productions. The pipeline is based on a deep neural network and adresses both issues simultaneously. Though a series of computationel steps (windowing, creation of a noise class, data augmentation, re-sampling, transfer learning, Bayesian optimisation), it automatically trains a neural network without requiring a large sample of labeled data and important computing resources. Our end-to-end methodology can handle noisy recordings made under different recording conditions. We test it on two different natural audio data sets, one from a group of Guinea baboons recorded from a primate research center and one from human babies recorded at home. The pipeline trains a model on 72 and 77 minutes of labeled audio recordings, with an accuracy of 94.58% and 99.76%. It is then used to process 443 and 174 hours of natural continuous recordings and it creates two new databases of 38.8 and 35.2 hours, respectively. We discuss the strengths and limitations of this approach that can be applied to any massive audio recording.
2105.01388
Nishant Rai
Nishant Rai, Aidas Liaudanskas, Srinivas Rao, Rodrigo Ortiz Cayon, Matteo Munaro, Stefan Holzer
Weak Multi-View Supervision for Surface Mapping Estimation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a weakly-supervised multi-view learning approach to learn category-specific surface mapping without dense annotations. We learn the underlying surface geometry of common categories, such as human faces, cars, and airplanes, given instances from those categories. While traditional approaches solve this problem using extensive supervision in the form of pixel-level annotations, we take advantage of the fact that pixel-level UV and mesh predictions can be combined with 3D reprojections to form consistency cycles. As a result of exploiting these cycles, we can establish a dense correspondence mapping between image pixels and the mesh acting as a self-supervisory signal, which in turn helps improve our overall estimates. Our approach leverages information from multiple views of the object to establish additional consistency cycles, thus improving surface mapping understanding without the need for explicit annotations. We also propose the use of deformation fields for predictions of an instance specific mesh. Given the lack of datasets providing multiple images of similar object instances from different viewpoints, we generate and release a multi-view ShapeNet Cars and Airplanes dataset created by rendering ShapeNet meshes using a 360 degree camera trajectory around the mesh. For the human faces category, we process and adapt an existing dataset to a multi-view setup. Through experimental evaluations, we show that, at test time, our method can generate accurate variations away from the mean shape, is multi-view consistent, and performs comparably to fully supervised approaches.
[ { "created": "Tue, 4 May 2021 09:46:26 GMT", "version": "v1" } ]
2021-05-05
[ [ "Rai", "Nishant", "" ], [ "Liaudanskas", "Aidas", "" ], [ "Rao", "Srinivas", "" ], [ "Cayon", "Rodrigo Ortiz", "" ], [ "Munaro", "Matteo", "" ], [ "Holzer", "Stefan", "" ] ]
We propose a weakly-supervised multi-view learning approach to learn category-specific surface mapping without dense annotations. We learn the underlying surface geometry of common categories, such as human faces, cars, and airplanes, given instances from those categories. While traditional approaches solve this problem using extensive supervision in the form of pixel-level annotations, we take advantage of the fact that pixel-level UV and mesh predictions can be combined with 3D reprojections to form consistency cycles. As a result of exploiting these cycles, we can establish a dense correspondence mapping between image pixels and the mesh acting as a self-supervisory signal, which in turn helps improve our overall estimates. Our approach leverages information from multiple views of the object to establish additional consistency cycles, thus improving surface mapping understanding without the need for explicit annotations. We also propose the use of deformation fields for predictions of an instance specific mesh. Given the lack of datasets providing multiple images of similar object instances from different viewpoints, we generate and release a multi-view ShapeNet Cars and Airplanes dataset created by rendering ShapeNet meshes using a 360 degree camera trajectory around the mesh. For the human faces category, we process and adapt an existing dataset to a multi-view setup. Through experimental evaluations, we show that, at test time, our method can generate accurate variations away from the mean shape, is multi-view consistent, and performs comparably to fully supervised approaches.
cs/0405081
Riccardo Pucella
Riccardo Pucella
An Analysis of Lambek's Production Machines
13 pages, 1 figure
RAIRO Informatique Theorique et Applications, 31 (5), pp. 483-497, 1997
null
null
cs.LO
null
Lambek's production machines may be used to generate and recognize sentences in a subset of the language described by a production grammar. We determine in this paper the subset of the language of a grammar generated and recognized by such machines.
[ { "created": "Sun, 23 May 2004 19:22:10 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pucella", "Riccardo", "" ] ]
Lambek's production machines may be used to generate and recognize sentences in a subset of the language described by a production grammar. We determine in this paper the subset of the language of a grammar generated and recognized by such machines.
2407.12866
Bingli Liao
Bingli Liao and Danilo Vasconcellos Vargas
Beyond KV Caching: Shared Attention for Efficient LLMs
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The efficiency of large language models (LLMs) remains a critical challenge, particularly in contexts where computational resources are limited. Traditional attention mechanisms in these models, while powerful, require significant computational and memory resources due to the necessity of recalculating and storing attention weights across different layers. This paper introduces a novel Shared Attention (SA) mechanism, designed to enhance the efficiency of LLMs by directly sharing computed attention weights across multiple layers. Unlike previous methods that focus on sharing intermediate Key-Value (KV) caches, our approach utilizes the isotropic tendencies of attention distributions observed in advanced LLMs post-pretraining to reduce both the computational flops and the size of the KV cache required during inference. We empirically demonstrate that implementing SA across various LLMs results in minimal accuracy loss on standard benchmarks. Our findings suggest that SA not only conserves computational resources but also maintains robust model performance, thereby facilitating the deployment of more efficient LLMs in resource-constrained environments.
[ { "created": "Sat, 13 Jul 2024 07:23:07 GMT", "version": "v1" } ]
2024-07-19
[ [ "Liao", "Bingli", "" ], [ "Vargas", "Danilo Vasconcellos", "" ] ]
The efficiency of large language models (LLMs) remains a critical challenge, particularly in contexts where computational resources are limited. Traditional attention mechanisms in these models, while powerful, require significant computational and memory resources due to the necessity of recalculating and storing attention weights across different layers. This paper introduces a novel Shared Attention (SA) mechanism, designed to enhance the efficiency of LLMs by directly sharing computed attention weights across multiple layers. Unlike previous methods that focus on sharing intermediate Key-Value (KV) caches, our approach utilizes the isotropic tendencies of attention distributions observed in advanced LLMs post-pretraining to reduce both the computational flops and the size of the KV cache required during inference. We empirically demonstrate that implementing SA across various LLMs results in minimal accuracy loss on standard benchmarks. Our findings suggest that SA not only conserves computational resources but also maintains robust model performance, thereby facilitating the deployment of more efficient LLMs in resource-constrained environments.
2004.05815
Zhaoqi Su
Zhaoqi Su and Weilin Wan and Tao Yu and Lingjie Liu and Lu Fang and Wenping Wang and Yebin Liu
MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera
null
null
10.1109/TVCG.2020.3027763
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning. The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively. For geometry reconstruction, we decompose the clothed human into multiple geometry layers, namely a body mesh layer and a garment piece layer. The key technique behind is a Garment-from-Video (GfV) method for optimizing the garment shape and reconstructing the dynamic cloth to fit the input video sequence, based on a cloth simulation model which is effectively solved with gradient descent. For texture rendering, we decompose each input image frame into a shading layer and an albedo layer, and propose a method for fusing a fixed albedo map and solving for detailed garment geometry using the shading layer. Compared with existing single view human performance capture systems, our "multi-layer" approach bypasses the tedious and time consuming scanning step for obtaining a human specific mesh template. Experimental results demonstrate that MulayCap produces realistic rendering of dynamically changing details that has not been achieved in any previous monocular video camera systems. Benefiting from its fully semantic modeling, MulayCap can be applied to various important editing applications, such as cloth editing, re-targeting, relighting, and AR applications.
[ { "created": "Mon, 13 Apr 2020 08:13:37 GMT", "version": "v1" }, { "created": "Sun, 19 Apr 2020 10:49:35 GMT", "version": "v2" }, { "created": "Thu, 1 Oct 2020 08:00:34 GMT", "version": "v3" } ]
2020-10-05
[ [ "Su", "Zhaoqi", "" ], [ "Wan", "Weilin", "" ], [ "Yu", "Tao", "" ], [ "Liu", "Lingjie", "" ], [ "Fang", "Lu", "" ], [ "Wang", "Wenping", "" ], [ "Liu", "Yebin", "" ] ]
We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning. The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively. For geometry reconstruction, we decompose the clothed human into multiple geometry layers, namely a body mesh layer and a garment piece layer. The key technique behind is a Garment-from-Video (GfV) method for optimizing the garment shape and reconstructing the dynamic cloth to fit the input video sequence, based on a cloth simulation model which is effectively solved with gradient descent. For texture rendering, we decompose each input image frame into a shading layer and an albedo layer, and propose a method for fusing a fixed albedo map and solving for detailed garment geometry using the shading layer. Compared with existing single view human performance capture systems, our "multi-layer" approach bypasses the tedious and time consuming scanning step for obtaining a human specific mesh template. Experimental results demonstrate that MulayCap produces realistic rendering of dynamically changing details that has not been achieved in any previous monocular video camera systems. Benefiting from its fully semantic modeling, MulayCap can be applied to various important editing applications, such as cloth editing, re-targeting, relighting, and AR applications.
1502.02942
Mitesh Jain
Mitesh Jain and Panagiotis Manolios
Skipping Refinement
Submitted to CAV 2015
null
null
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce skipping refinement, a new notion of correctness for reasoning about optimized reactive systems. Reasoning about reactive systems using refinement involves defining an abstract, high-level specification system and a concrete, low-level implementation system. One then shows that every behavior allowed by the implementation is also allowed by the specification. Due to the difference in abstraction levels, it is often the case that the implementation requires many steps to match one step of the specification, hence, it is quite useful for refinement to directly account for stuttering. Some optimized implementations, however, can actually take multiple specification steps at once. For example, a memory controller can buffer the commands to the memory and at a later time simultaneously update multiple memory locations, thereby skipping several observable states of the abstract specification, which only updates one memory location at a time. We introduce skipping simulation refinement and provide a sound and complete characterization consisting of "local" proof rules that are amenable to mechanization and automated verification. We present case studies that highlight the applicability of skipping refinement: a JVM-inspired stack machine, a simple memory controller and a scalar to vector compiler transformation. Our experimental results demonstrate that current model-checking and automated theorem proving tools have difficultly automatically analyzing these systems using existing notions of correctness, but they can analyze the systems if we use skipping refinement.
[ { "created": "Tue, 10 Feb 2015 15:16:50 GMT", "version": "v1" } ]
2015-02-11
[ [ "Jain", "Mitesh", "" ], [ "Manolios", "Panagiotis", "" ] ]
We introduce skipping refinement, a new notion of correctness for reasoning about optimized reactive systems. Reasoning about reactive systems using refinement involves defining an abstract, high-level specification system and a concrete, low-level implementation system. One then shows that every behavior allowed by the implementation is also allowed by the specification. Due to the difference in abstraction levels, it is often the case that the implementation requires many steps to match one step of the specification, hence, it is quite useful for refinement to directly account for stuttering. Some optimized implementations, however, can actually take multiple specification steps at once. For example, a memory controller can buffer the commands to the memory and at a later time simultaneously update multiple memory locations, thereby skipping several observable states of the abstract specification, which only updates one memory location at a time. We introduce skipping simulation refinement and provide a sound and complete characterization consisting of "local" proof rules that are amenable to mechanization and automated verification. We present case studies that highlight the applicability of skipping refinement: a JVM-inspired stack machine, a simple memory controller and a scalar to vector compiler transformation. Our experimental results demonstrate that current model-checking and automated theorem proving tools have difficultly automatically analyzing these systems using existing notions of correctness, but they can analyze the systems if we use skipping refinement.
1309.5139
EPTCS
Emanuele De Angelis (DEC, University `G. D'Annunzio', Pescara, Italy), Fabio Fioravanti (DEC, University `G. D'Annunzio', Pescara, Italy), Alberto Pettorossi (DICII, University of Rome Tor Vergata, Rome, Italy), Maurizio Proietti (IASI-CNR, Rome, Italy)
Verification of Imperative Programs by Constraint Logic Program Transformation
In Proceedings Festschrift for Dave Schmidt, arXiv:1309.4557
EPTCS 129, 2013, pp. 186-210
10.4204/EPTCS.129.12
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
[ { "created": "Fri, 20 Sep 2013 01:44:29 GMT", "version": "v1" } ]
2013-09-23
[ [ "De Angelis", "Emanuele", "", "DEC, University `G. D'Annunzio', Pescara, Italy" ], [ "Fioravanti", "Fabio", "", "DEC, University `G. D'Annunzio', Pescara, Italy" ], [ "Pettorossi", "Alberto", "", "DICII, University of Rome Tor Vergata, Rome, Italy" ], [ "Proietti", "Maurizio", "", "IASI-CNR, Rome, Italy" ] ]
We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
2312.06570
Maciej Grzeszczuk
Maciej Grzeszczuk, Kinga Skorupska
Preserving the Artifacts of the Early Digital Era: A Study of What, Why and How?
8 pages, 3 figures, 2 tables. To be published in 11th Machine Intelligence and Digital Interaction MIDI Conference proceedings
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we report the pilot results of a survey study (N=1036) related to social attitudes towards the early digital heritage. On the basis of the answers, we consider what constitutes early digital artifacts (EDA) and outline how knowledge about them can be useful. We explore attitudes toward the historical and cultural importance of various EDAs and chart the surveyed requirements for their successful and sustainable preservation for current and future generations.
[ { "created": "Mon, 11 Dec 2023 17:57:08 GMT", "version": "v1" } ]
2023-12-12
[ [ "Grzeszczuk", "Maciej", "" ], [ "Skorupska", "Kinga", "" ] ]
In this article, we report the pilot results of a survey study (N=1036) related to social attitudes towards the early digital heritage. On the basis of the answers, we consider what constitutes early digital artifacts (EDA) and outline how knowledge about them can be useful. We explore attitudes toward the historical and cultural importance of various EDAs and chart the surveyed requirements for their successful and sustainable preservation for current and future generations.
2106.07927
Boitumelo Ruf
Boitumelo Ruf, Jonas Mohrs, Martin Weinmann, Stefan Hinz, J\"urgen Beyerer
ReS2tAC -- UAV-Borne Real-Time SGM Stereo Optimized for Embedded ARM and CUDA Devices
null
Sensors 2021, 21, 3938
10.3390/s21113938
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
With the emergence of low-cost robotic systems, such as unmanned aerial vehicle, the importance of embedded high-performance image processing has increased. For a long time, FPGAs were the only processing hardware that were capable of high-performance computing, while at the same time preserving a low power consumption, essential for embedded systems. However, the recently increasing availability of embedded GPU-based systems, such as the NVIDIA Jetson series, comprised of an ARM CPU and a NVIDIA Tegra GPU, allows for massively parallel embedded computing on graphics hardware. With this in mind, we propose an approach for real-time embedded stereo processing on ARM and CUDA-enabled devices, which is based on the popular and widely used Semi-Global Matching algorithm. In this, we propose an optimization of the algorithm for embedded CUDA GPUs, by using massively parallel computing, as well as using the NEON intrinsics to optimize the algorithm for vectorized SIMD processing on embedded ARM CPUs. We have evaluated our approach with different configurations on two public stereo benchmark datasets to demonstrate that they can reach an error rate as low as 3.3%. Furthermore, our experiments show that the fastest configuration of our approach reaches up to 46 FPS on VGA image resolution. Finally, in a use-case specific qualitative evaluation, we have evaluated the power consumption of our approach and deployed it on the DJI Manifold 2-G attached to a DJI Matrix 210v2 RTK unmanned aerial vehicle (UAV), demonstrating its suitability for real-time stereo processing onboard a UAV.
[ { "created": "Tue, 15 Jun 2021 07:29:25 GMT", "version": "v1" } ]
2021-06-16
[ [ "Ruf", "Boitumelo", "" ], [ "Mohrs", "Jonas", "" ], [ "Weinmann", "Martin", "" ], [ "Hinz", "Stefan", "" ], [ "Beyerer", "Jürgen", "" ] ]
With the emergence of low-cost robotic systems, such as unmanned aerial vehicle, the importance of embedded high-performance image processing has increased. For a long time, FPGAs were the only processing hardware that were capable of high-performance computing, while at the same time preserving a low power consumption, essential for embedded systems. However, the recently increasing availability of embedded GPU-based systems, such as the NVIDIA Jetson series, comprised of an ARM CPU and a NVIDIA Tegra GPU, allows for massively parallel embedded computing on graphics hardware. With this in mind, we propose an approach for real-time embedded stereo processing on ARM and CUDA-enabled devices, which is based on the popular and widely used Semi-Global Matching algorithm. In this, we propose an optimization of the algorithm for embedded CUDA GPUs, by using massively parallel computing, as well as using the NEON intrinsics to optimize the algorithm for vectorized SIMD processing on embedded ARM CPUs. We have evaluated our approach with different configurations on two public stereo benchmark datasets to demonstrate that they can reach an error rate as low as 3.3%. Furthermore, our experiments show that the fastest configuration of our approach reaches up to 46 FPS on VGA image resolution. Finally, in a use-case specific qualitative evaluation, we have evaluated the power consumption of our approach and deployed it on the DJI Manifold 2-G attached to a DJI Matrix 210v2 RTK unmanned aerial vehicle (UAV), demonstrating its suitability for real-time stereo processing onboard a UAV.
1701.04208
Anastasios Noulas Anastasios Noulas
Anastasios Noulas, Vsevolod Salnikov, Desislava Hristova, Cecilia Mascolo, Renaud Lambiotte
Developing and Deploying a Taxi Price Comparison Mobile App in the Wild: Insights and Challenges
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As modern transportation systems become more complex, there is need for mobile applications that allow travelers to navigate efficiently in cities. In taxi transport the recent proliferation of Uber has introduced new norms including a flexible pricing scheme where journey costs can change rapidly depending on passenger demand and driver supply. To make informed choices on the most appropriate provider for their journeys, travelers need access to knowledge about provider pricing in real time. To this end, we developed OpenStreetcab a mobile application that offers advice on taxi transport comparing provider prices. We describe its development and deployment in two cities, London and New York, and analyse thousands of user journey queries to compare the price patterns of Uber against major local taxi providers. We have observed large heterogeneity across the taxi transport markets in the two cities. This motivated us to perform a price validation and measurement experiment on the ground comparing Uber and Black Cabs in London. The experimental results reveal interesting insights: not only they confirm feedback on pricing and service quality received by professional drivers users, but also they reveal the tradeoffs between prices and journey times between taxi providers. With respect to journey times in particular, we show how experienced taxi drivers, in the majority of the cases, are able to navigate faster to a destination compared to drivers who rely on modern navigation systems. We provide evidence that this advantage becomes stronger in the centre of a city where urban density is high.
[ { "created": "Mon, 16 Jan 2017 09:15:38 GMT", "version": "v1" } ]
2017-01-17
[ [ "Noulas", "Anastasios", "" ], [ "Salnikov", "Vsevolod", "" ], [ "Hristova", "Desislava", "" ], [ "Mascolo", "Cecilia", "" ], [ "Lambiotte", "Renaud", "" ] ]
As modern transportation systems become more complex, there is need for mobile applications that allow travelers to navigate efficiently in cities. In taxi transport the recent proliferation of Uber has introduced new norms including a flexible pricing scheme where journey costs can change rapidly depending on passenger demand and driver supply. To make informed choices on the most appropriate provider for their journeys, travelers need access to knowledge about provider pricing in real time. To this end, we developed OpenStreetcab a mobile application that offers advice on taxi transport comparing provider prices. We describe its development and deployment in two cities, London and New York, and analyse thousands of user journey queries to compare the price patterns of Uber against major local taxi providers. We have observed large heterogeneity across the taxi transport markets in the two cities. This motivated us to perform a price validation and measurement experiment on the ground comparing Uber and Black Cabs in London. The experimental results reveal interesting insights: not only they confirm feedback on pricing and service quality received by professional drivers users, but also they reveal the tradeoffs between prices and journey times between taxi providers. With respect to journey times in particular, we show how experienced taxi drivers, in the majority of the cases, are able to navigate faster to a destination compared to drivers who rely on modern navigation systems. We provide evidence that this advantage becomes stronger in the centre of a city where urban density is high.
2210.07916
Shuguang Chen
Shuguang Chen, Leonardo Neves, Thamar Solorio
Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition
To appear at EMNLP 2022 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we take the named entity recognition task in the English language as a case study and explore style transfer as a data augmentation method to increase the size and diversity of training data in low-resource scenarios. We propose a new method to effectively transform the text from a high-resource domain to a low-resource domain by changing its style-related attributes to generate synthetic data for training. Moreover, we design a constrained decoding algorithm along with a set of key ingredients for data selection to guarantee the generation of valid and coherent data. Experiments and analysis on five different domain pairs under different data regimes demonstrate that our approach can significantly improve results compared to current state-of-the-art data augmentation methods. Our approach is a practical solution to data scarcity, and we expect it to be applicable to other NLP tasks.
[ { "created": "Fri, 14 Oct 2022 16:02:03 GMT", "version": "v1" } ]
2022-10-17
[ [ "Chen", "Shuguang", "" ], [ "Neves", "Leonardo", "" ], [ "Solorio", "Thamar", "" ] ]
In this work, we take the named entity recognition task in the English language as a case study and explore style transfer as a data augmentation method to increase the size and diversity of training data in low-resource scenarios. We propose a new method to effectively transform the text from a high-resource domain to a low-resource domain by changing its style-related attributes to generate synthetic data for training. Moreover, we design a constrained decoding algorithm along with a set of key ingredients for data selection to guarantee the generation of valid and coherent data. Experiments and analysis on five different domain pairs under different data regimes demonstrate that our approach can significantly improve results compared to current state-of-the-art data augmentation methods. Our approach is a practical solution to data scarcity, and we expect it to be applicable to other NLP tasks.
2305.15692
Sarah Wang
Zihan Wang, Yang Yang, Zhi Liu, Yifan Zheng
Deep Neural Networks in Video Human Action Recognition: A Review
null
null
null
null
cs.CV cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Currently, video behavior recognition is one of the most foundational tasks of computer vision. The 2D neural networks of deep learning are built for recognizing pixel-level information such as images with RGB, RGB-D, or optical flow formats, with the current increasingly wide usage of surveillance video and more tasks related to human action recognition. There are increasing tasks requiring temporal information for frames dependency analysis. The researchers have widely studied video-based recognition rather than image-based(pixel-based) only to extract more informative elements from geometry tasks. Our current related research addresses multiple novel proposed research works and compares their advantages and disadvantages between the derived deep learning frameworks rather than machine learning frameworks. The comparison happened between existing frameworks and datasets, which are video format data only. Due to the specific properties of human actions and the increasingly wide usage of deep neural networks, we collected all research works within the last three years between 2020 to 2022. In our article, the performance of deep neural networks surpassed most of the techniques in the feature learning and extraction tasks, especially video action recognition.
[ { "created": "Thu, 25 May 2023 03:54:41 GMT", "version": "v1" } ]
2023-05-26
[ [ "Wang", "Zihan", "" ], [ "Yang", "Yang", "" ], [ "Liu", "Zhi", "" ], [ "Zheng", "Yifan", "" ] ]
Currently, video behavior recognition is one of the most foundational tasks of computer vision. The 2D neural networks of deep learning are built for recognizing pixel-level information such as images with RGB, RGB-D, or optical flow formats, with the current increasingly wide usage of surveillance video and more tasks related to human action recognition. There are increasing tasks requiring temporal information for frames dependency analysis. The researchers have widely studied video-based recognition rather than image-based(pixel-based) only to extract more informative elements from geometry tasks. Our current related research addresses multiple novel proposed research works and compares their advantages and disadvantages between the derived deep learning frameworks rather than machine learning frameworks. The comparison happened between existing frameworks and datasets, which are video format data only. Due to the specific properties of human actions and the increasingly wide usage of deep neural networks, we collected all research works within the last three years between 2020 to 2022. In our article, the performance of deep neural networks surpassed most of the techniques in the feature learning and extraction tasks, especially video action recognition.
1803.07856
Giuseppe Iaffaldano
Giuseppe Iaffaldano
Investigating Collaboration Within Online Communities: Software Development Vs. Artistic Creation
GROUP 2018, Doctoral Colloquium, January 7-10, 2018, Sanibel Island, FL, USA
Proceedings of the 2018 ACM Conference on Supporting Groupwork
10.1145/3148330.3152699
null
cs.SI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online creative communities have been able to develop large, open source software (OSS) projects like Linux and Firefox throughout the successful collaborations carried out over the Internet. These communities have also expanded to creative arts domains such as animation, video games, and music. Despite their growing popularity, the factors that lead to successful collaborations in these communities are not entirely understood. In the following, I describe my PhD research project aimed at improving communication, collaboration, and retention in creative arts communities, starting from the experience gained from the literature about OSS communities.
[ { "created": "Wed, 21 Mar 2018 11:06:48 GMT", "version": "v1" } ]
2018-03-22
[ [ "Iaffaldano", "Giuseppe", "" ] ]
Online creative communities have been able to develop large, open source software (OSS) projects like Linux and Firefox throughout the successful collaborations carried out over the Internet. These communities have also expanded to creative arts domains such as animation, video games, and music. Despite their growing popularity, the factors that lead to successful collaborations in these communities are not entirely understood. In the following, I describe my PhD research project aimed at improving communication, collaboration, and retention in creative arts communities, starting from the experience gained from the literature about OSS communities.
2303.03686
Karan Muvvala
Karan Muvvala and Morteza Lahijanian
Efficient Symbolic Approaches for Quantitative Reactive Synthesis with Finite Tasks
Accepted to IROS 2023
null
null
null
cs.RO cs.FL cs.GT cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work introduces efficient symbolic algorithms for quantitative reactive synthesis. We consider resource-constrained robotic manipulators that need to interact with a human to achieve a complex task expressed in linear temporal logic. Our framework generates reactive strategies that not only guarantee task completion but also seek cooperation with the human when possible. We model the interaction as a two-player game and consider regret-minimizing strategies to encourage cooperation. We use symbolic representation of the game to enable scalability. For synthesis, we first introduce value iteration algorithms for such games with min-max objectives. Then, we extend our method to the regret-minimizing objectives. Our benchmarks reveal that our symbolic framework not only significantly improves computation time (up to an order of magnitude) but also can scale up to much larger instances of manipulation problems with up to 2x number of objects and locations than the state of the art.
[ { "created": "Tue, 7 Mar 2023 07:08:20 GMT", "version": "v1" }, { "created": "Mon, 13 Mar 2023 13:29:40 GMT", "version": "v2" }, { "created": "Mon, 7 Aug 2023 19:24:53 GMT", "version": "v3" } ]
2023-08-09
[ [ "Muvvala", "Karan", "" ], [ "Lahijanian", "Morteza", "" ] ]
This work introduces efficient symbolic algorithms for quantitative reactive synthesis. We consider resource-constrained robotic manipulators that need to interact with a human to achieve a complex task expressed in linear temporal logic. Our framework generates reactive strategies that not only guarantee task completion but also seek cooperation with the human when possible. We model the interaction as a two-player game and consider regret-minimizing strategies to encourage cooperation. We use symbolic representation of the game to enable scalability. For synthesis, we first introduce value iteration algorithms for such games with min-max objectives. Then, we extend our method to the regret-minimizing objectives. Our benchmarks reveal that our symbolic framework not only significantly improves computation time (up to an order of magnitude) but also can scale up to much larger instances of manipulation problems with up to 2x number of objects and locations than the state of the art.
2107.02983
Gihan Dias
Upuli Liyanapathirana, Kaumini Gunasinghe, Gihan Dias
SinSpell: A Comprehensive Spelling Checker for Sinhala
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We have built SinSpell, a comprehensive spelling checker for the Sinhala language which is spoken by over 16 million people, mainly in Sri Lanka. However, until recently, Sinhala had no spelling checker with acceptable coverage. Sinspell is still the only open source Sinhala spelling checker. SinSpell identifies possible spelling errors and suggests corrections. It also contains a module which auto-corrects evident errors. To maintain accuracy, SinSpell was designed as a rule-based system based on Hunspell. A set of words was compiled from several sources and verified. These were divided into morphological classes, and the valid roots, suffixes and prefixes for each class were identified, together with lists of irregular words and exceptions. The errors in a corpus of Sinhala documents were analysed and commonly misspelled words and types of common errors were identified. We found that the most common errors were in vowel length and similar sounding letters. Errors due to incorrect typing and encoding were also found. This analysis was used to develop the suggestion generator and auto-corrector.
[ { "created": "Wed, 7 Jul 2021 02:36:43 GMT", "version": "v1" } ]
2021-07-08
[ [ "Liyanapathirana", "Upuli", "" ], [ "Gunasinghe", "Kaumini", "" ], [ "Dias", "Gihan", "" ] ]
We have built SinSpell, a comprehensive spelling checker for the Sinhala language which is spoken by over 16 million people, mainly in Sri Lanka. However, until recently, Sinhala had no spelling checker with acceptable coverage. Sinspell is still the only open source Sinhala spelling checker. SinSpell identifies possible spelling errors and suggests corrections. It also contains a module which auto-corrects evident errors. To maintain accuracy, SinSpell was designed as a rule-based system based on Hunspell. A set of words was compiled from several sources and verified. These were divided into morphological classes, and the valid roots, suffixes and prefixes for each class were identified, together with lists of irregular words and exceptions. The errors in a corpus of Sinhala documents were analysed and commonly misspelled words and types of common errors were identified. We found that the most common errors were in vowel length and similar sounding letters. Errors due to incorrect typing and encoding were also found. This analysis was used to develop the suggestion generator and auto-corrector.
2209.09572
Mikhail Kiselev
Mikhail Kiselev
A Spiking Neural Network Learning Markov Chain
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, the question how spiking neural network (SNN) learns and fixes in its internal structures a model of external world dynamics is explored. This question is important for implementation of the model-based reinforcement learning (RL), the realistic RL regime where the decisions made by SNN and their evaluation in terms of reward/punishment signals may be separated by significant time interval and sequence of intermediate evaluation-neutral world states. In the present work, I formalize world dynamics as a Markov chain with unknown a priori state transition probabilities, which should be learnt by the network. To make this problem formulation more realistic, I solve it in continuous time, so that duration of every state in the Markov chain may be different and is unknown. It is demonstrated how this task can be accomplished by an SNN with specially designed structure and local synaptic plasticity rules. As an example, we show how this network motif works in the simple but non-trivial world where a ball moves inside a square box and bounces from its walls with a random new direction and velocity.
[ { "created": "Tue, 20 Sep 2022 09:31:01 GMT", "version": "v1" } ]
2022-09-21
[ [ "Kiselev", "Mikhail", "" ] ]
In this paper, the question how spiking neural network (SNN) learns and fixes in its internal structures a model of external world dynamics is explored. This question is important for implementation of the model-based reinforcement learning (RL), the realistic RL regime where the decisions made by SNN and their evaluation in terms of reward/punishment signals may be separated by significant time interval and sequence of intermediate evaluation-neutral world states. In the present work, I formalize world dynamics as a Markov chain with unknown a priori state transition probabilities, which should be learnt by the network. To make this problem formulation more realistic, I solve it in continuous time, so that duration of every state in the Markov chain may be different and is unknown. It is demonstrated how this task can be accomplished by an SNN with specially designed structure and local synaptic plasticity rules. As an example, we show how this network motif works in the simple but non-trivial world where a ball moves inside a square box and bounces from its walls with a random new direction and velocity.
2312.15999
Jianyu Xu
Jianyu Xu, Yu-Xiang Wang
Pricing with Contextual Elasticity and Heteroscedastic Valuation
29 pages
null
null
null
cs.LG econ.EM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study an online contextual dynamic pricing problem, where customers decide whether to purchase a product based on its features and price. We introduce a novel approach to modeling a customer's expected demand by incorporating feature-based price elasticity, which can be equivalently represented as a valuation with heteroscedastic noise. To solve the problem, we propose a computationally efficient algorithm called "Pricing with Perturbation (PwP)", which enjoys an $O(\sqrt{dT\log T})$ regret while allowing arbitrary adversarial input context sequences. We also prove a matching lower bound at $\Omega(\sqrt{dT})$ to show the optimality regarding $d$ and $T$ (up to $\log T$ factors). Our results shed light on the relationship between contextual elasticity and heteroscedastic valuation, providing insights for effective and practical pricing strategies.
[ { "created": "Tue, 26 Dec 2023 11:07:37 GMT", "version": "v1" } ]
2023-12-27
[ [ "Xu", "Jianyu", "" ], [ "Wang", "Yu-Xiang", "" ] ]
We study an online contextual dynamic pricing problem, where customers decide whether to purchase a product based on its features and price. We introduce a novel approach to modeling a customer's expected demand by incorporating feature-based price elasticity, which can be equivalently represented as a valuation with heteroscedastic noise. To solve the problem, we propose a computationally efficient algorithm called "Pricing with Perturbation (PwP)", which enjoys an $O(\sqrt{dT\log T})$ regret while allowing arbitrary adversarial input context sequences. We also prove a matching lower bound at $\Omega(\sqrt{dT})$ to show the optimality regarding $d$ and $T$ (up to $\log T$ factors). Our results shed light on the relationship between contextual elasticity and heteroscedastic valuation, providing insights for effective and practical pricing strategies.