Dataset schema (column: dtype, observed value range):
id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
1804.00702
Rodrigo Bruno
Rodrigo Bruno, Duarte Patrício, José Simão, Luís Veiga and Paulo Ferreira
ROLP: Runtime Object Lifetime Profiling for Big Data Memory Management
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Low latency services such as credit-card fraud detection and website targeted advertisement rely on Big Data platforms (e.g., Lucene, GraphChi, Cassandra) which run on top of memory-managed runtimes, such as the JVM. These platforms, however, suffer from unpredictable and unacceptably high pause times due to inadequate memory management decisions (e.g., allocating objects with very different lifetimes next to each other, resulting in memory fragmentation). This leads to long and frequent application pause times, breaking Service Level Agreements (SLAs). This problem has been previously identified, and results show that current memory management techniques are ill-suited for applications that hold in memory massive amounts of middle- to long-lived objects (which is the case for a wide spectrum of Big Data applications). Previous works try to reduce such application pauses by allocating objects off-heap or in special allocation regions/generations, thus alleviating the pressure on memory management. However, all these solutions require a combination of programmer effort and knowledge, source code access, or off-line profiling, with clear negative impact on programmer productivity and/or application performance. This paper presents ROLP, a runtime object lifetime profiling system. ROLP profiles application code at runtime in order to identify which allocation contexts create objects with middle to long lifetimes, given that such objects need to be handled differently (compared to short-lived ones). This profiling information greatly improves memory management decisions, leading to reductions in long-tail latencies of up to 51% for Lucene, 85% for GraphChi, and 60% for Cassandra, with negligible throughput and memory overhead. ROLP is implemented for the OpenJDK 8 HotSpot JVM and does not require any programmer effort or source code access.
[ { "created": "Fri, 9 Mar 2018 16:53:44 GMT", "version": "v1" } ]
2018-04-04
[ [ "Bruno", "Rodrigo", "" ], [ "Patrício", "Duarte", "" ], [ "Simão", "José", "" ], [ "Veiga", "Luís", "" ], [ "Ferreira", "Paulo", "" ] ]
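The profiling idea in the ROLP abstract above, attributing observed object lifetimes to the allocation contexts that created them, can be sketched as follows. This is a hypothetical Python illustration, not ROLP's actual in-JVM implementation; the class, the context names, and the median-based threshold are all invented for the example.

```python
from collections import defaultdict

# Hypothetical sketch: bucket observed object lifetimes by allocation context,
# so that a collector could place middle/long-lived allocations separately.
class LifetimeProfiler:
    def __init__(self, long_lived_threshold):
        self.threshold = long_lived_threshold  # lifetime in survived GC cycles
        self.samples = defaultdict(list)       # context -> observed lifetimes

    def record(self, context, survived_cycles):
        self.samples[context].append(survived_cycles)

    def long_lived_contexts(self):
        # Contexts whose median observed lifetime reaches the threshold.
        result = set()
        for ctx, lifetimes in self.samples.items():
            ordered = sorted(lifetimes)
            median = ordered[len(ordered) // 2]
            if median >= self.threshold:
                result.add(ctx)
        return result

profiler = LifetimeProfiler(long_lived_threshold=3)
for _ in range(5):
    profiler.record("Cache.put", 8)   # survives many collections
    profiler.record("tmpBuffer", 0)   # dies young
hot = profiler.long_lived_contexts()
```

A runtime using such a profile could then redirect allocations from the flagged contexts into a dedicated region, which is the kind of decision the abstract says ROLP informs.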
2202.08400
Changxi You
Changxi You
Real Time Motion Planning Using Constrained Iterative Linear Quadratic Regulator for On-Road Self-Driving
10 pages with 10 figures and 2 tables
null
null
null
cs.RO math.OC
http://creativecommons.org/licenses/by/4.0/
Collision avoidance is one of the most challenging tasks to consider in developing self-driving technology. In this paper we propose a new spatiotemporal motion planning algorithm that efficiently solves a constrained nonlinear optimal control problem using the iterative linear quadratic regulator (iLQR). The algorithm takes into account the uncertain driving behaviors of the traffic vehicles and minimizes the collision risks between the self-driving vehicle (referred to as the "ego" vehicle) and the traffic vehicles, so that the ego vehicle maintains sufficiently large distances to all surrounding vehicles while achieving the desired collision avoidance maneuver in traffic. To this end, we introduce the concept of the "collision polygon" for computing the minimum distances between the ego vehicle and the traffic vehicles, and provide two different solutions for designing the constraints of the motion planning problem by properly modeling the behaviors of the traffic vehicles in order to evaluate the collision risk. Finally, the iLQR motion planning algorithm is validated in multiple real-time collision avoidance tasks using both a simulator and a level-3 autonomous driving test platform.
[ { "created": "Thu, 17 Feb 2022 01:50:44 GMT", "version": "v1" } ]
2022-02-18
[ [ "You", "Changxi", "" ] ]
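iLQR, mentioned in the abstract above, repeatedly linearizes the dynamics and solves a finite-horizon LQR subproblem via a backward Riccati recursion. A minimal scalar sketch of that backward pass follows; this is textbook unconstrained LQR, not the paper's constrained variant, and all system values are illustrative.

```python
def lqr_backward_scalar(a, b, q, r, qf, horizon):
    """Finite-horizon discrete LQR for x' = a*x + b*u, stage cost
    q*x^2 + r*u^2, terminal weight qf. Returns feedback gains k_t
    (so u_t = -k_t * x_t) and the converged cost-to-go coefficient."""
    p = qf
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal gain at this stage
        p = q + a * p * a - a * p * b * k   # Riccati cost-to-go update
        gains.append(k)
    gains.reverse()  # gains[0] applies at t = 0
    return gains, p

gains, cost_to_go = lqr_backward_scalar(a=1.0, b=1.0, q=1.0, r=1.0,
                                        qf=1.0, horizon=50)

# Rolling out the closed loop drives the state toward zero.
x = 1.0
for k in gains:
    u = -k * x
    x = 1.0 * x + 1.0 * u
```

In iLQR this recursion would run once per outer iteration around the current trajectory, with `a`, `b` replaced by local linearizations.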
2312.00561
Tingting Ni
Tingting Ni, Maryam Kamgarpour
A safe exploration approach to constrained Markov decision processes
37 pages, 3 figures
null
null
null
cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
We consider discounted infinite horizon constrained Markov decision processes (CMDPs) where the goal is to find an optimal policy that maximizes the expected cumulative reward subject to expected cumulative constraints. Motivated by the application of CMDPs in online learning of safety-critical systems, we focus on developing a model-free and simulator-free algorithm that ensures constraint satisfaction during learning. To this end, we develop an interior point approach based on the log barrier function of the CMDP. Under the commonly assumed conditions of Fisher non-degeneracy and bounded transfer error of the policy parameterization, we establish the theoretical properties of the algorithm. In particular, in contrast to existing CMDP approaches that ensure policy feasibility only upon convergence, our algorithm guarantees the feasibility of the policies during the learning process and converges to the $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. In comparison to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA, our algorithm requires an additional $\mathcal{O}(\varepsilon^{-2})$ samples to ensure policy feasibility during learning with the same Fisher non-degenerate parameterization.
[ { "created": "Fri, 1 Dec 2023 13:16:39 GMT", "version": "v1" }, { "created": "Thu, 23 May 2024 14:20:16 GMT", "version": "v2" } ]
2024-05-24
[ [ "Ni", "Tingting", "" ], [ "Kamgarpour", "Maryam", "" ] ]
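The log-barrier mechanism from the abstract above can be illustrated on a toy constrained problem: ascending F(x) = f(x) + (1/η) log(b − g(x)) keeps every iterate strictly feasible, because the barrier term diverges at the constraint boundary. The sketch below is a one-dimensional caricature, not the paper's CMDP algorithm; the objective, η, and step size are all illustrative.

```python
def barrier_ascent(eta, steps, lr, x0=0.0):
    """Maximize f(x) = x subject to g(x) = x^2 <= 1 by gradient ascent on
    the log-barrier objective F(x) = x + (1/eta) * log(1 - x^2).
    Starting from a strictly feasible point, iterates stay feasible."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        # d/dx [x + (1/eta) * log(1 - x^2)] = 1 - (1/eta) * 2x / (1 - x^2)
        grad = 1.0 + (1.0 / eta) * (-2.0 * x / (1.0 - x * x))
        x = x + lr * grad
        trajectory.append(x)
    return trajectory

traj = barrier_ascent(eta=100.0, steps=2000, lr=0.01)
```

The iterates approach the constrained optimum x = 1 from inside the feasible region, mirroring the abstract's claim of feasibility during learning rather than only at convergence.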
2309.10776
Eduard Fosch-Villaronga
Andreas Hauselmann, Alan M. Sears, Lex Zard and Eduard Fosch-Villaronga
EU law and emotion data
8 pages, 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII)
null
null
null
cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
This article sheds light on legal implications and challenges surrounding emotion data processing within the EU's legal framework. Despite the sensitive nature of emotion data, the GDPR does not categorize it as special data, resulting in a lack of comprehensive protection. The article also discusses the nuances of different approaches to affective computing and their relevance to the processing of special data under the GDPR. Moreover, it points to potential tensions with data protection principles, such as fairness and accuracy. Our article also highlights some of the consequences, including harm, that processing of emotion data may have for individuals concerned. Additionally, we discuss how the AI Act proposal intends to regulate affective computing. Finally, the article outlines the new obligations and transparency requirements introduced by the DSA for online platforms utilizing emotion data. Our article aims at raising awareness among the affective computing community about the applicable legal requirements when developing AC systems intended for the EU market, or when working with study participants located in the EU. We also stress the importance of protecting the fundamental rights of individuals even when the law struggles to keep up with technological developments that capture sensitive emotion data.
[ { "created": "Tue, 19 Sep 2023 17:25:02 GMT", "version": "v1" } ]
2023-09-20
[ [ "Hauselmann", "Andreas", "" ], [ "Sears", "Alan M.", "" ], [ "Zard", "Lex", "" ], [ "Fosch-Villaronga", "Eduard", "" ] ]
1010.4108
Lorenzo Orecchia
Lorenzo Orecchia and Nisheeth K. Vishnoi
Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition
To appear in SODA 2011
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E),$ a balance parameter $b \in (0,1/2]$ and a target conductance value $\gamma \in (0,1).$ The output is a cut which, if non-empty, is of conductance at most $O(f),$ for some function $f(G, \gamma),$ and which is either balanced or well correlated with all cuts of conductance at most $\gamma.$ Spielman and Teng gave an $\tilde{O}(|E|/\gamma^{2})$-time algorithm for $f= \sqrt{\gamma \log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\tilde{O}(|E|/\gamma)$ for $f=\sqrt{\gamma}.$ Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite programming (SDP) approach. We first consider a natural SDP relaxation for the Balanced Separator problem. While it is easy to obtain from this SDP a certificate of the fact that the graph has no balanced cut of conductance less than $\gamma,$ somewhat surprisingly, we can obtain a certificate for the stronger correlation condition. This is achieved via a novel separation oracle for our SDP and by appealing to Arora and Kale's framework to bound the running time. Our result contains technical ingredients that may be of independent interest.
[ { "created": "Wed, 20 Oct 2010 06:37:28 GMT", "version": "v1" } ]
2010-10-21
[ [ "Orecchia", "Lorenzo", "" ], [ "Vishnoi", "Nisheeth K.", "" ] ]
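Conductance, the central quantity in the abstract above, is the number of edges leaving a vertex set divided by the smaller of the two side volumes (volume = sum of degrees). A small self-contained sketch; the graph and cut are illustrative.

```python
def conductance(adj, cut):
    """Conductance of a vertex set S in an undirected graph given as an
    adjacency-list dict: edges crossing the cut divided by
    min(vol(S), vol(V \\ S)), where vol is the sum of degrees."""
    crossing = 0
    vol_s = 0
    vol_rest = 0
    for u, neighbors in adj.items():
        deg = len(neighbors)
        if u in cut:
            vol_s += deg
            # Count each crossing edge once, from its endpoint inside S.
            crossing += sum(1 for v in neighbors if v not in cut)
        else:
            vol_rest += deg
    return crossing / min(vol_s, vol_rest)

# 4-cycle 0-1-2-3-0: cutting it into two paths severs 2 of the 8
# degree-endpoints on each side, so the conductance is 2/4 = 0.5.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
phi = conductance(cycle, {0, 1})
```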
2111.07753
Saif Sidhik
Saif Sidhik, Mohan Sridharan, Dirk Ruiken
An Adaptive Framework for Reliable Trajectory Following in Changing-Contact Robot Manipulation Tasks
21 pages including references
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We describe a framework for changing-contact robot manipulation tasks that require the robot to make and break contacts with objects and surfaces. The discontinuous interaction dynamics of such tasks make it difficult to construct and use a single dynamics model or control strategy, and the highly non-linear nature of the dynamics during contact changes can be damaging to the robot and the objects. We present an adaptive control framework that enables the robot to incrementally learn to predict contact changes in a changing-contact task, learn the interaction dynamics of the piecewise-continuous system, and provide smooth and accurate trajectory tracking using a task-space variable impedance controller. We experimentally compare the performance of our framework against that of representative control methods to establish that the adaptive control and incremental learning components of our framework are needed to achieve smooth control in the presence of discontinuous dynamics in changing-contact robot manipulation tasks.
[ { "created": "Mon, 15 Nov 2021 13:54:38 GMT", "version": "v1" } ]
2021-11-16
[ [ "Sidhik", "Saif", "" ], [ "Sridharan", "Mohan", "" ], [ "Ruiken", "Dirk", "" ] ]
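The task-space variable impedance controller mentioned above can be caricatured in one axis: the commanded force follows a spring-damper law whose stiffness is lowered when a contact change is predicted, so the transition stays smooth. This is a hypothetical sketch, not the paper's controller; the gains and the binary contact predictor are invented for illustration.

```python
def variable_impedance_force(pos_err, vel_err, k_free, k_contact, near_contact):
    """One-axis variable-impedance sketch: commanded force from position
    and velocity errors, with stiffness reduced near a predicted contact
    change (all names and gains are illustrative)."""
    k = k_contact if near_contact else k_free
    d = 2.0 * k ** 0.5  # critically damped for a unit effective mass
    return k * pos_err + d * vel_err

# Same tracking error, but far softer response when a contact is imminent.
f_free = variable_impedance_force(0.01, 0.0, k_free=400.0, k_contact=50.0,
                                  near_contact=False)
f_near = variable_impedance_force(0.01, 0.0, k_free=400.0, k_contact=50.0,
                                  near_contact=True)
```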
2207.00188
Cheng Li
Cheng Li, Yangxin Liu
Rethinking Query-Key Pairwise Interactions in Vision Transformers
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision Transformers have achieved state-of-the-art performance in many visual tasks. Due to the quadratic computational and memory complexities of self-attention, recent works either apply attention only to low-resolution inputs or restrict the receptive field to a small local region. To overcome these limitations, we propose key-only attention, which excludes query-key pairwise interactions and uses a compute-efficient saliency gate to obtain attention weights, modeling local-global interactions in all stages. Key-only attention has linear computational and memory complexities w.r.t. input size. We use an alternating layout to hybridize convolution and attention layers, instead of the grafting suggested by previous works, so that all stages can benefit from both spatial attention and convolutions. We leverage these improvements to develop a new self-attention model family, LinGlos, which reaches state-of-the-art accuracy in the parameter-limited setting of the ImageNet classification benchmark and outperforms baselines significantly in downstream tasks, e.g., COCO object detection and ADE20K semantic segmentation.
[ { "created": "Fri, 1 Jul 2022 03:36:49 GMT", "version": "v1" }, { "created": "Mon, 4 Jul 2022 02:23:46 GMT", "version": "v2" } ]
2022-07-05
[ [ "Li", "Cheng", "" ], [ "Liu", "Yangxin", "" ] ]
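The key-only idea described above can be sketched as follows: each position's attention weight comes from its key alone (a scalar saliency score), so no n×n query-key matrix is ever formed and the cost is linear in sequence length. This is a minimal sketch under assumed details; the linear saliency gate and all numbers are illustrative, not the paper's exact LinGlos design.

```python
import math

def key_only_attention(keys, values, gate_weights):
    """Key-only attention sketch: a scalar saliency score per position
    (dot product of the key with a learned gate vector), softmax over
    positions, then a saliency-weighted sum of values. O(n) in the
    sequence length n, since no pairwise query-key scores are computed."""
    scores = [sum(kj * wj for kj, wj in zip(key, gate_weights)) for key in keys]
    m = max(scores)                              # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    out = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return out, weights

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out, weights = key_only_attention(keys, values, gate_weights=[2.0, 0.0])
```

Positions whose keys align with the gate vector (here the first and third) dominate the output, regardless of any query.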
2310.19704
Vittorio Mazzia
Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi
A Survey on Knowledge Editing of Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks are becoming increasingly pervasive in academia and industry, matching and surpassing human performance in a wide variety of fields and tasks. However, just like humans, even the largest artificial neural networks make mistakes, and once-correct predictions can become invalid as the world progresses in time. Augmenting datasets with samples that account for mistakes or up-to-date information has become a common workaround in practical applications. However, the well-known phenomenon of catastrophic forgetting poses a challenge in achieving precise changes in the implicitly memorized knowledge of neural network parameters, often requiring a full model re-training to achieve desired behaviors. That is expensive, unreliable, and incompatible with the current trend of large self-supervised pre-training, making it necessary to find more efficient and effective methods for adapting neural network models to changing data. To address this need, knowledge editing is emerging as a novel area of research that aims to enable reliable, data-efficient, and fast changes to a pre-trained target model, without affecting model behaviors on previously learned tasks. In this survey, we provide a brief review of this recent artificial intelligence field of research. We first introduce the problem of editing neural networks, formalize it in a common framework, and differentiate it from better-known branches of research such as continual learning. Next, we provide a review of the most relevant knowledge editing approaches and datasets proposed so far, grouping works under four different families: regularization techniques, meta-learning, direct model editing, and architectural strategies. Finally, we outline some intersections with other fields of research and potential directions for future work.
[ { "created": "Mon, 30 Oct 2023 16:29:47 GMT", "version": "v1" }, { "created": "Thu, 14 Dec 2023 09:16:36 GMT", "version": "v2" } ]
2023-12-15
[ [ "Mazzia", "Vittorio", "" ], [ "Pedrani", "Alessandro", "" ], [ "Caciolai", "Andrea", "" ], [ "Rottmann", "Kay", "" ], [ "Bernardi", "Davide", "" ] ]
1809.10372
Sivakanth Gopi
Zeev Dvir, Sivakanth Gopi, Yuzhou Gu, Avi Wigderson
Spanoids - an abstraction of spanning structures, and a barrier for LCCs
Conference version to appear in ITCS 2019. arXiv:1810.02494 is merged into the new version
null
null
null
cs.CC cs.DM cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a simple logical inference structure we call a $\textsf{spanoid}$ (generalizing the notion of a matroid), which captures well-studied problems in several areas. These include combinatorial geometry, algebra (arrangements of hypersurfaces and ideals), statistical physics (bootstrap percolation) and coding theory. We initiate a thorough investigation of spanoids, from computational and structural viewpoints, focusing on parameters relevant to the application areas above and, in particular, to questions regarding Locally Correctable Codes (LCCs). One central parameter we study is the $\textsf{rank}$ of a spanoid, extending the rank of a matroid and related to the dimension of codes. This leads to one main application of our work, establishing the first known barrier to improving the nearly 20-year-old bound of Katz-Trevisan (KT) on the dimension of LCCs. On the one hand, we prove that the KT bound (and its more recent refinements) holds for the much more general setting of spanoid rank. On the other hand, we show that there exist (random) spanoids whose rank matches these bounds. Thus, to significantly improve the known bounds one must step out of the spanoid framework. Another parameter we explore is the $\textsf{functional rank}$ of a spanoid, which captures the possibility of turning a given spanoid into an actual code. The question of the relationship between rank and functional rank is one of the main questions we raise as it may reveal new avenues for constructing new LCCs (perhaps even matching the KT bound). As a first step, we develop an entropy relaxation of functional rank to create a small constant gap and amplify it by tensoring to construct a spanoid whose functional rank is smaller than rank by a polynomial factor. This is evidence that the entropy method we develop can prove polynomially better bounds than KT-type methods on the dimension of LCCs.
[ { "created": "Thu, 27 Sep 2018 06:44:15 GMT", "version": "v1" }, { "created": "Tue, 20 Nov 2018 23:13:54 GMT", "version": "v2" } ]
2018-11-22
[ [ "Dvir", "Zeev", "" ], [ "Gopi", "Sivakanth", "" ], [ "Gu", "Yuzhou", "" ], [ "Wigderson", "Avi", "" ] ]
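A spanoid, as introduced above, can be read as a set of inference rules of the form "if every element of S is spanned, then e is spanned"; the rank is then the size of the smallest seed set whose closure under the rules is the whole ground set. A brute-force sketch follows; the rule encoding as (premise-set, conclusion) pairs is our assumption for illustration, and the brute-force search is only viable for tiny ground sets.

```python
from itertools import combinations

def closure(seed, rules):
    """Span of a seed set under rules (premise_set, conclusion):
    if every element of the premise is spanned, the conclusion is too."""
    spanned = set(seed)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if conclusion not in spanned and premise <= spanned:
                spanned.add(conclusion)
                changed = True
    return spanned

def rank(ground_set, rules):
    """Size of the smallest seed whose closure is the whole ground set
    (exhaustive search over subsets, smallest first)."""
    elements = sorted(ground_set)
    for size in range(len(elements) + 1):
        for seed in combinations(elements, size):
            if closure(seed, rules) == ground_set:
                return size
    return len(elements)

ground = {1, 2, 3, 4}
rules = [({1, 2}, 3), ({2, 3}, 4), ({3, 4}, 1)]
r = rank(ground, rules)
```

Here no single element spans anything beyond itself, while {1, 2} derives 3 and then 4, so the rank is 2.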
2012.15505
Anthony David Blaom
Anthony D. Blaom and Sebastian J. Vollmer
Flexible model composition in machine learning and its implementation in MLJ
13 pages, 3 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
A graph-based protocol called `learning networks', which combines assorted machine learning models into meta-models, is described. Learning networks are shown to overcome several limitations of model composition as implemented in the dominant machine learning platforms. After illustrating the protocol in simple examples, a concise syntax for specifying a learning network, implemented in the MLJ framework, is presented. Using the syntax, it is shown that learning networks are sufficiently flexible to include Wolpert's model stacking, with out-of-sample predictions for the base learners.
[ { "created": "Thu, 31 Dec 2020 08:49:43 GMT", "version": "v1" } ]
2021-01-01
[ [ "Blaom", "Anthony D.", "" ], [ "Vollmer", "Sebastian J.", "" ] ]
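The "out-of-sample predictions for the base learners" mentioned above are the defining ingredient of Wolpert's stacking: each training point is predicted by a base model that was trained without that point's fold, and those predictions become the meta-learner's inputs. A minimal sketch of that mechanism (MLJ itself is a Julia framework; this Python toy with a mean predictor is purely illustrative):

```python
def out_of_fold_predictions(x, y, fit, n_folds=2):
    """Stacking ingredient: every training point receives a prediction
    from a base learner trained WITHOUT that point's fold, so the
    meta-learner never sees in-sample (leaky) base predictions."""
    n = len(x)
    preds = [None] * n
    for fold in range(n_folds):
        test_idx = [i for i in range(n) if i % n_folds == fold]
        train_idx = [i for i in range(n) if i % n_folds != fold]
        model = fit([x[i] for i in train_idx], [y[i] for i in train_idx])
        for i in test_idx:
            preds[i] = model(x[i])
    return preds

def fit_mean(xs, ys):
    """Trivial base learner: always predicts the training mean."""
    mu = sum(ys) / len(ys)
    return lambda x: mu

x = [0, 1, 2, 3]
y = [0.0, 1.0, 2.0, 3.0]
preds = out_of_fold_predictions(x, y, fit_mean)
```

Each point's prediction is the mean of the *other* fold's targets, never its own, which is exactly what distinguishes stacking from naive blending.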
1408.3709
Parama Bagchi
Parama Bagchi, Debotosh Bhattacharjee and Mita Nasipuri
Robust 3D face recognition in presence of pose and partial occlusions or missing parts
the paper is of 15 pages, International Journal in Foundations of Computer Science & Technology (IJFCST), Vol.4, No.4, July 2014
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a robust 3D face recognition system which can handle pose as well as occlusions in the real world. The system first takes as input a 3D range image and simultaneously registers it using the ICP (Iterative Closest Point) algorithm. ICP, as used in this work, registers facial surfaces to a common model by minimizing the distances between a probe model and a gallery model. However, the performance of ICP relies heavily on the initial conditions. Hence, it is necessary to provide an initial registration, which is improved iteratively and finally converges to the best alignment possible. Once the faces are registered, the occlusions are automatically extracted by thresholding the depth map values of the 3D image. After the occluded regions are detected, restoration is done by Principal Component Analysis (PCA). The restored images, after the removal of occlusions, are then fed to the recognition system for classification. Features are extracted from the reconstructed non-occluded face images in the form of face normals. The experimental results, obtained on occluded facial images from the Bosphorus 3D face database, illustrate that our occlusion compensation scheme attains a recognition accuracy of 91.30%.
[ { "created": "Sat, 16 Aug 2014 06:43:30 GMT", "version": "v1" } ]
2014-08-19
[ [ "Bagchi", "Parama", "" ], [ "Bhattacharjee", "Debotosh", "" ], [ "Nasipuri", "Mita", "" ] ]
In this paper, we propose a robust 3D face recognition system which can handle pose as well as occlusions in the real world. The system first takes as input a 3D range image and simultaneously registers it using the ICP (Iterative Closest Point) algorithm. ICP, as used in this work, registers facial surfaces to a common model by minimizing the distances between a probe model and a gallery model. However, the performance of ICP relies heavily on the initial conditions. Hence, it is necessary to provide an initial registration, which is improved iteratively and finally converges to the best alignment possible. Once the faces are registered, the occlusions are automatically extracted by thresholding the depth map values of the 3D image. After the occluded regions are detected, restoration is done by Principal Component Analysis (PCA). The restored images, after the removal of occlusions, are then fed to the recognition system for classification. Features are extracted from the reconstructed non-occluded face images in the form of face normals. The experimental results, obtained on occluded facial images from the Bosphorus 3D face database, illustrate that our occlusion compensation scheme attains a recognition accuracy of 91.30%.
1611.07800
Ehsan Abbasnejad M
Ehsan Abbasnejad, Anthony Dick, Anton van den Hengel
Infinite Variational Autoencoder for Semi-Supervised Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an infinite variational autoencoder (VAE) whose capacity adapts to suit the input data. This is achieved using a mixture model where the mixing coefficients are modeled by a Dirichlet process, allowing us to integrate over the coefficients when performing inference. Critically, this then allows us to automatically vary the number of autoencoders in the mixture based on the data. Experiments show the flexibility of our method, particularly for semi-supervised learning, where only a small number of training samples are available.
[ { "created": "Wed, 23 Nov 2016 13:59:57 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2016 01:28:08 GMT", "version": "v2" } ]
2016-11-28
[ [ "Abbasnejad", "Ehsan", "" ], [ "Dick", "Anthony", "" ], [ "Hengel", "Anton van den", "" ] ]
This paper presents an infinite variational autoencoder (VAE) whose capacity adapts to suit the input data. This is achieved using a mixture model where the mixing coefficients are modeled by a Dirichlet process, allowing us to integrate over the coefficients when performing inference. Critically, this then allows us to automatically vary the number of autoencoders in the mixture based on the data. Experiments show the flexibility of our method, particularly for semi-supervised learning, where only a small number of training samples are available.
2208.00094
Yulong Cao
Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone
Robust Trajectory Prediction against Adversarial Attacks
null
null
null
null
cs.LG cs.AI cs.CR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. We demonstrate that our method improves performance by 46% on adversarial data at the cost of only a 3% performance degradation on clean data, compared to the model trained with clean data. Additionally, compared to existing robust methods, our method improves performance by 21% on adversarial examples and 9% on clean data. Our robust model is evaluated with a planner to study its downstream impacts. We demonstrate that our model can significantly reduce severe accident rates (e.g., collisions and off-road driving).
[ { "created": "Fri, 29 Jul 2022 22:35:05 GMT", "version": "v1" } ]
2022-08-02
[ [ "Cao", "Yulong", "" ], [ "Xu", "Danfei", "" ], [ "Weng", "Xinshuo", "" ], [ "Mao", "Zhuoqing", "" ], [ "Anandkumar", "Anima", "" ], [ "Xiao", "Chaowei", "" ], [ "Pavone", "Marco", "" ] ]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. We demonstrate that our method improves performance by 46% on adversarial data at the cost of only a 3% performance degradation on clean data, compared to the model trained with clean data. Additionally, compared to existing robust methods, our method improves performance by 21% on adversarial examples and 9% on clean data. Our robust model is evaluated with a planner to study its downstream impacts. We demonstrate that our model can significantly reduce severe accident rates (e.g., collisions and off-road driving).
2210.03292
Hongrui Gao
Hongrui Gao, Yawen Li, Meiyu Liang, Zeli Guan
Unsupervised Semantic Representation Learning of Scientific Literature Based on Graph Attention Mechanism and Maximum Mutual Information
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since most scientific literature data are unlabeled, unsupervised graph-based semantic representation learning is crucial. Therefore, an unsupervised semantic representation learning method for scientific literature based on a graph attention mechanism and maximum mutual information (GAMMI) is proposed. By introducing a graph attention mechanism, the weighted summation of nearby node features makes the weights of adjacent node features depend entirely on the node features. Depending on the features of the nearby nodes, different weights can be applied to each node in the graph, so the correlations between vertex features can be better integrated into the model. In addition, an unsupervised graph contrastive learning strategy is proposed to address the lack of labels and the need for scalability on large-scale graphs. By comparing the mutual information between the positive and negative local node representations in the latent space and the global graph representation, the graph neural network can capture both local and global information. Experimental results demonstrate competitive performance on various node classification benchmarks, sometimes even surpassing the performance of supervised learning.
[ { "created": "Fri, 7 Oct 2022 02:48:14 GMT", "version": "v1" }, { "created": "Mon, 30 Jan 2023 09:25:18 GMT", "version": "v2" } ]
2023-01-31
[ [ "Gao", "Hongrui", "" ], [ "Li", "Yawen", "" ], [ "Liang", "Meiyu", "" ], [ "Guan", "Zeli", "" ] ]
Since most scientific literature data are unlabeled, unsupervised graph-based semantic representation learning is crucial. Therefore, an unsupervised semantic representation learning method for scientific literature based on a graph attention mechanism and maximum mutual information (GAMMI) is proposed. By introducing a graph attention mechanism, the weighted summation of nearby node features makes the weights of adjacent node features depend entirely on the node features. Depending on the features of the nearby nodes, different weights can be applied to each node in the graph, so the correlations between vertex features can be better integrated into the model. In addition, an unsupervised graph contrastive learning strategy is proposed to address the lack of labels and the need for scalability on large-scale graphs. By comparing the mutual information between the positive and negative local node representations in the latent space and the global graph representation, the graph neural network can capture both local and global information. Experimental results demonstrate competitive performance on various node classification benchmarks, sometimes even surpassing the performance of supervised learning.
2207.12259
AmirPouya Hemmasian
AmirPouya Hemmasian, Francis Ogoke, Parand Akbari, Jonathan Malen, Jack Beuth, Amir Barati Farimani
Surrogate Modeling of Melt Pool Thermal Field using Deep Learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Powder-based additive manufacturing has transformed the manufacturing industry over the last decade. In Laser Powder Bed Fusion, a part is built iteratively: two-dimensional cross-sections are formed on top of each other by melting and fusing the proper areas of the powder bed. In this process, the behavior of the melt pool and its thermal field plays a very important role in predicting the quality of the manufactured part and its possible defects. However, simulating such a complex phenomenon is usually very time-consuming and requires substantial computational resources. Flow-3D is one of the software packages capable of executing such simulations using iterative numerical solvers. In this work, we create three datasets of single-track processes using Flow-3D and use them to train a convolutional neural network that predicts the three-dimensional thermal field of the melt pool solely from three input parameters: laser power, laser velocity, and time step. The CNN achieves a relative Root Mean Squared Error of 2% to 3% for the temperature field and an average Intersection over Union score of 80% to 90% in predicting the melt pool area. Moreover, since time is one of the model's inputs, the thermal field can be obtained instantly for any arbitrary time step without the need to iterate through all the preceding steps.
[ { "created": "Mon, 25 Jul 2022 15:27:16 GMT", "version": "v1" }, { "created": "Thu, 4 Aug 2022 21:16:44 GMT", "version": "v2" } ]
2022-08-08
[ [ "Hemmasian", "AmirPouya", "" ], [ "Ogoke", "Francis", "" ], [ "Akbari", "Parand", "" ], [ "Malen", "Jonathan", "" ], [ "Beuth", "Jack", "" ], [ "Farimani", "Amir Barati", "" ] ]
Powder-based additive manufacturing has transformed the manufacturing industry over the last decade. In Laser Powder Bed Fusion, a part is built iteratively: two-dimensional cross-sections are formed on top of each other by melting and fusing the proper areas of the powder bed. In this process, the behavior of the melt pool and its thermal field plays a very important role in predicting the quality of the manufactured part and its possible defects. However, simulating such a complex phenomenon is usually very time-consuming and requires substantial computational resources. Flow-3D is one of the software packages capable of executing such simulations using iterative numerical solvers. In this work, we create three datasets of single-track processes using Flow-3D and use them to train a convolutional neural network that predicts the three-dimensional thermal field of the melt pool solely from three input parameters: laser power, laser velocity, and time step. The CNN achieves a relative Root Mean Squared Error of 2% to 3% for the temperature field and an average Intersection over Union score of 80% to 90% in predicting the melt pool area. Moreover, since time is one of the model's inputs, the thermal field can be obtained instantly for any arbitrary time step without the need to iterate through all the preceding steps.
2206.08722
J\"ames M\'en\'etrey
J\"ames M\'en\'etrey, Marcelo Pasin, Pascal Felber, Valerio Schiavoni
WaTZ: A Trusted WebAssembly Runtime Environment with Remote Attestation for TrustZone
This publication incorporates results from the VEDLIoT project, which received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957197
ICDCS'22: Proceedings of the 42nd IEEE International Conference on Distributed Computing Systems, July 2022
10.1109/ICDCS54860.2022.00116
null
cs.CR cs.DC cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
WebAssembly (Wasm) is a novel low-level bytecode format that has swiftly gained popularity for its efficiency, versatility and security, with near-native performance. Besides, trusted execution environments (TEEs) shield critical software assets against compromised infrastructures. However, TEEs do not guarantee that the code is trustworthy or that it was not tampered with. Instead, one relies on remote attestation to assess the code before execution. This paper describes WaTZ, which is (i) an efficient and secure runtime for trusted execution of Wasm code for Arm's TrustZone TEE, and (ii) a lightweight remote attestation system optimised for Wasm applications running in TrustZone, as it lacks built-in mechanisms for attestation. The remote attestation protocol is formally verified using a state-of-the-art analyser and model checker. Our extensive evaluation on Arm-based hardware uses synthetic and real-world benchmarks, illustrating typical tasks IoT devices perform. WaTZ's execution speed is on par with Wasm runtimes in the normal world and reaches roughly half the speed of native execution, which is compensated by the additional security guarantees and the interoperability offered by Wasm. WaTZ is open-source and available on GitHub along with instructions to reproduce our experiments.
[ { "created": "Fri, 17 Jun 2022 12:19:48 GMT", "version": "v1" }, { "created": "Wed, 17 May 2023 15:04:34 GMT", "version": "v2" } ]
2023-05-18
[ [ "Ménétrey", "Jämes", "" ], [ "Pasin", "Marcelo", "" ], [ "Felber", "Pascal", "" ], [ "Schiavoni", "Valerio", "" ] ]
WebAssembly (Wasm) is a novel low-level bytecode format that has swiftly gained popularity for its efficiency, versatility and security, with near-native performance. Besides, trusted execution environments (TEEs) shield critical software assets against compromised infrastructures. However, TEEs do not guarantee that the code is trustworthy or that it was not tampered with. Instead, one relies on remote attestation to assess the code before execution. This paper describes WaTZ, which is (i) an efficient and secure runtime for trusted execution of Wasm code for Arm's TrustZone TEE, and (ii) a lightweight remote attestation system optimised for Wasm applications running in TrustZone, as it lacks built-in mechanisms for attestation. The remote attestation protocol is formally verified using a state-of-the-art analyser and model checker. Our extensive evaluation on Arm-based hardware uses synthetic and real-world benchmarks, illustrating typical tasks IoT devices perform. WaTZ's execution speed is on par with Wasm runtimes in the normal world and reaches roughly half the speed of native execution, which is compensated by the additional security guarantees and the interoperability offered by Wasm. WaTZ is open-source and available on GitHub along with instructions to reproduce our experiments.
2012.14740
Lei Cui
Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding
ACL 2021 main conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. We propose the LayoutLMv2 architecture with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. Specifically, with a two-stream multi-modal Transformer encoder, LayoutLMv2 uses not only the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks, which help it better capture the cross-modality interaction in the pre-training stage. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture so that the model can fully understand the relative positional relationship among different text blocks. Experimental results show that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 $\to$ 0.8420), CORD (0.9493 $\to$ 0.9601), SROIE (0.9524 $\to$ 0.9781), Kleister-NDA (0.8340 $\to$ 0.8520), RVL-CDIP (0.9443 $\to$ 0.9564), and DocVQA (0.7295 $\to$ 0.8672). We made our model and code publicly available at \url{https://aka.ms/layoutlmv2}.
[ { "created": "Tue, 29 Dec 2020 13:01:52 GMT", "version": "v1" }, { "created": "Thu, 6 May 2021 07:02:57 GMT", "version": "v2" }, { "created": "Tue, 11 May 2021 06:42:33 GMT", "version": "v3" }, { "created": "Mon, 10 Jan 2022 04:08:10 GMT", "version": "v4" } ]
2022-01-11
[ [ "Xu", "Yang", "" ], [ "Xu", "Yiheng", "" ], [ "Lv", "Tengchao", "" ], [ "Cui", "Lei", "" ], [ "Wei", "Furu", "" ], [ "Wang", "Guoxin", "" ], [ "Lu", "Yijuan", "" ], [ "Florencio", "Dinei", "" ], [ "Zhang", "Cha", "" ], [ "Che", "Wanxiang", "" ], [ "Zhang", "Min", "" ], [ "Zhou", "Lidong", "" ] ]
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. We propose the LayoutLMv2 architecture with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. Specifically, with a two-stream multi-modal Transformer encoder, LayoutLMv2 uses not only the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks, which help it better capture the cross-modality interaction in the pre-training stage. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture so that the model can fully understand the relative positional relationship among different text blocks. Experimental results show that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 $\to$ 0.8420), CORD (0.9493 $\to$ 0.9601), SROIE (0.9524 $\to$ 0.9781), Kleister-NDA (0.8340 $\to$ 0.8520), RVL-CDIP (0.9443 $\to$ 0.9564), and DocVQA (0.7295 $\to$ 0.8672). We made our model and code publicly available at \url{https://aka.ms/layoutlmv2}.
2407.11421
Junhao Chen
Junhao Chen, Shengding Hu, Zhiyuan Liu, Maosong Sun
States Hidden in Hidden States: LLMs Emerge Discrete State Representations Implicitly
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) exhibit various emergent abilities. Among these abilities, some might reveal the internal working mechanisms of models. In this paper, we uncover a novel emergent capability in models: the intrinsic ability to perform extended sequences of calculations without relying on chain-of-thought step-by-step solutions. Remarkably, the most advanced models can directly output the results of two-digit number additions with lengths extending up to 15 addends. We hypothesize that the model forms Implicit Discrete State Representations (IDSRs) within its hidden states and performs symbolic calculations internally. To test this hypothesis, we design a sequence of experiments that look into the hidden states. Specifically, we first confirm that IDSRs exist. Then, we provide interesting observations about the formation of IDSRs from layer, digit, and sequence perspectives. Finally, we confirm that models indeed use IDSRs to produce the final answers. However, we also discover that these state representations are far from lossless in current open-source models, leading to inaccuracies in their final performance. Our work presents a novel exploration of LLMs' symbolic calculation abilities and the underlying mechanisms.
[ { "created": "Tue, 16 Jul 2024 06:27:22 GMT", "version": "v1" } ]
2024-07-17
[ [ "Chen", "Junhao", "" ], [ "Hu", "Shengding", "" ], [ "Liu", "Zhiyuan", "" ], [ "Sun", "Maosong", "" ] ]
Large Language Models (LLMs) exhibit various emergent abilities. Among these abilities, some might reveal the internal working mechanisms of models. In this paper, we uncover a novel emergent capability in models: the intrinsic ability to perform extended sequences of calculations without relying on chain-of-thought step-by-step solutions. Remarkably, the most advanced models can directly output the results of two-digit number additions with lengths extending up to 15 addends. We hypothesize that the model forms Implicit Discrete State Representations (IDSRs) within its hidden states and performs symbolic calculations internally. To test this hypothesis, we design a sequence of experiments that look into the hidden states. Specifically, we first confirm that IDSRs exist. Then, we provide interesting observations about the formation of IDSRs from layer, digit, and sequence perspectives. Finally, we confirm that models indeed use IDSRs to produce the final answers. However, we also discover that these state representations are far from lossless in current open-source models, leading to inaccuracies in their final performance. Our work presents a novel exploration of LLMs' symbolic calculation abilities and the underlying mechanisms.
2012.05756
Lingda Wang
Lingda Wang, Bingcong Li, Huozhi Zhou, Georgios B. Giannakis, Lav R. Varshney, Zhizhen Zhao
Adversarial Linear Contextual Bandits with Graph-Structured Side Observations
fix some typos
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies adversarial graphical contextual bandits, a variant of adversarial multi-armed bandits that leverages two categories of the most common side information: \emph{contexts} and \emph{side observations}. In this setting, a learning agent repeatedly chooses from a set of $K$ actions after being presented with a $d$-dimensional context vector. The agent not only incurs and observes the loss of the chosen action, but also observes the losses of its neighboring actions in the observation structures, which are encoded as a series of feedback graphs. This setting models a variety of applications in social networks, where both contexts and graph-structured side observations are available. Two efficient algorithms are developed based on \texttt{EXP3}. Under mild conditions, our analysis shows that for undirected feedback graphs the first algorithm, \texttt{EXP3-LGC-U}, achieves a regret of order $\mathcal{O}(\sqrt{(K+\alpha(G)d)T\log{K}})$ over the time horizon $T$, where $\alpha(G)$ is the average \emph{independence number} of the feedback graphs. A slightly weaker result is presented for the directed graph setting as well. The second algorithm, \texttt{EXP3-LGC-IX}, is developed for a special class of problems, for which the regret is reduced to $\mathcal{O}(\sqrt{\alpha(G)dT\log{K}\log(KT)})$ for both directed and undirected feedback graphs. Numerical tests corroborate the efficiency of the proposed algorithms.
[ { "created": "Thu, 10 Dec 2020 15:40:07 GMT", "version": "v1" }, { "created": "Mon, 28 Dec 2020 01:52:23 GMT", "version": "v2" }, { "created": "Wed, 17 Feb 2021 01:58:52 GMT", "version": "v3" } ]
2021-02-18
[ [ "Wang", "Lingda", "" ], [ "Li", "Bingcong", "" ], [ "Zhou", "Huozhi", "" ], [ "Giannakis", "Georgios B.", "" ], [ "Varshney", "Lav R.", "" ], [ "Zhao", "Zhizhen", "" ] ]
This paper studies adversarial graphical contextual bandits, a variant of adversarial multi-armed bandits that leverages two categories of the most common side information: \emph{contexts} and \emph{side observations}. In this setting, a learning agent repeatedly chooses from a set of $K$ actions after being presented with a $d$-dimensional context vector. The agent not only incurs and observes the loss of the chosen action, but also observes the losses of its neighboring actions in the observation structures, which are encoded as a series of feedback graphs. This setting models a variety of applications in social networks, where both contexts and graph-structured side observations are available. Two efficient algorithms are developed based on \texttt{EXP3}. Under mild conditions, our analysis shows that for undirected feedback graphs the first algorithm, \texttt{EXP3-LGC-U}, achieves a regret of order $\mathcal{O}(\sqrt{(K+\alpha(G)d)T\log{K}})$ over the time horizon $T$, where $\alpha(G)$ is the average \emph{independence number} of the feedback graphs. A slightly weaker result is presented for the directed graph setting as well. The second algorithm, \texttt{EXP3-LGC-IX}, is developed for a special class of problems, for which the regret is reduced to $\mathcal{O}(\sqrt{\alpha(G)dT\log{K}\log(KT)})$ for both directed and undirected feedback graphs. Numerical tests corroborate the efficiency of the proposed algorithms.
2007.08688
Qisheng Zhang
Qisheng Zhang, Abdullah Zubair Mohammed, Zelin Wan, Jin-Hee Cho, Terrence J. Moore
Diversity-By-Design for Dependable and Secure Cyber-Physical Systems: A Survey
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diversity-based security approaches have been studied for several decades, since the 1970s. The concept of diversity-by-design emerged in the 1980s and, since then, diversity-based system design research has been explored to build more secure and dependable systems. In this work, we are particularly interested in providing an in-depth, comprehensive survey of existing diversity-based approaches, insights, and future work directions for those who want to conduct research on developing secure and dependable cyber-physical systems (CPSs) using diversity as a system design feature. To be specific, this survey paper provides: (i) The common concept of diversity based on a multidisciplinary study of diversity from nine different fields along with the historical evolution of diversity-by-design for security; (ii) The design principles of diversity-based approaches; (iii) The key benefits and caveats of using diversity-by-design; (iv) The key concerns of CPS environments in introducing diversity-by-design; (v) A variety of existing diversity-based approaches based on five different classifications; (vi) The types of attacks mitigated by existing diversity-based approaches; (vii) The overall trends of evaluation methodologies used in diversity-based approaches, in terms of metrics, datasets, and testbeds; and (viii) The insights, lessons, and gaps identified from this extensive survey.
[ { "created": "Thu, 16 Jul 2020 23:25:36 GMT", "version": "v1" } ]
2020-07-20
[ [ "Zhang", "Qisheng", "" ], [ "Mohammed", "Abdullah Zubair", "" ], [ "Wan", "Zelin", "" ], [ "Cho", "Jin-Hee", "" ], [ "Moore", "Terrence J.", "" ] ]
Diversity-based security approaches have been studied for several decades, since the 1970s. The concept of diversity-by-design emerged in the 1980s and, since then, diversity-based system design research has been explored to build more secure and dependable systems. In this work, we are particularly interested in providing an in-depth, comprehensive survey of existing diversity-based approaches, insights, and future work directions for those who want to conduct research on developing secure and dependable cyber-physical systems (CPSs) using diversity as a system design feature. To be specific, this survey paper provides: (i) The common concept of diversity based on a multidisciplinary study of diversity from nine different fields along with the historical evolution of diversity-by-design for security; (ii) The design principles of diversity-based approaches; (iii) The key benefits and caveats of using diversity-by-design; (iv) The key concerns of CPS environments in introducing diversity-by-design; (v) A variety of existing diversity-based approaches based on five different classifications; (vi) The types of attacks mitigated by existing diversity-based approaches; (vii) The overall trends of evaluation methodologies used in diversity-based approaches, in terms of metrics, datasets, and testbeds; and (viii) The insights, lessons, and gaps identified from this extensive survey.
2111.03212
Jiangwei Liu
Jiangwei Liu, Liangyu Min and Xiaohong Huang
An overview of event extraction and its applications
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
With the rapid development of information technology, online platforms have produced enormous text resources. As a particular form of Information Extraction (IE), Event Extraction (EE) has gained increasing popularity due to its ability to automatically extract events from human language. However, there are limited literature surveys on event extraction. Existing review works either spend much effort describing the details of various approaches or focus on a particular field. This study provides a comprehensive overview of the state-of-the-art event extraction methods and their applications from text, including closed-domain and open-domain event extraction. A trait of this survey is that it provides an overview at moderate complexity, avoiding too many details of particular approaches. This study focuses on discussing the common characteristics, application fields, advantages, and disadvantages of representative works, ignoring the specificities of individual approaches. Finally, we summarize the common issues, current solutions, and future research directions. We hope this work can help researchers and practitioners obtain a quick overview of recent event extraction research.
[ { "created": "Fri, 5 Nov 2021 01:37:47 GMT", "version": "v1" } ]
2021-11-08
[ [ "Liu", "Jiangwei", "" ], [ "Min", "Liangyu", "" ], [ "Huang", "Xiaohong", "" ] ]
With the rapid development of information technology, online platforms have produced enormous text resources. As a particular form of Information Extraction (IE), Event Extraction (EE) has gained increasing popularity due to its ability to automatically extract events from human language. However, there are limited literature surveys on event extraction. Existing review works either spend much effort describing the details of various approaches or focus on a particular field. This study provides a comprehensive overview of the state-of-the-art event extraction methods and their applications from text, including closed-domain and open-domain event extraction. A trait of this survey is that it provides an overview in moderate complexity, avoiding involving too many details of particular approaches. This study focuses on discussing the common characteristics, application fields, advantages, and disadvantages of representative works, ignoring the specificities of individual approaches. Finally, we summarize the common issues, current solutions, and future research directions. We hope this work could help researchers and practitioners obtain a quick overview of recent event extraction.
2002.10035
Xianmang He
Xianmang He, Yindong Chen, Zusheng Zhang
Improving the Linkage Construction with Echelon-Ferrers for Constant-Dimension Codes
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Echelon-Ferrers is an important method to improve lower bounds for constant-dimension codes, which can be applied to various parameters. Fagang Li [12] combined the linkage construction and echelon-Ferrers to obtain some new lower bounds of constant-dimension codes. In this letter, we generalize this linkage construction to obtain new lower bounds.
[ { "created": "Mon, 24 Feb 2020 01:57:57 GMT", "version": "v1" }, { "created": "Fri, 6 Mar 2020 16:43:18 GMT", "version": "v2" }, { "created": "Thu, 30 Jul 2020 17:58:14 GMT", "version": "v3" } ]
2020-07-31
[ [ "He", "Xianmang", "" ], [ "Chen", "Yindong", "" ], [ "Zhang", "Zusheng", "" ] ]
Echelon-Ferrers is an important method to improve lower bounds for constant-dimension codes, which can be applied to various parameters. Fagang Li [12] combined the linkage construction and echelon-Ferrers to obtain some new lower bounds of constant-dimension codes. In this letter, we generalize this linkage construction to obtain new lower bounds.
2210.00765
Xiaoqi Zhao
Hongsheng Wang, Xiaoqi Zhao, Youwei Pang, Jinqing Qi
Few-Shot Segmentation via Rich Prototype Generation and Recurrent Prediction Enhancement
Accepted in PRCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prototype learning and decoder construction are the keys for few-shot segmentation. However, existing methods use only a single prototype generation mode, which can not cope with the intractable problem of objects with various scales. Moreover, the one-way forward propagation adopted by previous methods may cause information dilution from registered features during the decoding process. In this research, we propose a rich prototype generation module (RPGM) and a recurrent prediction enhancement module (RPEM) to reinforce the prototype learning paradigm and build a unified memory-augmented decoder for few-shot segmentation, respectively. Specifically, the RPGM combines superpixel and K-means clustering to generate rich prototype features with complementary scale relationships and adapt the scale gap between support and query images. The RPEM utilizes the recurrent mechanism to design a round-way propagation decoder. In this way, registered features can provide object-aware information continuously. Experiments show that our method consistently outperforms other competitors on two popular benchmarks PASCAL-${{5}^{i}}$ and COCO-${{20}^{i}}$.
[ { "created": "Mon, 3 Oct 2022 08:46:52 GMT", "version": "v1" } ]
2022-10-04
[ [ "Wang", "Hongsheng", "" ], [ "Zhao", "Xiaoqi", "" ], [ "Pang", "Youwei", "" ], [ "Qi", "Jinqing", "" ] ]
Prototype learning and decoder construction are the keys for few-shot segmentation. However, existing methods use only a single prototype generation mode, which can not cope with the intractable problem of objects with various scales. Moreover, the one-way forward propagation adopted by previous methods may cause information dilution from registered features during the decoding process. In this research, we propose a rich prototype generation module (RPGM) and a recurrent prediction enhancement module (RPEM) to reinforce the prototype learning paradigm and build a unified memory-augmented decoder for few-shot segmentation, respectively. Specifically, the RPGM combines superpixel and K-means clustering to generate rich prototype features with complementary scale relationships and adapt the scale gap between support and query images. The RPEM utilizes the recurrent mechanism to design a round-way propagation decoder. In this way, registered features can provide object-aware information continuously. Experiments show that our method consistently outperforms other competitors on two popular benchmarks PASCAL-${{5}^{i}}$ and COCO-${{20}^{i}}$.
2403.13248
Zhengqing Yuan
Zhengqing Yuan, Ruoxi Chen, Zhaoxu Li, Haolong Jia, Lifang He, Chi Wang, Lichao Sun
Mora: Enabling Generalist Video Generation via A Multi-Agent Framework
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sora is the first large-scale generalist video generation model that garnered significant attention across society. Since its launch by OpenAI in February 2024, no other video generation models have paralleled {Sora}'s performance or its capacity to support a broad spectrum of video generation tasks. Additionally, there are only a few fully published video generation models, with the majority being closed-source. To address this gap, this paper proposes a new multi-agent framework Mora, which incorporates several advanced visual AI agents to replicate generalist video generation demonstrated by Sora. In particular, Mora can utilize multiple visual agents and successfully mimic Sora's video generation capabilities in various tasks, such as (1) text-to-video generation, (2) text-conditional image-to-video generation, (3) extending generated videos, (4) video-to-video editing, (5) connecting videos, and (6) simulating digital worlds. Our extensive experimental results show that Mora achieves performance that is proximate to that of Sora in various tasks. However, there exists an obvious performance gap between our work and Sora when assessed holistically. In summary, we hope this project can guide the future trajectory of video generation through collaborative AI agents.
[ { "created": "Wed, 20 Mar 2024 02:19:21 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2024 12:43:56 GMT", "version": "v2" } ]
2024-03-25
[ [ "Yuan", "Zhengqing", "" ], [ "Chen", "Ruoxi", "" ], [ "Li", "Zhaoxu", "" ], [ "Jia", "Haolong", "" ], [ "He", "Lifang", "" ], [ "Wang", "Chi", "" ], [ "Sun", "Lichao", "" ] ]
Sora is the first large-scale generalist video generation model that garnered significant attention across society. Since its launch by OpenAI in February 2024, no other video generation models have paralleled {Sora}'s performance or its capacity to support a broad spectrum of video generation tasks. Additionally, there are only a few fully published video generation models, with the majority being closed-source. To address this gap, this paper proposes a new multi-agent framework Mora, which incorporates several advanced visual AI agents to replicate generalist video generation demonstrated by Sora. In particular, Mora can utilize multiple visual agents and successfully mimic Sora's video generation capabilities in various tasks, such as (1) text-to-video generation, (2) text-conditional image-to-video generation, (3) extending generated videos, (4) video-to-video editing, (5) connecting videos, and (6) simulating digital worlds. Our extensive experimental results show that Mora achieves performance that is proximate to that of Sora in various tasks. However, there exists an obvious performance gap between our work and Sora when assessed holistically. In summary, we hope this project can guide the future trajectory of video generation through collaborative AI agents.
2109.04027
Zilin Si
Zilin Si, Wenzhen Yuan
Taxim: An Example-based Simulation Model for GelSight Tactile Sensors
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Simulation is widely used in robotics for system verification and large-scale data collection. However, simulating sensors, including tactile sensors, has been a long-standing challenge. In this paper, we propose Taxim, a realistic and high-speed simulation model for a vision-based tactile sensor, GelSight. A GelSight sensor uses a piece of soft elastomer as the medium of contact and embeds optical structures to capture the deformation of the elastomer, which infers the geometry and forces applied at the contact surface. We propose an example-based method for simulating GelSight: we simulate the optical response to the deformation with a polynomial look-up table. This table maps the deformed geometries to pixel intensity sampled by the embedded camera. In order to simulate the surface markers' motion that is caused by the surface stretch of the elastomer, we apply the linear elastic deformation theory and the superposition principle. The simulation model is calibrated with less than 100 data points from a real sensor. The example-based approach enables the model to easily migrate to other GelSight sensors or its variations. To the best of our knowledge, our simulation framework is the first to incorporate marker motion field simulation that derives from elastomer deformation together with the optical simulation, creating a comprehensive and computationally efficient tactile simulation framework. Experiments reveal that our optical simulation has the lowest pixel-wise intensity errors compared to prior work and can run online with CPU computing. Our code and supplementary materials are open-sourced at https://github.com/CMURoboTouch/Taxim.
[ { "created": "Thu, 9 Sep 2021 04:22:27 GMT", "version": "v1" }, { "created": "Tue, 14 Dec 2021 17:02:43 GMT", "version": "v2" } ]
2021-12-15
[ [ "Si", "Zilin", "" ], [ "Yuan", "Wenzhen", "" ] ]
Simulation is widely used in robotics for system verification and large-scale data collection. However, simulating sensors, including tactile sensors, has been a long-standing challenge. In this paper, we propose Taxim, a realistic and high-speed simulation model for a vision-based tactile sensor, GelSight. A GelSight sensor uses a piece of soft elastomer as the medium of contact and embeds optical structures to capture the deformation of the elastomer, which infers the geometry and forces applied at the contact surface. We propose an example-based method for simulating GelSight: we simulate the optical response to the deformation with a polynomial look-up table. This table maps the deformed geometries to pixel intensity sampled by the embedded camera. In order to simulate the surface markers' motion that is caused by the surface stretch of the elastomer, we apply the linear elastic deformation theory and the superposition principle. The simulation model is calibrated with less than 100 data points from a real sensor. The example-based approach enables the model to easily migrate to other GelSight sensors or its variations. To the best of our knowledge, our simulation framework is the first to incorporate marker motion field simulation that derives from elastomer deformation together with the optical simulation, creating a comprehensive and computationally efficient tactile simulation framework. Experiments reveal that our optical simulation has the lowest pixel-wise intensity errors compared to prior work and can run online with CPU computing. Our code and supplementary materials are open-sourced at https://github.com/CMURoboTouch/Taxim.
1909.03772
Nicolai Anton Lynnerup
Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam
A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots
Appears in Proceedings of the Third Conference on Robot Learning (CoRL 2019). Companion source code at https://github.com/dti-research/SenseActExperiments/
null
null
null
cs.LG cs.AI cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that algorithms herein can be compared easily and fairly with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues leading to irreproducible research and how to manage those. We further show how to utilise a rigorous and standardised evaluation approach for easing the process of documentation, evaluation and fair comparison of different algorithms, where we emphasise the importance of choosing the right measurement metrics and conducting proper statistics on the results, for unbiased reporting of the results.
[ { "created": "Mon, 9 Sep 2019 11:33:09 GMT", "version": "v1" }, { "created": "Wed, 11 Sep 2019 07:42:00 GMT", "version": "v2" } ]
2019-09-12
[ [ "Lynnerup", "Nicolai A.", "" ], [ "Nolling", "Laura", "" ], [ "Hasle", "Rasmus", "" ], [ "Hallam", "John", "" ] ]
As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that algorithms herein can be compared easily and fairly with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues leading to irreproducible research and how to manage those. We further show how to utilise a rigorous and standardised evaluation approach for easing the process of documentation, evaluation and fair comparison of different algorithms, where we emphasise the importance of choosing the right measurement metrics and conducting proper statistics on the results, for unbiased reporting of the results.
1302.0540
Harris Georgiou
Harris V. Georgiou, Michael E. Mavroforakis
A game-theoretic framework for classifier ensembles using weighted majority voting with local accuracy estimates
21 pages, 9 tables, 1 figure, 68 references
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper, a novel approach for the optimal combination of binary classifiers is proposed. The classifier combination problem is approached from a Game Theory perspective. The proposed framework of adapted weighted majority rules (WMR) is tested against common rank-based, Bayesian and simple majority models, as well as two soft-output averaging rules. Experiments with ensembles of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that this new adaptive WMR model, employing local accuracy estimators and the analytically computed optimal weights, outperforms all the other simple combination rules.
[ { "created": "Sun, 3 Feb 2013 22:12:52 GMT", "version": "v1" } ]
2013-02-05
[ [ "Georgiou", "Harris V.", "" ], [ "Mavroforakis", "Michael E.", "" ] ]
In this paper, a novel approach for the optimal combination of binary classifiers is proposed. The classifier combination problem is approached from a Game Theory perspective. The proposed framework of adapted weighted majority rules (WMR) is tested against common rank-based, Bayesian and simple majority models, as well as two soft-output averaging rules. Experiments with ensembles of Support Vector Machines (SVM), Ordinary Binary Tree Classifiers (OBTC) and weighted k-nearest-neighbor (w/k-NN) models on benchmark datasets indicate that this new adaptive WMR model, employing local accuracy estimators and the analytically computed optimal weights, outperforms all the other simple combination rules.
1709.10052
Marc Zeitoun
Thomas Place and Marc Zeitoun
Adding successor: A transfer theorem for separation and covering
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a class C of word languages, the C-separation problem asks for an algorithm that, given as input two regular languages, decides whether there exists a third language in C containing the first language, while being disjoint from the second. Separation is usually investigated as a means to obtain a deep understanding of the class C. In the paper, we are mainly interested in classes defined by logical formalisms. Such classes are often built on top of each other: given some logic, one builds a stronger one by adding new predicates to its signature. A natural construction is to enrich a logic with the successor relation. In this paper, we present a transfer result applying to this construction: we show that for suitable logically defined classes, separation for the logic enriched with the successor relation reduces to separation for the original logic. Our theorem also applies to a problem that is stronger than separation: covering. Moreover, we actually present two reductions: one for languages of finite words and the other for languages of infinite words.
[ { "created": "Thu, 28 Sep 2017 16:40:03 GMT", "version": "v1" } ]
2017-09-29
[ [ "Place", "Thomas", "" ], [ "Zeitoun", "Marc", "" ] ]
Given a class C of word languages, the C-separation problem asks for an algorithm that, given as input two regular languages, decides whether there exists a third language in C containing the first language, while being disjoint from the second. Separation is usually investigated as a means to obtain a deep understanding of the class C. In the paper, we are mainly interested in classes defined by logical formalisms. Such classes are often built on top of each other: given some logic, one builds a stronger one by adding new predicates to its signature. A natural construction is to enrich a logic with the successor relation. In this paper, we present a transfer result applying to this construction: we show that for suitable logically defined classes, separation for the logic enriched with the successor relation reduces to separation for the original logic. Our theorem also applies to a problem that is stronger than separation: covering. Moreover, we actually present two reductions: one for languages of finite words and the other for languages of infinite words.
1706.02275
Ryan Lowe T.
Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
null
null
null
null
cs.LG cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
[ { "created": "Wed, 7 Jun 2017 17:35:00 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2017 22:18:54 GMT", "version": "v2" }, { "created": "Tue, 16 Jan 2018 23:37:25 GMT", "version": "v3" }, { "created": "Sat, 14 Mar 2020 20:33:00 GMT", "version": "v4" } ]
2020-03-17
[ [ "Lowe", "Ryan", "" ], [ "Wu", "Yi", "" ], [ "Tamar", "Aviv", "" ], [ "Harb", "Jean", "" ], [ "Abbeel", "Pieter", "" ], [ "Mordatch", "Igor", "" ] ]
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
1308.1042
Kafui Monu Dr.
Kafui Monu and Paul Ralph
Beyond Gamification: Implications of Purposeful Games for the Information Systems Discipline
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/3.0/
Gamification is an emerging design principle for information systems where game design elements are applied to non-game contexts. IS researchers have suggested that the IS discipline must study this area, but there are other applications, such as serious games and simulations, that also use games in non-game contexts. Specifically, the management field has been using games and simulations for years and these applications are now being supported by information systems. We propose in this paper that we must think beyond gamification, towards other uses of games in non-gaming contexts, which we call purposeful gaming. In this paper we identify how the IS discipline can adapt to purposeful gaming. Specifically, we show how IT artifacts, IS design, and IS theories can be used in the purposeful gaming area. We also provide three conceptual dimensions of purposeful gaming that can aid IS practitioners and researchers to classify and understand purposeful games.
[ { "created": "Fri, 2 Aug 2013 16:08:31 GMT", "version": "v1" } ]
2013-08-06
[ [ "Monu", "Kafui", "" ], [ "Ralph", "Paul", "" ] ]
Gamification is an emerging design principle for information systems where game design elements are applied to non-game contexts. IS researchers have suggested that the IS discipline must study this area, but there are other applications, such as serious games and simulations, that also use games in non-game contexts. Specifically, the management field has been using games and simulations for years and these applications are now being supported by information systems. We propose in this paper that we must think beyond gamification, towards other uses of games in non-gaming contexts, which we call purposeful gaming. In this paper we identify how the IS discipline can adapt to purposeful gaming. Specifically, we show how IT artifacts, IS design, and IS theories can be used in the purposeful gaming area. We also provide three conceptual dimensions of purposeful gaming that can aid IS practitioners and researchers to classify and understand purposeful games.
2311.09704
Leo Freitas
Leo Freitas
International System of Quantities library in VDM
14 pages, 1 figure, 21st Overture Workshop, Lubeck 2023
null
null
OVT21/2023/01
cs.SE
http://creativecommons.org/licenses/by/4.0/
The International System of Quantities (ISQ) standard was published in 1960 to tame the wide diversity of measurement systems being developed across the world, such as the centimetre-gram-second versus the meter-kilogram-second for example. Such a standard is highly motivated by the potential of ``trivial'' (rather error-prone) mistakes in converting between incompatible units. There have been such accidents in space missions, medical devices, etc., rendering modelling or simulation experiments unusable or unsafe. We address this problem by providing a \textbf{SAFE}-ISQ VDM-library that is: Simple, Accurate, Fast, and Effective. It extends an ecosystem of other VDM mathematical toolkit extensions, which include a translation and proof environment for VDM in Isabelle at https://github.com/leouk/VDM_Toolkit.
[ { "created": "Thu, 16 Nov 2023 09:29:02 GMT", "version": "v1" } ]
2023-11-17
[ [ "Freitas", "Leo", "" ] ]
The International System of Quantities (ISQ) standard was published in 1960 to tame the wide diversity of measurement systems being developed across the world, such as the centimetre-gram-second versus the meter-kilogram-second for example. Such a standard is highly motivated by the potential of ``trivial'' (rather error-prone) mistakes in converting between incompatible units. There have been such accidents in space missions, medical devices, etc., rendering modelling or simulation experiments unusable or unsafe. We address this problem by providing a \textbf{SAFE}-ISQ VDM-library that is: Simple, Accurate, Fast, and Effective. It extends an ecosystem of other VDM mathematical toolkit extensions, which include a translation and proof environment for VDM in Isabelle at https://github.com/leouk/VDM_Toolkit.
2307.08412
Arnab Mukherjee Mr.
Arnab Mukherjee, Souvik Majumdar, Anup Kumar Kolya, Saborni Nandi
A Privacy-Preserving Blockchain-based E-voting System
null
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
Within a modern democratic nation, elections play a significant role in the nation's functioning. However, with the existing infrastructure for conducting elections using Electronic Voting Systems (EVMs), many loopholes exist, which illegitimate entities might leverage to cast false votes or even tamper with the EVMs after the voting session is complete. The need of the hour is to introduce a robust, auditable, transparent, and tamper-proof e-voting system, enabling a more reliable and fair election process. To address such concerns, we propose a novel solution for blockchain-based e-voting, focusing on the security and privacy aspects of the e-voting process. We consider the security risks and loopholes and aim to preserve the anonymity of the voters while ensuring that illegitimate votes are properly handled. Additionally, we develop a prototype as a proof of concept using the Ethereum blockchain platform. Finally, we perform experiments to demonstrate the performance of the system.
[ { "created": "Mon, 17 Jul 2023 11:48:39 GMT", "version": "v1" } ]
2023-07-18
[ [ "Mukherjee", "Arnab", "" ], [ "Majumdar", "Souvik", "" ], [ "Kolya", "Anup Kumar", "" ], [ "Nandi", "Saborni", "" ] ]
Within a modern democratic nation, elections play a significant role in the nation's functioning. However, with the existing infrastructure for conducting elections using Electronic Voting Systems (EVMs), many loopholes exist, which illegitimate entities might leverage to cast false votes or even tamper with the EVMs after the voting session is complete. The need of the hour is to introduce a robust, auditable, transparent, and tamper-proof e-voting system, enabling a more reliable and fair election process. To address such concerns, we propose a novel solution for blockchain-based e-voting, focusing on the security and privacy aspects of the e-voting process. We consider the security risks and loopholes and aim to preserve the anonymity of the voters while ensuring that illegitimate votes are properly handled. Additionally, we develop a prototype as a proof of concept using the Ethereum blockchain platform. Finally, we perform experiments to demonstrate the performance of the system.
2304.00627
Felicitas H\"ormann
Felicitas H\"ormann and Hannes Bartz and Anna-Lena Horlemann
Distinguishing and Recovering Generalized Linearized Reed-Solomon Codes
20 pages, published in the proceedings of CBCrypto 2022
null
10.1007/978-3-031-29689-5_1
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the distinguishability of linearized Reed-Solomon (LRS) codes by defining and analyzing analogs of the square-code and the Overbeck distinguisher for classical Reed-Solomon and Gabidulin codes, respectively. Our main results show that the square-code distinguisher works for generalized linearized Reed-Solomon (GLRS) codes defined with the trivial automorphism, whereas the Overbeck-type distinguisher can handle LRS codes in the general setting. We further show how to recover defining code parameters from any generator matrix of such codes in the zero-derivation case. For other choices of automorphisms and derivations, simulations indicate that these distinguishers and recovery algorithms do not work. The corresponding LRS and GLRS codes might hence be of interest for code-based cryptography.
[ { "created": "Sun, 2 Apr 2023 20:58:50 GMT", "version": "v1" } ]
2023-04-04
[ [ "Hörmann", "Felicitas", "" ], [ "Bartz", "Hannes", "" ], [ "Horlemann", "Anna-Lena", "" ] ]
We study the distinguishability of linearized Reed-Solomon (LRS) codes by defining and analyzing analogs of the square-code and the Overbeck distinguisher for classical Reed-Solomon and Gabidulin codes, respectively. Our main results show that the square-code distinguisher works for generalized linearized Reed-Solomon (GLRS) codes defined with the trivial automorphism, whereas the Overbeck-type distinguisher can handle LRS codes in the general setting. We further show how to recover defining code parameters from any generator matrix of such codes in the zero-derivation case. For other choices of automorphisms and derivations, simulations indicate that these distinguishers and recovery algorithms do not work. The corresponding LRS and GLRS codes might hence be of interest for code-based cryptography.
2010.04072
Giulia Orr\`u
Giulia Orr\`u, Marco Micheletto, Julian Fierrez, Gian Luca Marcialis
Are Adaptive Face Recognition Systems still Necessary? Experiments on the APE Dataset
Preprint version of a paper accepted at IPAS 2020 (Fourth IEEE International Conference on Image Processing, Applications and Systems)
2020 IEEE 4th International Conference on Image Processing, Applications and Systems (IPAS), Genova, Italy, 2020, pp. 77-82
10.1109/IPAS50080.2020.9334946
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last five years, deep learning methods, in particular CNN, have attracted considerable attention in the field of face-based recognition, achieving impressive results. Despite this progress, it is not yet clear precisely to what extent deep features are able to follow all the intra-class variations that the face can present over time. In this paper we investigate the performance improvement of face recognition systems by adopting self-updating strategies of the face templates. For that purpose, we evaluate the performance of a well-known deep-learning face representation, namely, FaceNet, on a dataset that we generated, explicitly conceived to embed intra-class variations of users on a large time span of captures: the APhotoEveryday (APE) dataset. Moreover, we compare these deep features with handcrafted features extracted using the BSIF algorithm. In both cases, we evaluate various template update strategies, in order to detect the most useful for such kind of features. Experimental results show the effectiveness of "optimized" self-update methods with respect to systems without update or random selection of templates.
[ { "created": "Thu, 8 Oct 2020 15:45:55 GMT", "version": "v1" }, { "created": "Sat, 17 Oct 2020 14:36:11 GMT", "version": "v2" } ]
2021-02-04
[ [ "Orrù", "Giulia", "" ], [ "Micheletto", "Marco", "" ], [ "Fierrez", "Julian", "" ], [ "Marcialis", "Gian Luca", "" ] ]
In the last five years, deep learning methods, in particular CNNs, have attracted considerable attention in the field of face-based recognition, achieving impressive results. Despite this progress, it is not yet clear precisely to what extent deep features are able to follow all the intra-class variations that the face can present over time. In this paper we investigate the performance improvement of face recognition systems by adopting self-updating strategies for the face templates. For that purpose, we evaluate the performance of a well-known deep-learning face representation, namely FaceNet, on a dataset that we generated, explicitly conceived to embed intra-class variations of users over a large time span of captures: the APhotoEveryday (APE) dataset. Moreover, we compare these deep features with handcrafted features extracted using the BSIF algorithm. In both cases, we evaluate various template update strategies in order to detect the most useful for such kinds of features. Experimental results show the effectiveness of "optimized" self-update methods with respect to systems without update or with random selection of templates.
2310.01916
Huayu Guo
Huayu Guo, Dongheng Chen, and Bruno Bentzen
Verified completeness in Henkin-style for intuitionistic propositional logic
null
Joint proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law, pp.36-48, 2023
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
This paper presents a formalization of the classical proof of completeness in Henkin-style developed by Troelstra and van Dalen for intuitionistic logic with respect to Kripke models. The completeness proof incorporates their insights in a fresh and elegant manner that is better suited for mechanization. We discuss details of our implementation in the Lean theorem prover with emphasis on the prime extension lemma and construction of the canonical model. Our implementation is restricted to a system of intuitionistic propositional logic with implication, conjunction, disjunction, and falsity given in terms of a Hilbert-style axiomatization. As far as we know, our implementation is the first verified Henkin-style proof of completeness for intuitionistic logic following Troelstra and van Dalen's method in the literature. The full source code can be found online at https://github.com/bbentzen/ipl.
[ { "created": "Tue, 3 Oct 2023 09:45:43 GMT", "version": "v1" } ]
2023-10-04
[ [ "Guo", "Huayu", "" ], [ "Chen", "Dongheng", "" ], [ "Bentzen", "Bruno", "" ] ]
This paper presents a formalization of the classical proof of completeness in Henkin-style developed by Troelstra and van Dalen for intuitionistic logic with respect to Kripke models. The completeness proof incorporates their insights in a fresh and elegant manner that is better suited for mechanization. We discuss details of our implementation in the Lean theorem prover with emphasis on the prime extension lemma and construction of the canonical model. Our implementation is restricted to a system of intuitionistic propositional logic with implication, conjunction, disjunction, and falsity given in terms of a Hilbert-style axiomatization. As far as we know, our implementation is the first verified Henkin-style proof of completeness for intuitionistic logic following Troelstra and van Dalen's method in the literature. The full source code can be found online at https://github.com/bbentzen/ipl.
1405.5206
Hao Li
Xiaohui Huang, Xing Hu, Weichang Jiang, Zhi Yang, Hao Li
Application of Multilayer Feedforward Neural Networks in Predicting Tree Height and Forest Stock Volume of Chinese Fir
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by-nc-sa/3.0/
Wood increment is critical information in forestry management. Previous studies used mathematical models to describe the complex growing patterns of forest stands, in order to determine the dynamic status of a growing forest stand under multiple conditions. In our research, we aimed at studying non-linear relationships to establish precise and robust Artificial Neural Network (ANN) models to predict the values of tree height and forest stock volume based on data of Chinese fir. Results show that the Multilayer Feedforward Neural Network with 4 nodes (MLFN-4) can predict tree height with the lowest RMS error (1.77), and the Multilayer Feedforward Neural Network with 7 nodes (MLFN-7) can predict forest stock volume with the lowest RMS error (4.95). The training and testing process has proved that our models are precise and robust.
[ { "created": "Tue, 20 May 2014 19:52:43 GMT", "version": "v1" } ]
2014-05-21
[ [ "Huang", "Xiaohui", "" ], [ "Hu", "Xing", "" ], [ "Jiang", "Weichang", "" ], [ "Yang", "Zhi", "" ], [ "Li", "Hao", "" ] ]
Wood increment is critical information in forestry management. Previous studies used mathematical models to describe the complex growing patterns of forest stands, in order to determine the dynamic status of a growing forest stand under multiple conditions. In our research, we aimed at studying non-linear relationships to establish precise and robust Artificial Neural Network (ANN) models to predict the values of tree height and forest stock volume based on data of Chinese fir. Results show that the Multilayer Feedforward Neural Network with 4 nodes (MLFN-4) can predict tree height with the lowest RMS error (1.77), and the Multilayer Feedforward Neural Network with 7 nodes (MLFN-7) can predict forest stock volume with the lowest RMS error (4.95). The training and testing process has proved that our models are precise and robust.
2404.17508
Matthew England Dr
Dorian Florescu and Matthew England
Constrained Neural Networks for Interpretable Heuristic Creation to Optimise Computer Algebra Systems
Accepted for presentation at ICMS 2024
null
null
null
cs.SC cs.LG
http://creativecommons.org/licenses/by/4.0/
We present a new methodology for utilising machine learning technology in symbolic computation research. We explain how a well-known human-designed heuristic for the choice of variable ordering in cylindrical algebraic decomposition may be represented as a constrained neural network. This allows us to then use machine learning methods to further optimise the heuristic, leading to new networks of similar size, representing new heuristics of similar complexity to the original human-designed one. We present this as a form of ante-hoc explainability for use in computer algebra development.
[ { "created": "Fri, 26 Apr 2024 16:20:04 GMT", "version": "v1" } ]
2024-04-29
[ [ "Florescu", "Dorian", "" ], [ "England", "Matthew", "" ] ]
We present a new methodology for utilising machine learning technology in symbolic computation research. We explain how a well-known human-designed heuristic for the choice of variable ordering in cylindrical algebraic decomposition may be represented as a constrained neural network. This allows us to then use machine learning methods to further optimise the heuristic, leading to new networks of similar size, representing new heuristics of similar complexity to the original human-designed one. We present this as a form of ante-hoc explainability for use in computer algebra development.
1911.06791
Francesco Quinzan
Vanja Dosko\v{c} and Tobias Friedrich and Andreas G\"obel and Frank Neumann and Aneta Neumann and Francesco Quinzan
Non-Monotone Submodular Maximization with Multiple Knapsacks in Static and Dynamic Settings
null
null
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of maximizing a non-monotone submodular function under multiple knapsack constraints. We propose a simple discrete greedy algorithm to approach this problem, and prove that it yields strong approximation guarantees for functions with bounded curvature. In contrast to other heuristics, this requires no problem relaxation to continuous domains and it maintains a constant-factor approximation guarantee in the problem size. In the case of a single knapsack, our analysis suggests that the standard greedy can be used in non-monotone settings. Additionally, we study this problem in a dynamic setting, in which knapsacks change during the optimization process. We modify our greedy algorithm to avoid a complete restart at each constraint update. This modification retains the approximation guarantees of the static case. We evaluate our results experimentally on a video summarization and sensor placement task. We show that our proposed algorithm competes with the state-of-the-art in static settings. Furthermore, we show that in dynamic settings with a tight computational time budget, our modified greedy yields significant improvements over starting the greedy from scratch, in terms of the solution quality achieved.
[ { "created": "Fri, 15 Nov 2019 18:22:46 GMT", "version": "v1" }, { "created": "Mon, 18 Nov 2019 20:20:10 GMT", "version": "v2" }, { "created": "Tue, 18 Feb 2020 10:55:31 GMT", "version": "v3" } ]
2020-02-19
[ [ "Doskoč", "Vanja", "" ], [ "Friedrich", "Tobias", "" ], [ "Göbel", "Andreas", "" ], [ "Neumann", "Frank", "" ], [ "Neumann", "Aneta", "" ], [ "Quinzan", "Francesco", "" ] ]
We study the problem of maximizing a non-monotone submodular function under multiple knapsack constraints. We propose a simple discrete greedy algorithm to approach this problem, and prove that it yields strong approximation guarantees for functions with bounded curvature. In contrast to other heuristics, this requires no problem relaxation to continuous domains and it maintains a constant-factor approximation guarantee in the problem size. In the case of a single knapsack, our analysis suggests that the standard greedy can be used in non-monotone settings. Additionally, we study this problem in a dynamic setting, in which knapsacks change during the optimization process. We modify our greedy algorithm to avoid a complete restart at each constraint update. This modification retains the approximation guarantees of the static case. We evaluate our results experimentally on a video summarization and sensor placement task. We show that our proposed algorithm competes with the state-of-the-art in static settings. Furthermore, we show that in dynamic settings with a tight computational time budget, our modified greedy yields significant improvements over starting the greedy from scratch, in terms of the solution quality achieved.
2307.11332
Reza Sameni
Reza Sameni
Beyond Convergence: Identifiability of Machine Learning and Deep Learning Models
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning (ML) and deep learning models are extensively used for parameter optimization and regression problems. However, not all inverse problems in ML are ``identifiable,'' indicating that model parameters may not be uniquely determined from the available data and the data model's input-output relationship. In this study, we investigate the notion of model parameter identifiability through a case study focused on parameter estimation from motion sensor data. Utilizing a bipedal-spring mass human walk dynamics model, we generate synthetic data representing diverse gait patterns and conditions. Employing a deep neural network, we attempt to estimate subject-wise parameters, including mass, stiffness, and equilibrium leg length. The results show that while certain parameters can be identified from the observation data, others remain unidentifiable, highlighting that unidentifiability is an intrinsic limitation of the experimental setup, necessitating a change in data collection and experimental scenarios. Beyond this specific case study, the concept of identifiability has broader implications in ML and deep learning. Addressing unidentifiability requires proven identifiable models (with theoretical support), multimodal data fusion techniques, and advancements in model-based machine learning. Understanding and resolving unidentifiability challenges will lead to more reliable and accurate applications across diverse domains, transcending mere model convergence and enhancing the reliability of machine learning models.
[ { "created": "Fri, 21 Jul 2023 03:40:53 GMT", "version": "v1" } ]
2023-07-24
[ [ "Sameni", "Reza", "" ] ]
Machine learning (ML) and deep learning models are extensively used for parameter optimization and regression problems. However, not all inverse problems in ML are ``identifiable,'' indicating that model parameters may not be uniquely determined from the available data and the data model's input-output relationship. In this study, we investigate the notion of model parameter identifiability through a case study focused on parameter estimation from motion sensor data. Utilizing a bipedal-spring mass human walk dynamics model, we generate synthetic data representing diverse gait patterns and conditions. Employing a deep neural network, we attempt to estimate subject-wise parameters, including mass, stiffness, and equilibrium leg length. The results show that while certain parameters can be identified from the observation data, others remain unidentifiable, highlighting that unidentifiability is an intrinsic limitation of the experimental setup, necessitating a change in data collection and experimental scenarios. Beyond this specific case study, the concept of identifiability has broader implications in ML and deep learning. Addressing unidentifiability requires proven identifiable models (with theoretical support), multimodal data fusion techniques, and advancements in model-based machine learning. Understanding and resolving unidentifiability challenges will lead to more reliable and accurate applications across diverse domains, transcending mere model convergence and enhancing the reliability of machine learning models.
2408.07408
Fabian Egidy
Fabian Egidy and Christian Gla{\ss}er
Oracle without Optimal Proof Systems outside Nondeterministic Subexponential Time
This version presents preliminary results. The findings and methods described herein are part of ongoing research and are subject to revision. As such, this document is a Work in Progress
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the existence of optimal proof systems for sets outside of $\mathrm{NP}$. Currently, no set $L \notin \mathrm{NP}$ is known that has optimal proof systems. Our main result shows that this is not surprising, because we can rule out relativizable proofs of optimality for all sets outside $\mathrm{NTIME}(t)$ where $t$ is slightly superpolynomial. We construct an oracle $O$, such that for any set $L \subseteq \Sigma^*$ at least one of the following two properties holds: $L$ does not have optimal proof systems relative to $O$. $L \in \mathrm{UTIME}^O(2^{2(\log n)^{8+4\log(\log(\log(n)))}})$. The runtime bound is slightly superpolynomial. So there is no relativizable proof showing that a complex set has optimal proof systems. Hence, searching for non-trivial optimal proof systems with relativizable methods can only be successful (if at all) in a narrow range above $\mathrm{NP}$.
[ { "created": "Wed, 14 Aug 2024 09:25:29 GMT", "version": "v1" } ]
2024-08-15
[ [ "Egidy", "Fabian", "" ], [ "Glaßer", "Christian", "" ] ]
We study the existence of optimal proof systems for sets outside of $\mathrm{NP}$. Currently, no set $L \notin \mathrm{NP}$ is known that has optimal proof systems. Our main result shows that this is not surprising, because we can rule out relativizable proofs of optimality for all sets outside $\mathrm{NTIME}(t)$ where $t$ is slightly superpolynomial. We construct an oracle $O$, such that for any set $L \subseteq \Sigma^*$ at least one of the following two properties holds: $L$ does not have optimal proof systems relative to $O$. $L \in \mathrm{UTIME}^O(2^{2(\log n)^{8+4\log(\log(\log(n)))}})$. The runtime bound is slightly superpolynomial. So there is no relativizable proof showing that a complex set has optimal proof systems. Hence, searching for non-trivial optimal proof systems with relativizable methods can only be successful (if at all) in a narrow range above $\mathrm{NP}$.
1008.5325
Danny Bickson
Danny Bickson and Carlos Guestrin
Inference with Multivariate Heavy-Tails in Linear Models
In Neural Information Processing System (NIPS) 2010, Dec. 2010, Vancouver, Canada
null
null
null
cs.LG cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heavy-tailed distributions naturally occur in many real life problems. Unfortunately, it is typically not possible to compute inference in closed-form in graphical models which involve such heavy-tailed distributions. In this work, we propose a novel simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family of distributions which is a generalization of the Cauchy, L\'evy and Gaussian distributions, we show for the first time how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact, LCMs are always defined for any random variables (discrete, continuous or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Another potential application is iterative decoding of linear channels with non-Gaussian noise.
[ { "created": "Tue, 31 Aug 2010 14:31:57 GMT", "version": "v1" }, { "created": "Fri, 5 Nov 2010 15:26:53 GMT", "version": "v2" }, { "created": "Mon, 8 Nov 2010 16:14:02 GMT", "version": "v3" }, { "created": "Mon, 21 Mar 2011 15:54:54 GMT", "version": "v4" } ]
2011-03-22
[ [ "Bickson", "Danny", "" ], [ "Guestrin", "Carlos", "" ] ]
Heavy-tailed distributions naturally occur in many real life problems. Unfortunately, it is typically not possible to compute inference in closed-form in graphical models which involve such heavy-tailed distributions. In this work, we propose a novel simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic function domain. Using stable distributions, a heavy-tailed family of distributions which is a generalization of the Cauchy, L\'evy and Gaussian distributions, we show for the first time how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact, LCMs are always defined for any random variables (discrete, continuous or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Another potential application is iterative decoding of linear channels with non-Gaussian noise.
2210.07970
Peter Xenopoulos
Senan Hogan-Hennessy, Peter Xenopoulos, Claudio Silva
Market Interventions in a Large-Scale Virtual Economy
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massively multiplayer online role-playing games often contain sophisticated in-game economies. Many important real-world economic phenomena, such as inflation, economic growth, and business cycles, are also present in these virtual economies. One major difference between real-world and virtual economies is the ease and frequency by which a policymaker, in this case, a game developer, can introduce economic shocks. These economic shocks, typically implemented with game updates or signaled through community channels, provide fertile ground to study the effects of economic interventions on markets. In this work, we study the effect of in-game economic market interventions, namely, a transaction tax and an item sink, in Old School RuneScape. Using causal inference methods, we find that the tax did not meaningfully affect the trading volume of items at the tax boundaries and that the item sink contributed to the inflation of luxury good prices, without reducing trade volume. Furthermore, we find evidence that the illicit gold trading market was relatively unaffected by the implemented market interventions. Our findings yield useful insights not only into the effect of market interventions in virtual economies but also for real-world markets.
[ { "created": "Fri, 14 Oct 2022 17:08:29 GMT", "version": "v1" } ]
2022-10-17
[ [ "Hogan-Hennessy", "Senan", "" ], [ "Xenopoulos", "Peter", "" ], [ "Silva", "Claudio", "" ] ]
Massively multiplayer online role-playing games often contain sophisticated in-game economies. Many important real-world economic phenomena, such as inflation, economic growth, and business cycles, are also present in these virtual economies. One major difference between real-world and virtual economies is the ease and frequency by which a policymaker, in this case, a game developer, can introduce economic shocks. These economic shocks, typically implemented with game updates or signaled through community channels, provide fertile ground to study the effects of economic interventions on markets. In this work, we study the effect of in-game economic market interventions, namely, a transaction tax and an item sink, in Old School RuneScape. Using causal inference methods, we find that the tax did not meaningfully affect the trading volume of items at the tax boundaries and that the item sink contributed to the inflation of luxury good prices, without reducing trade volume. Furthermore, we find evidence that the illicit gold trading market was relatively unaffected by the implemented market interventions. Our findings yield useful insights not only into the effect of market interventions in virtual economies but also for real-world markets.
2310.17551
Jing Yao
Xiaoyuan Yi, Jing Yao, Xiting Wang and Xing Xie
Unpacking the Ethical Value Alignment in Big Models
null
null
null
null
cs.CY cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Big models have greatly advanced AI's ability to understand, generate, and manipulate information and content, enabling numerous applications. However, as these models become increasingly integrated into everyday life, their inherent ethical values and potential biases pose unforeseen risks to society. This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models. Taking a normative ethics perspective, we propose a reassessment of recent normative guidelines, highlighting the importance of collaborative efforts in academia to establish a unified and universal AI ethics framework. Furthermore, we investigate the moral inclinations of current mainstream LLMs using Moral Foundation theory, analyze existing alignment algorithms, and outline the unique challenges encountered in aligning ethical values within them. To address these challenges, we introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods, representing an initial step towards the interdisciplinary construction of ethically aligned AI. This paper is a modified English version of our Chinese paper https://crad.ict.ac.cn/cn/article/doi/10.7544/issn1000-1239.202330553, intended to help non-native speakers of Chinese better understand our work.
[ { "created": "Thu, 26 Oct 2023 16:45:40 GMT", "version": "v1" } ]
2023-10-27
[ [ "Yi", "Xiaoyuan", "" ], [ "Yao", "Jing", "" ], [ "Wang", "Xiting", "" ], [ "Xie", "Xing", "" ] ]
Big models have greatly advanced AI's ability to understand, generate, and manipulate information and content, enabling numerous applications. However, as these models become increasingly integrated into everyday life, their inherent ethical values and potential biases pose unforeseen risks to society. This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models. Taking a normative ethics perspective, we propose a reassessment of recent normative guidelines, highlighting the importance of collaborative efforts in academia to establish a unified and universal AI ethics framework. Furthermore, we investigate the moral inclinations of current mainstream LLMs using Moral Foundation theory, analyze existing alignment algorithms, and outline the unique challenges encountered in aligning ethical values within them. To address these challenges, we introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods, representing an initial step towards the interdisciplinary construction of ethically aligned AI. This paper is a modified English version of our Chinese paper https://crad.ict.ac.cn/cn/article/doi/10.7544/issn1000-1239.202330553, intended to help non-native speakers of Chinese better understand our work.
1811.07818
Amarnath R
BV Divyashree, Amarnath R, Naveen M, G Hemantha Kumar
Novel approach to locate region of interest in mammograms for Breast cancer
ROI, breast cancer, mammographic images, segmentation, entropy, quad tree
International Journal of Intelligent Systems and Applications in Engineering.(ISSN:2147-6799) Vol 6, No 3 (2018)
10.18201/ijisae.2018644775
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Locating the region of interest for breast cancer masses in mammographic images is a challenging problem in medical image processing. In this research work, the key idea is to efficiently extract the suspected mass region for further examination. In particular, breast boundary segmentation on a sliced RGB image using a modified intensity-based approach, followed by quad-tree-based division to spot suspicious areas, is proposed in this paper. To evaluate performance, experiments were conducted on the standard DDSM dataset, achieving acceptable accuracy.
[ { "created": "Thu, 1 Nov 2018 11:01:40 GMT", "version": "v1" } ]
2018-11-20
[ [ "Divyashree", "BV", "" ], [ "R", "Amarnath", "" ], [ "M", "Naveen", "" ], [ "Kumar", "G Hemantha", "" ] ]
Locating the region of interest for breast cancer masses in mammographic images is a challenging problem in medical image processing. In this research work, the key idea is to efficiently extract the suspected mass region for further examination. In particular, breast boundary segmentation on a sliced RGB image using a modified intensity-based approach, followed by quad-tree-based division to spot suspicious areas, is proposed in this paper. To evaluate performance, experiments were conducted on the standard DDSM dataset, achieving acceptable accuracy.
1004.3887
Uwe Aickelin
William Wilson, Phil Birkin, Uwe Aickelin
Motif Detection Inspired by Immune Memory
12 pages, 4 figures, (ICARIS2007),
Proceedings of the 6th International Conference on Artificial Immune Systems (ICARIS2007), Lecture Notes in Computer Science 4628, Santos, Brazil, 2007, p 276-287
null
null
cs.AI cs.NE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The search for patterns or motifs in data represents an area of key interest to many researchers. In this paper we present the Motif Tracking Algorithm, a novel immune inspired pattern identification tool that is able to identify variable length unknown motifs which repeat within time series data. The algorithm searches from a completely neutral perspective that is independent of the data being analysed and the underlying motifs. In this paper we test the flexibility of the motif tracking algorithm by applying it to the search for patterns in two industrial data sets. The algorithm is able to identify a population of motifs successfully in both cases, and the value of these motifs is discussed.
[ { "created": "Thu, 22 Apr 2010 10:55:23 GMT", "version": "v1" } ]
2010-07-05
[ [ "Wilson", "William", "" ], [ "Birkin", "Phil", "" ], [ "Aickelin", "Uwe", "" ] ]
The search for patterns or motifs in data represents an area of key interest to many researchers. In this paper we present the Motif Tracking Algorithm, a novel immune inspired pattern identification tool that is able to identify variable length unknown motifs which repeat within time series data. The algorithm searches from a completely neutral perspective that is independent of the data being analysed and the underlying motifs. In this paper we test the flexibility of the motif tracking algorithm by applying it to the search for patterns in two industrial data sets. The algorithm is able to identify a population of motifs successfully in both cases, and the value of these motifs is discussed.
2207.02368
Nurendra Choudhary
Nurendra Choudhary, Nikhil Rao, Karthik Subbian, Chandan K. Reddy
Text Enriched Sparse Hyperbolic Graph Convolutional Networks
Preprint under review. 13 pages, 10 figures, 6 tables
null
null
null
cs.IR cs.LG cs.SI
http://creativecommons.org/licenses/by-sa/4.0/
Heterogeneous networks, which connect informative nodes containing text with different edge types, are routinely used to store and process information in various real-world applications. Graph Neural Networks (GNNs) and their hyperbolic variants provide a promising approach to encode such networks in a low-dimensional latent space through neighborhood aggregation and hierarchical feature extraction, respectively. However, these approaches typically ignore metapath structures and the available semantic information. Furthermore, these approaches are sensitive to the noise present in the training data. To tackle these limitations, in this paper, we propose Text Enriched Sparse Hyperbolic Graph Convolution Network (TESH-GCN) to capture the graph's metapath structures using semantic signals and further improve prediction in large heterogeneous graphs. In TESH-GCN, we extract semantic node information, which successively acts as a connection signal to extract relevant nodes' local neighborhood and graph-level metapath features from the sparse adjacency tensor in a reformulated hyperbolic graph convolution layer. These extracted features in conjunction with semantic features from the language model (for robustness) are used for the final downstream task. Experiments on various heterogeneous graph datasets show that our model outperforms the current state-of-the-art approaches by a large margin on the task of link prediction. We also report a reduction in both the training time and model parameters compared to the existing hyperbolic approaches through a reformulated hyperbolic graph convolution. Furthermore, we illustrate the robustness of our model by experimenting with different levels of simulated noise in both the graph structure and text, and also, present a mechanism to explain TESH-GCN's prediction by analyzing the extracted metapaths.
[ { "created": "Wed, 6 Jul 2022 00:23:35 GMT", "version": "v1" }, { "created": "Thu, 7 Jul 2022 04:58:49 GMT", "version": "v2" } ]
2022-07-08
[ [ "Choudhary", "Nurendra", "" ], [ "Rao", "Nikhil", "" ], [ "Subbian", "Karthik", "" ], [ "Reddy", "Chandan K.", "" ] ]
Heterogeneous networks, which connect informative nodes containing text with different edge types, are routinely used to store and process information in various real-world applications. Graph Neural Networks (GNNs) and their hyperbolic variants provide a promising approach to encode such networks in a low-dimensional latent space through neighborhood aggregation and hierarchical feature extraction, respectively. However, these approaches typically ignore metapath structures and the available semantic information. Furthermore, these approaches are sensitive to the noise present in the training data. To tackle these limitations, in this paper, we propose Text Enriched Sparse Hyperbolic Graph Convolution Network (TESH-GCN) to capture the graph's metapath structures using semantic signals and further improve prediction in large heterogeneous graphs. In TESH-GCN, we extract semantic node information, which successively acts as a connection signal to extract relevant nodes' local neighborhood and graph-level metapath features from the sparse adjacency tensor in a reformulated hyperbolic graph convolution layer. These extracted features in conjunction with semantic features from the language model (for robustness) are used for the final downstream task. Experiments on various heterogeneous graph datasets show that our model outperforms the current state-of-the-art approaches by a large margin on the task of link prediction. We also report a reduction in both the training time and model parameters compared to the existing hyperbolic approaches through a reformulated hyperbolic graph convolution. Furthermore, we illustrate the robustness of our model by experimenting with different levels of simulated noise in both the graph structure and text, and also, present a mechanism to explain TESH-GCN's prediction by analyzing the extracted metapaths.
1508.02774
Thomas M. Breuel
Thomas M. Breuel
Benchmarking of LSTM Networks
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperforms least-squares training, (4) peephole units are not useful, (5) the standard non-linearities (tanh and sigmoid) perform best, (6) bidirectional training combined with CTC performs better than other methods.
[ { "created": "Tue, 11 Aug 2015 23:31:49 GMT", "version": "v1" } ]
2016-10-31
[ [ "Breuel", "Thomas M.", "" ] ]
LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperforms least-squares training, (4) peephole units are not useful, (5) the standard non-linearities (tanh and sigmoid) perform best, (6) bidirectional training combined with CTC performs better than other methods.
0912.3852
Sathish Gopalakrishnan
Sathish Gopalakrishnan
Sharp utilization thresholds for some real-time scheduling problems
null
null
null
null
cs.PF cs.DM cs.OS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold $U_{RM}^{*}$ can be scheduled almost surely and that all workload with utilization greater than $U_{RM}^{*}$ is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
[ { "created": "Sat, 19 Dec 2009 01:18:05 GMT", "version": "v1" } ]
2009-12-22
[ [ "Gopalakrishnan", "Sathish", "" ] ]
Scheduling policies for real-time systems exhibit threshold behavior that is related to the utilization of the task set they schedule, and in some cases this threshold is sharp. For the rate monotonic scheduling policy, we show that periodic workload with utilization less than a threshold $U_{RM}^{*}$ can be scheduled almost surely and that all workload with utilization greater than $U_{RM}^{*}$ is almost surely not schedulable. We study such sharp threshold behavior in the context of processor scheduling using static task priorities, not only for periodic real-time tasks but for aperiodic real-time tasks as well. The notion of a utilization threshold provides a simple schedulability test for most real-time applications. These results improve our understanding of scheduling policies and provide an interesting characterization of the typical behavior of policies. The threshold is sharp (small deviations around the threshold cause schedulability, as a property, to appear or disappear) for most policies; this is a happy consequence that can be used to address the limitations of existing utilization-based tests for schedulability. We demonstrate the use of such an approach for balancing power consumption with the need to meet deadlines in web servers.
2303.08463
Yizhe Wang
Congqi Cao, Yizhe Wang, Yue Lu, Xin Zhang and Yanning Zhang
Co-Occurrence Matters: Learning Action Relation for Temporal Action Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal action localization (TAL) is a prevailing task due to its great application potential. Existing works in this field mainly suffer from two weaknesses: (1) They often neglect the multi-label case and only focus on temporal modeling. (2) They ignore the semantic information in class labels and only use the visual information. To solve these problems, we propose a novel Co-Occurrence Relation Module (CORM) that explicitly models the co-occurrence relationship between actions. Besides the visual information, it further utilizes the semantic embeddings of class labels to model the co-occurrence relationship. The CORM works in a plug-and-play manner and can be easily incorporated with the existing sequence models. By considering both visual and semantic co-occurrence, our method achieves high multi-label relationship modeling capacity. Meanwhile, existing datasets in TAL always focus on low-semantic atomic actions. Thus we construct a challenging multi-label dataset UCF-Crime-TAL that focuses on high-semantic actions by annotating the UCF-Crime dataset at frame level and considering the semantic overlap of different events. Extensive experiments on two commonly used TAL datasets, \textit{i.e.}, MultiTHUMOS and TSU, and our newly proposed UCF-Crime-TAL demonstrate the effectiveness of the proposed CORM, which achieves state-of-the-art performance on these datasets.
[ { "created": "Wed, 15 Mar 2023 09:07:04 GMT", "version": "v1" } ]
2023-03-16
[ [ "Cao", "Congqi", "" ], [ "Wang", "Yizhe", "" ], [ "Lu", "Yue", "" ], [ "Zhang", "Xin", "" ], [ "Zhang", "Yanning", "" ] ]
Temporal action localization (TAL) is a prevailing task due to its great application potential. Existing works in this field mainly suffer from two weaknesses: (1) They often neglect the multi-label case and only focus on temporal modeling. (2) They ignore the semantic information in class labels and only use the visual information. To solve these problems, we propose a novel Co-Occurrence Relation Module (CORM) that explicitly models the co-occurrence relationship between actions. Besides the visual information, it further utilizes the semantic embeddings of class labels to model the co-occurrence relationship. The CORM works in a plug-and-play manner and can be easily incorporated with the existing sequence models. By considering both visual and semantic co-occurrence, our method achieves high multi-label relationship modeling capacity. Meanwhile, existing datasets in TAL always focus on low-semantic atomic actions. Thus we construct a challenging multi-label dataset UCF-Crime-TAL that focuses on high-semantic actions by annotating the UCF-Crime dataset at frame level and considering the semantic overlap of different events. Extensive experiments on two commonly used TAL datasets, \textit{i.e.}, MultiTHUMOS and TSU, and our newly proposed UCF-Crime-TAL demonstrate the effectiveness of the proposed CORM, which achieves state-of-the-art performance on these datasets.
2307.14068
Bo Zhou
Long Liu, Bo Zhou, Zhipeng Zhao, Zening Liu
Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain. While recent MUDA methods have shown promising results, most focus on aligning the overall feature distributions across source domains, which can lead to negative effects due to redundant features within each domain. Moreover, there is a significant performance gap between MUDA and supervised methods. To address these challenges, we propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA). Firstly, we establish a multi-source dynamic modulation mechanism during the training process based on the degree of distribution differences between source and target domains. This mechanism controls the alignment level of features between each source domain and the target domain, effectively leveraging the local advantageous feature information within the source domains. Additionally, we propose a Multi-source Active Boundary Sample Selection (MABS) strategy, which utilizes a guided dynamic boundary loss to design an efficient query function for selecting important samples. This strategy achieves improved generalization to the target domain with minimal sampling costs. We extensively evaluate our proposed method on commonly used domain adaptation datasets, comparing it against existing UDA and ADA methods. The experimental results unequivocally demonstrate the superiority of our approach.
[ { "created": "Wed, 26 Jul 2023 09:40:19 GMT", "version": "v1" } ]
2023-07-27
[ [ "Liu", "Long", "" ], [ "Zhou", "Bo", "" ], [ "Zhao", "Zhipeng", "" ], [ "Liu", "Zening", "" ] ]
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain. While recent MUDA methods have shown promising results, most focus on aligning the overall feature distributions across source domains, which can lead to negative effects due to redundant features within each domain. Moreover, there is a significant performance gap between MUDA and supervised methods. To address these challenges, we propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA). Firstly, we establish a multi-source dynamic modulation mechanism during the training process based on the degree of distribution differences between source and target domains. This mechanism controls the alignment level of features between each source domain and the target domain, effectively leveraging the local advantageous feature information within the source domains. Additionally, we propose a Multi-source Active Boundary Sample Selection (MABS) strategy, which utilizes a guided dynamic boundary loss to design an efficient query function for selecting important samples. This strategy achieves improved generalization to the target domain with minimal sampling costs. We extensively evaluate our proposed method on commonly used domain adaptation datasets, comparing it against existing UDA and ADA methods. The experimental results unequivocally demonstrate the superiority of our approach.
cs/0410061
Vincenzo Pallotta
Vincenzo Pallotta, Hatem Ghorbel, Patrick Ruch, Giovanni Coray
An argumentative annotation schema for meeting discussions
4 pages
Proceedings of the LREC 2004 international conference, 26-28 May 2004, Lisbon, Portugal. Pages 1003-1006
null
null
cs.CL cs.DL cs.IR
null
In this article, we are interested in the annotation of transcriptions of human-human dialogue taken from meeting records. We first propose a meeting content model where conversational acts are interpreted with respect to their argumentative force and their role in building the argumentative structure of the meeting discussion. Argumentation in dialogue describes the way participants take part in the discussion and argue their standpoints. Then, we propose an annotation scheme based on such an argumentative dialogue model as well as the evaluation of its adequacy. The obtained higher-level semantic annotations are exploited in the conceptual indexing of the information contained in meeting discussions.
[ { "created": "Mon, 25 Oct 2004 01:38:07 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pallotta", "Vincenzo", "" ], [ "Ghorbel", "Hatem", "" ], [ "Ruch", "Patrick", "" ], [ "Coray", "Giovanni", "" ] ]
In this article, we are interested in the annotation of transcriptions of human-human dialogue taken from meeting records. We first propose a meeting content model where conversational acts are interpreted with respect to their argumentative force and their role in building the argumentative structure of the meeting discussion. Argumentation in dialogue describes the way participants take part in the discussion and argue their standpoints. Then, we propose an annotation scheme based on such an argumentative dialogue model as well as the evaluation of its adequacy. The obtained higher-level semantic annotations are exploited in the conceptual indexing of the information contained in meeting discussions.
2110.02848
Awni Hannun
Shubho Sengupta, Vineel Pratap, Awni Hannun
Parallel Composition of Weighted Finite-State Transducers
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finite-state transducers (FSTs) are frequently used in speech recognition. Transducer composition is an essential operation for combining different sources of information at different granularities. However, composition is also one of the more computationally expensive operations. Due to the heterogeneous structure of FSTs, parallel algorithms for composition are suboptimal in efficiency, generality, or both. We propose an algorithm for parallel composition and implement it on graphics processing units. We benchmark our parallel algorithm on the composition of random graphs and the composition of graphs commonly used in speech recognition. The parallel composition scales better with the size of the input graphs and for large graphs can be as much as 10 to 30 times faster than a sequential CPU algorithm.
[ { "created": "Wed, 6 Oct 2021 15:19:00 GMT", "version": "v1" } ]
2021-10-07
[ [ "Sengupta", "Shubho", "" ], [ "Pratap", "Vineel", "" ], [ "Hannun", "Awni", "" ] ]
Finite-state transducers (FSTs) are frequently used in speech recognition. Transducer composition is an essential operation for combining different sources of information at different granularities. However, composition is also one of the more computationally expensive operations. Due to the heterogeneous structure of FSTs, parallel algorithms for composition are suboptimal in efficiency, generality, or both. We propose an algorithm for parallel composition and implement it on graphics processing units. We benchmark our parallel algorithm on the composition of random graphs and the composition of graphs commonly used in speech recognition. The parallel composition scales better with the size of the input graphs and for large graphs can be as much as 10 to 30 times faster than a sequential CPU algorithm.
2209.10073
Anqi Zhu
Anqi Zhu, Qiuhong Ke, Mingming Gong and James Bailey
Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Skeleton-based action recognition receives increasing attention because the skeleton representations reduce the amount of training data by eliminating visual information irrelevant to actions. To further improve the sample efficiency, meta-learning-based one-shot learning solutions were developed for skeleton-based action recognition. These methods find the nearest neighbor according to the similarity between instance-level global average embedding. However, such measurement holds unstable representativity due to inadequate generalized learning on local invariant and noisy features, while intuitively, more fine-grained recognition usually relies on determining key local body movements. To address this limitation, we present the Adaptive Local-Component-aware Graph Convolutional Network, which replaces the comparison metric with a focused sum of similarity measurements on aligned local embedding of action-critical spatial/temporal segments. Comprehensive one-shot experiments on the public benchmark of NTU-RGB+D 120 indicate that our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
[ { "created": "Wed, 21 Sep 2022 02:33:07 GMT", "version": "v1" } ]
2022-09-22
[ [ "Zhu", "Anqi", "" ], [ "Ke", "Qiuhong", "" ], [ "Gong", "Mingming", "" ], [ "Bailey", "James", "" ] ]
Skeleton-based action recognition receives increasing attention because the skeleton representations reduce the amount of training data by eliminating visual information irrelevant to actions. To further improve the sample efficiency, meta-learning-based one-shot learning solutions were developed for skeleton-based action recognition. These methods find the nearest neighbor according to the similarity between instance-level global average embedding. However, such measurement holds unstable representativity due to inadequate generalized learning on local invariant and noisy features, while intuitively, more fine-grained recognition usually relies on determining key local body movements. To address this limitation, we present the Adaptive Local-Component-aware Graph Convolutional Network, which replaces the comparison metric with a focused sum of similarity measurements on aligned local embedding of action-critical spatial/temporal segments. Comprehensive one-shot experiments on the public benchmark of NTU-RGB+D 120 indicate that our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
1711.09012
Ekram Hossain
Shermila Ranadheera, Setareh Maghsudi, and Ekram Hossain
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
null
null
null
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the ever-increasing popularity of resource-hungry and delay-constrained mobile applications, the computation and storage capabilities of the remote cloud have partially migrated towards the mobile edge, giving rise to the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy the close proximity to the end-users to provide services at reduced latency and lower energy costs, they suffer from limitations in computational and radio resources, which calls for fair and efficient resource management in the MEC servers. The problem is however challenging due to the ultra-high density, distributed nature, and intrinsic randomness of next generation wireless networks. In this article, we focus on the application of game theory and reinforcement learning for efficient distributed resource management in MEC, in particular, for computation offloading. We briefly review the cutting-edge research and discuss future challenges. Furthermore, we develop a game-theoretical model for energy-efficient distributed edge server activation and study several learning techniques. Numerical results are provided to illustrate the performance of these distributed learning techniques. Also, open research issues in the context of resource management in MEC servers are discussed.
[ { "created": "Mon, 20 Nov 2017 04:01:18 GMT", "version": "v1" } ]
2017-11-27
[ [ "Ranadheera", "Shermila", "" ], [ "Maghsudi", "Setareh", "" ], [ "Hossain", "Ekram", "" ] ]
Due to the ever-increasing popularity of resource-hungry and delay-constrained mobile applications, the computation and storage capabilities of the remote cloud have partially migrated towards the mobile edge, giving rise to the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy the close proximity to the end-users to provide services at reduced latency and lower energy costs, they suffer from limitations in computational and radio resources, which calls for fair and efficient resource management in the MEC servers. The problem is however challenging due to the ultra-high density, distributed nature, and intrinsic randomness of next generation wireless networks. In this article, we focus on the application of game theory and reinforcement learning for efficient distributed resource management in MEC, in particular, for computation offloading. We briefly review the cutting-edge research and discuss future challenges. Furthermore, we develop a game-theoretical model for energy-efficient distributed edge server activation and study several learning techniques. Numerical results are provided to illustrate the performance of these distributed learning techniques. Also, open research issues in the context of resource management in MEC servers are discussed.
2004.13843
Ram G Athreya
Ram G Athreya, Srividya Bansal, Axel-Cyrille Ngonga Ngomo, Ricardo Usbeck
Template-based Question Answering using Recursive Neural Networks
null
null
null
null
cs.CL cs.DB cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a neural network-based approach to automatically learn and classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering that can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The LC-QuAD queries are annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the 7th Question Answering Over Linked Data (QALD-7) dataset. The recursive neural network achieves template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates were considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset.
[ { "created": "Fri, 3 Apr 2020 18:14:39 GMT", "version": "v1" }, { "created": "Sun, 7 Jun 2020 00:26:26 GMT", "version": "v2" }, { "created": "Tue, 9 Jun 2020 01:41:26 GMT", "version": "v3" } ]
2020-06-11
[ [ "Athreya", "Ram G", "" ], [ "Bansal", "Srividya", "" ], [ "Ngomo", "Axel-Cyrille Ngonga", "" ], [ "Usbeck", "Ricardo", "" ] ]
We propose a neural network-based approach to automatically learn and classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering that can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The LC-QuAD queries are annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the 7th Question Answering Over Linked Data (QALD-7) dataset. The recursive neural network achieves template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates were considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset.
2112.04368
Sahan Bulathwela
Sahan Bulathwela, Mar\'ia P\'erez-Ortiz, Emine Yilmaz, John Shawe-Taylor
Semantic TrueLearn: Using Semantic Knowledge Graphs in Recommendation Systems
Presented at the First International Workshop on Joint Use of Probabilistic Graphical Models and Ontology at Conference on Knowledge Graph and Semantic Web 2021
null
null
null
cs.IR cs.AI cs.CY stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas. This work aims to advance towards building a state-aware educational recommendation system that incorporates semantic relatedness between knowledge topics, propagating latent information across semantically related topics. We introduce a novel learner model that exploits this semantic relatedness between knowledge components in learning resources using the Wikipedia link graph, with the aim to better predict learner engagement and latent knowledge in a lifelong learning scenario. In this sense, Semantic TrueLearn builds a humanly intuitive knowledge representation while leveraging Bayesian machine learning to improve the predictive performance of educational engagement. Our experiments with a large dataset demonstrate that this new semantic version of the TrueLearn algorithm achieves statistically significant improvements in terms of predictive performance with a simple extension that adds semantic awareness to the model.
[ { "created": "Wed, 8 Dec 2021 16:23:27 GMT", "version": "v1" } ]
2021-12-09
[ [ "Bulathwela", "Sahan", "" ], [ "Pérez-Ortiz", "María", "" ], [ "Yilmaz", "Emine", "" ], [ "Shawe-Taylor", "John", "" ] ]
In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas. This work aims to advance towards building a state-aware educational recommendation system that incorporates semantic relatedness between knowledge topics, propagating latent information across semantically related topics. We introduce a novel learner model that exploits this semantic relatedness between knowledge components in learning resources using the Wikipedia link graph, with the aim to better predict learner engagement and latent knowledge in a lifelong learning scenario. In this sense, Semantic TrueLearn builds a humanly intuitive knowledge representation while leveraging Bayesian machine learning to improve the predictive performance of educational engagement. Our experiments with a large dataset demonstrate that this new semantic version of the TrueLearn algorithm achieves statistically significant improvements in terms of predictive performance with a simple extension that adds semantic awareness to the model.
2305.09857
Vipul Raheja
Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
CoEdIT: Text Editing by Task-Specific Instruction Tuning
Accepted to EMNLP 2023 (Findings). 18 pages, 13 tables, 2 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce CoEdIT, a state-of-the-art text editing system for writing assistance. CoEdIT takes instructions from the user specifying the attributes of the desired text, such as "Make the sentence simpler" or "Write it in a more neutral style," and outputs the edited text. We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions). Our model (1) achieves state-of-the-art performance on various text editing benchmarks, (2) is competitive with publicly available largest-sized LLMs trained on instructions while being nearly 60x smaller, (3) is capable of generalizing to unseen edit instructions, and (4) exhibits abilities to generalize to composite instructions containing different combinations of edit actions. Through extensive qualitative and quantitative analysis, we show that writers prefer the edits suggested by CoEdIT relative to other state-of-the-art text editing models. Our code, data, and models are publicly available at https://github.com/vipulraheja/coedit.
[ { "created": "Wed, 17 May 2023 00:05:24 GMT", "version": "v1" }, { "created": "Mon, 23 Oct 2023 23:17:13 GMT", "version": "v2" } ]
2023-10-25
[ [ "Raheja", "Vipul", "" ], [ "Kumar", "Dhruv", "" ], [ "Koo", "Ryan", "" ], [ "Kang", "Dongyeop", "" ] ]
We introduce CoEdIT, a state-of-the-art text editing system for writing assistance. CoEdIT takes instructions from the user specifying the attributes of the desired text, such as "Make the sentence simpler" or "Write it in a more neutral style," and outputs the edited text. We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions). Our model (1) achieves state-of-the-art performance on various text editing benchmarks, (2) is competitive with publicly available largest-sized LLMs trained on instructions while being nearly 60x smaller, (3) is capable of generalizing to unseen edit instructions, and (4) exhibits abilities to generalize to composite instructions containing different combinations of edit actions. Through extensive qualitative and quantitative analysis, we show that writers prefer the edits suggested by CoEdIT relative to other state-of-the-art text editing models. Our code, data, and models are publicly available at https://github.com/vipulraheja/coedit.
2009.13905
Jacopo Amidei
Jacopo Amidei
Aligning Intraobserver Agreement by Transitivity
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Annotation reproducibility and accuracy rely on good consistency within annotators. We propose a novel method for measuring within annotator consistency or annotator Intraobserver Agreement (IA). The proposed approach is based on transitivity, a measure that has been thoroughly studied in the context of rational decision-making. The transitivity measure, in contrast with the commonly used test-retest strategy for annotator IA, is less sensitive to the several types of bias introduced by the test-retest strategy. We present a representation theorem to the effect that relative judgement data that meet transitivity can be mapped to a scale (in terms of measurement theory). We also discuss a further application of transitivity as part of data collection design for addressing the problem of the quadratic complexity of data collection of relative judgements.
[ { "created": "Tue, 29 Sep 2020 09:55:04 GMT", "version": "v1" } ]
2020-09-30
[ [ "Amidei", "Jacopo", "" ] ]
Annotation reproducibility and accuracy rely on good consistency within annotators. We propose a novel method for measuring within annotator consistency or annotator Intraobserver Agreement (IA). The proposed approach is based on transitivity, a measure that has been thoroughly studied in the context of rational decision-making. The transitivity measure, in contrast with the commonly used test-retest strategy for annotator IA, is less sensitive to the several types of bias introduced by the test-retest strategy. We present a representation theorem to the effect that relative judgement data that meet transitivity can be mapped to a scale (in terms of measurement theory). We also discuss a further application of transitivity as part of data collection design for addressing the problem of the quadratic complexity of data collection of relative judgements.
1303.5243
Antonios Argyriou
Antonios Argyriou
Link Scheduling for Multiple Multicast Sessions in Distributed Wireless Networks
null
IEEE Wireless Communications Letters 2013
10.1109/WCL.2013.040513.120924
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter we investigate link scheduling algorithms for throughput maximization in multicast wireless networks. According to our system model, each source node transmits to a multicast group that resides one hop away. We adopt the physical interference model to reflect the aggregate signal to interference and noise ratio (SINR) at each node of the multicast group. We present an ILP formulation of the aforementioned problem. The basic feature of the problem formulation is that it decomposes the single multicast session into the corresponding point-to-point links. The rationale is that a solution algorithm has more flexibility regarding the scheduling options for individual nodes. The extended MILP problem that also considers power control is solved with LP relaxation. Performance results for both the ILP and MILP problems are obtained for different traffic loads and different number of nodes per multicast group.
[ { "created": "Thu, 21 Mar 2013 12:38:10 GMT", "version": "v1" } ]
2016-11-15
[ [ "Argyriou", "Antonios", "" ] ]
In this letter we investigate link scheduling algorithms for throughput maximization in multicast wireless networks. According to our system model, each source node transmits to a multicast group that resides one hop away. We adopt the physical interference model to reflect the aggregate signal-to-interference-and-noise ratio (SINR) at each node of the multicast group. We present an ILP formulation of the aforementioned problem. The basic feature of the problem formulation is that it decomposes each multicast session into its corresponding point-to-point links. The rationale is that a solution algorithm then has more flexibility regarding the scheduling options for individual nodes. The extended MILP problem that also considers power control is solved with LP relaxation. Performance results for both the ILP and MILP problems are obtained for different traffic loads and different numbers of nodes per multicast group.
2209.00686
Marco Zaffalon
Enrique Miranda and Marco Zaffalon
Nonlinear desirability theory
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Desirability can be understood as an extension of Anscombe and Aumann's Bayesian decision theory to sets of expected utilities. At the core of desirability lies an assumption of linearity of the scale in which rewards are measured. This is a traditional assumption used to derive the expected utility model, but it clashes with a general representation of rational decision making, as Allais famously pointed out in 1953 with his paradox. We note that the utility scale plays the role of a closure operator when we regard desirability as a logical theory. This observation enables us to extend desirability to the nonlinear case by letting the utility scale be represented via a general closure operator. The new theory directly expresses rewards in actual nonlinear currency (money), much in Savage's spirit, while arguably weakening the founding assumptions to a minimum. We characterise the main properties of the new theory both from the perspective of sets of gambles and of their lower and upper prices (previsions). We show how the Allais paradox finds a solution in the new theory, and discuss the role of sets of probabilities in the theory.
[ { "created": "Thu, 1 Sep 2022 18:44:29 GMT", "version": "v1" }, { "created": "Fri, 18 Nov 2022 11:57:06 GMT", "version": "v2" } ]
2022-11-21
[ [ "Miranda", "Enrique", "" ], [ "Zaffalon", "Marco", "" ] ]
Desirability can be understood as an extension of Anscombe and Aumann's Bayesian decision theory to sets of expected utilities. At the core of desirability lies an assumption of linearity of the scale in which rewards are measured. This is a traditional assumption used to derive the expected utility model, but it clashes with a general representation of rational decision making, as Allais famously pointed out in 1953 with his paradox. We note that the utility scale plays the role of a closure operator when we regard desirability as a logical theory. This observation enables us to extend desirability to the nonlinear case by letting the utility scale be represented via a general closure operator. The new theory directly expresses rewards in actual nonlinear currency (money), much in Savage's spirit, while arguably weakening the founding assumptions to a minimum. We characterise the main properties of the new theory both from the perspective of sets of gambles and of their lower and upper prices (previsions). We show how the Allais paradox finds a solution in the new theory, and discuss the role of sets of probabilities in the theory.
0905.2386
Joel Ratsaby
Joel Ratsaby
Combinatorial information distance
null
null
null
null
cs.DM cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $|A|$ denote the cardinality of a finite set $A$. For any real number $x$ define $t(x)=x$ if $x\geq1$ and 1 otherwise. For any finite sets $A,B$ let $\delta(A,B)$ $=$ $\log_{2}(t(|B\cap\bar{A}||A|))$. We define a new combinatorial distance $d(A,B)$ $=$ $\max\{\delta(A,B),\delta(B,A)\}$ which may be applied to measure the distance between binary strings of different lengths. The distance is based on a classical combinatorial notion of information introduced by Kolmogorov. (This appears as Technical Report arXiv:0905.2386v4. A shorter version appears in the Proc. of Mini-Conference on Applied Theoretical Computer Science (MATCOS-10), Slovenia, Oct. 13-14, 2010.)
[ { "created": "Thu, 14 May 2009 17:44:39 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2010 11:49:28 GMT", "version": "v2" }, { "created": "Fri, 6 Aug 2010 09:36:12 GMT", "version": "v3" }, { "created": "Thu, 9 Sep 2010 19:54:28 GMT", "version": "v4" }, { "created": "Sun, 17 Oct 2010 18:12:08 GMT", "version": "v5" } ]
2010-10-19
[ [ "Ratsaby", "Joel", "" ] ]
Let $|A|$ denote the cardinality of a finite set $A$. For any real number $x$ define $t(x)=x$ if $x\geq1$ and 1 otherwise. For any finite sets $A,B$ let $\delta(A,B)$ $=$ $\log_{2}(t(|B\cap\bar{A}||A|))$. We define a new combinatorial distance $d(A,B)$ $=$ $\max\{\delta(A,B),\delta(B,A)\}$ which may be applied to measure the distance between binary strings of different lengths. The distance is based on a classical combinatorial notion of information introduced by Kolmogorov. (This appears as Technical Report arXiv:0905.2386v4. A shorter version appears in the Proc. of Mini-Conference on Applied Theoretical Computer Science (MATCOS-10), Slovenia, Oct. 13-14, 2010.)
2210.03841
Polina Zablotskaia
Siddhartha Brahma, Polina Zablotskaia, David Mimno
Breaking BERT: Evaluating and Optimizing Sparsified Attention
Shorter version accepted to SNN2021 workshop
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Transformers allow attention between all pairs of tokens, but there is reason to believe that most of these connections - and their quadratic time and memory - may not be necessary. But which ones? We evaluate the impact of sparsification patterns with a series of ablation experiments. First, we compare masks based on syntax, lexical similarity, and token position to random connections, and measure which patterns reduce performance the least. We find that on three common finetuning tasks even using attention that is at least 78% sparse can have little effect on performance if applied at later transformer layers, but that applying sparsity throughout the network reduces performance significantly. Second, we vary the degree of sparsity for three patterns supported by previous work, and find that connections to neighboring tokens are the most significant. Finally, we treat sparsity as an optimizable parameter, and present an algorithm to learn degrees of neighboring connections that gives a fine-grained control over the accuracy-sparsity trade-off while approaching the performance of existing methods.
[ { "created": "Fri, 7 Oct 2022 22:32:27 GMT", "version": "v1" } ]
2022-10-11
[ [ "Brahma", "Siddhartha", "" ], [ "Zablotskaia", "Polina", "" ], [ "Mimno", "David", "" ] ]
Transformers allow attention between all pairs of tokens, but there is reason to believe that most of these connections - and their quadratic time and memory - may not be necessary. But which ones? We evaluate the impact of sparsification patterns with a series of ablation experiments. First, we compare masks based on syntax, lexical similarity, and token position to random connections, and measure which patterns reduce performance the least. We find that on three common finetuning tasks even using attention that is at least 78% sparse can have little effect on performance if applied at later transformer layers, but that applying sparsity throughout the network reduces performance significantly. Second, we vary the degree of sparsity for three patterns supported by previous work, and find that connections to neighboring tokens are the most significant. Finally, we treat sparsity as an optimizable parameter, and present an algorithm to learn degrees of neighboring connections that gives a fine-grained control over the accuracy-sparsity trade-off while approaching the performance of existing methods.
2406.18493
Cynthia Kop
Cynthia Kop
A weakly monotonic, logically constrained, HORPO-variant
Technical report detailing an adaptation of the method in https://link.springer.com/chapter/10.1007/978-3-031-57267-8_13
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
In this short paper, we present a simple variant of the recursive path ordering, specified for Logically Constrained Simply Typed Rewriting Systems (LCSTRSs). This is a method for curried systems, without lambda but with partially applied function symbols, which can deal with logical constraints. As it is designed for use in the dependency pair framework, it is defined as a reduction pair, allowing weak monotonicity.
[ { "created": "Wed, 26 Jun 2024 16:56:18 GMT", "version": "v1" } ]
2024-06-27
[ [ "Kop", "Cynthia", "" ] ]
In this short paper, we present a simple variant of the recursive path ordering, specified for Logically Constrained Simply Typed Rewriting Systems (LCSTRSs). This is a method for curried systems, without lambda but with partially applied function symbols, which can deal with logical constraints. As it is designed for use in the dependency pair framework, it is defined as a reduction pair, allowing weak monotonicity.
2201.12163
Hiroshi Kajino
Hiroshi Kajino, Kohei Miyaguchi, Takayuki Osogami
Biases in In Silico Evaluation of Molecular Optimization Methods and Bias-Reduced Evaluation Methodology
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
We are interested in in silico evaluation methodology for molecular optimization methods. Given a sample of molecules and their properties of our interest, we wish not only to train an agent that can find molecules optimized with respect to the target property but also to evaluate its performance. A common practice is to train a predictor of the target property on the sample and use it for both training and evaluating the agent. We show that this evaluator potentially suffers from two biases; one is due to misspecification of the predictor and the other to reusing the same sample for training and evaluation. We discuss bias reduction methods for each of the biases comprehensively, and empirically investigate their effectiveness.
[ { "created": "Fri, 28 Jan 2022 14:53:14 GMT", "version": "v1" } ]
2022-01-31
[ [ "Kajino", "Hiroshi", "" ], [ "Miyaguchi", "Kohei", "" ], [ "Osogami", "Takayuki", "" ] ]
We are interested in in silico evaluation methodology for molecular optimization methods. Given a sample of molecules and their properties of our interest, we wish not only to train an agent that can find molecules optimized with respect to the target property but also to evaluate its performance. A common practice is to train a predictor of the target property on the sample and use it for both training and evaluating the agent. We show that this evaluator potentially suffers from two biases; one is due to misspecification of the predictor and the other to reusing the same sample for training and evaluation. We discuss bias reduction methods for each of the biases comprehensively, and empirically investigate their effectiveness.
2004.02432
Jaeyeon Kang
Jaeyeon Kang, Younghyun Jo, Seoung Wug Oh, Peter Vajda, and Seon Joo Kim
Deep Space-Time Video Upsampling Networks
ECCV2020 accepted
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems, and their performance has recently been improving through the incorporation of deep learning. In this paper, we investigate the problem of jointly upsampling videos both in space and time, which is becoming more important with advances in display systems. One solution is to run VSR and FI independently, one after the other. This is highly inefficient, as heavy deep neural networks (DNNs) are involved in each solution. To this end, we propose an end-to-end DNN framework for space-time video upsampling by efficiently merging VSR and FI into a joint framework. In our framework, a novel weighting scheme is proposed to fuse input frames effectively without explicit motion compensation for efficient processing of videos. Our method shows better results both quantitatively and qualitatively, while reducing the computation time (7x faster) and the number of parameters (by 30%) compared to baselines.
[ { "created": "Mon, 6 Apr 2020 07:04:21 GMT", "version": "v1" }, { "created": "Mon, 10 Aug 2020 02:37:53 GMT", "version": "v2" } ]
2020-08-11
[ [ "Kang", "Jaeyeon", "" ], [ "Jo", "Younghyun", "" ], [ "Oh", "Seoung Wug", "" ], [ "Vajda", "Peter", "" ], [ "Kim", "Seon Joo", "" ] ]
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems, and their performance has recently been improving through the incorporation of deep learning. In this paper, we investigate the problem of jointly upsampling videos both in space and time, which is becoming more important with advances in display systems. One solution is to run VSR and FI independently, one after the other. This is highly inefficient, as heavy deep neural networks (DNNs) are involved in each solution. To this end, we propose an end-to-end DNN framework for space-time video upsampling by efficiently merging VSR and FI into a joint framework. In our framework, a novel weighting scheme is proposed to fuse input frames effectively without explicit motion compensation for efficient processing of videos. Our method shows better results both quantitatively and qualitatively, while reducing the computation time (7x faster) and the number of parameters (by 30%) compared to baselines.
1304.7054
Chetan Jhurani
Chetan Jhurani
Batched Kronecker product for 2-D matrices and 3-D arrays on NVIDIA GPUs
null
null
null
null
cs.MS cs.DC math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe an interface and an implementation for performing Kronecker product actions on NVIDIA GPUs for multiple small 2-D matrices and 3-D arrays processed in parallel as a batch. This method is suited to cases where the Kronecker product component matrices are identical but the operands in a matrix-free application vary in the batch. Any batched GEMM (General Matrix Multiply) implementation, for example ours [1] or the one in cuBLAS, can also be used for performing batched Kronecker products on GPUs. However, the specialized implementation presented here is faster and uses less memory. Partly this is because a simple GEMM based approach would require extra copies to and from main memory. We focus on matrix sizes less than or equal to 16, since these are the typical polynomial degrees in Finite Elements, but the implementation can be easily extended for other sizes. We obtain 143 and 285 GFlop/s for single precision real when processing matrices of size 10 and 16, respectively on NVIDIA Tesla K20c using CUDA 5.0. The corresponding speeds for 3-D array Kronecker products are 126 and 268 GFlop/s, respectively. Double precision is easily supported using the C++ template mechanism.
[ { "created": "Fri, 26 Apr 2013 02:22:25 GMT", "version": "v1" } ]
2013-04-29
[ [ "Jhurani", "Chetan", "" ] ]
We describe an interface and an implementation for performing Kronecker product actions on NVIDIA GPUs for multiple small 2-D matrices and 3-D arrays processed in parallel as a batch. This method is suited to cases where the Kronecker product component matrices are identical but the operands in a matrix-free application vary in the batch. Any batched GEMM (General Matrix Multiply) implementation, for example ours [1] or the one in cuBLAS, can also be used for performing batched Kronecker products on GPUs. However, the specialized implementation presented here is faster and uses less memory. Partly this is because a simple GEMM based approach would require extra copies to and from main memory. We focus on matrix sizes less than or equal to 16, since these are the typical polynomial degrees in Finite Elements, but the implementation can be easily extended for other sizes. We obtain 143 and 285 GFlop/s for single precision real when processing matrices of size 10 and 16, respectively on NVIDIA Tesla K20c using CUDA 5.0. The corresponding speeds for 3-D array Kronecker products are 126 and 268 GFlop/s, respectively. Double precision is easily supported using the C++ template mechanism.
2308.05264
Soumyaroop Nandi
Soumyaroop Nandi, Prem Natarajan, Wael Abd-Almageed
TrainFors: A Large Benchmark Training Dataset for Image Manipulation Detection and Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evaluation datasets and metrics for image manipulation detection and localization (IMDL) research have been standardized, but the training dataset for this task is still nonstandard. Previous researchers have used unconventional and deviating datasets to train neural networks for detecting image forgeries and localizing pixel maps of manipulated regions. For a fair comparison, the training set, test set, and evaluation metrics should be persistent. Hence, comparing the existing methods may not be fair, as the results depend heavily on the training datasets as well as the model architecture. Moreover, none of the previous works release the synthetic training dataset used for the IMDL task. We propose a standardized benchmark training dataset for image splicing, copy-move forgery, removal forgery, and image enhancement forgery. Furthermore, we identify the problems with the existing IMDL datasets and propose the required modifications. We also train the state-of-the-art IMDL methods on our proposed TrainFors dataset for a fair evaluation and report the actual performance of these methods under similar conditions.
[ { "created": "Thu, 10 Aug 2023 00:26:34 GMT", "version": "v1" } ]
2023-08-11
[ [ "Nandi", "Soumyaroop", "" ], [ "Natarajan", "Prem", "" ], [ "Abd-Almageed", "Wael", "" ] ]
The evaluation datasets and metrics for image manipulation detection and localization (IMDL) research have been standardized, but the training dataset for this task is still nonstandard. Previous researchers have used unconventional and deviating datasets to train neural networks for detecting image forgeries and localizing pixel maps of manipulated regions. For a fair comparison, the training set, test set, and evaluation metrics should be persistent. Hence, comparing the existing methods may not be fair, as the results depend heavily on the training datasets as well as the model architecture. Moreover, none of the previous works release the synthetic training dataset used for the IMDL task. We propose a standardized benchmark training dataset for image splicing, copy-move forgery, removal forgery, and image enhancement forgery. Furthermore, we identify the problems with the existing IMDL datasets and propose the required modifications. We also train the state-of-the-art IMDL methods on our proposed TrainFors dataset for a fair evaluation and report the actual performance of these methods under similar conditions.
2102.10960
Guoqing Liu
Guoqing Liu, Chuheng Zhang, Li Zhao, Tao Qin, Jinhua Zhu, Jian Li, Nenghai Yu, Tie-Yan Liu
Return-Based Contrastive Representation Learning for Reinforcement Learning
ICLR 2021
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low-data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.
[ { "created": "Mon, 22 Feb 2021 13:04:18 GMT", "version": "v1" } ]
2021-02-23
[ [ "Liu", "Guoqing", "" ], [ "Zhang", "Chuheng", "" ], [ "Zhao", "Li", "" ], [ "Qin", "Tao", "" ], [ "Zhu", "Jinhua", "" ], [ "Li", "Jian", "" ], [ "Yu", "Nenghai", "" ], [ "Liu", "Tie-Yan", "" ] ]
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low-data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.
2408.07479
Luisa Coheur
Ana Sofia Evans, Helena Moniz and Lu\'isa Coheur
A Study on Bias Detection and Classification in Natural Language Processing
31 pages, 15 Tables, 4 Figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Human biases have been shown to influence the performance of models and algorithms in various fields, including Natural Language Processing. While the study of this phenomenon is garnering focus in recent years, the available resources are still relatively scarce, often focusing on different forms or manifestations of biases. The aim of our work is twofold: 1) gather publicly-available datasets and determine how to better combine them to effectively train models in the task of hate speech detection and classification; 2) analyse the main issues with these datasets, such as scarcity, skewed resources, and reliance on non-persistent data. We discuss these issues in tandem with the development of our experiments, in which we show that the combinations of different datasets greatly impact the models' performance.
[ { "created": "Wed, 14 Aug 2024 11:49:24 GMT", "version": "v1" } ]
2024-08-15
[ [ "Evans", "Ana Sofia", "" ], [ "Moniz", "Helena", "" ], [ "Coheur", "Luísa", "" ] ]
Human biases have been shown to influence the performance of models and algorithms in various fields, including Natural Language Processing. While the study of this phenomenon is garnering focus in recent years, the available resources are still relatively scarce, often focusing on different forms or manifestations of biases. The aim of our work is twofold: 1) gather publicly-available datasets and determine how to better combine them to effectively train models in the task of hate speech detection and classification; 2) analyse the main issues with these datasets, such as scarcity, skewed resources, and reliance on non-persistent data. We discuss these issues in tandem with the development of our experiments, in which we show that the combinations of different datasets greatly impact the models' performance.
2008.13690
Jussi Tohka
Jussi Tohka and Mark van Gils
Evaluation of machine learning algorithms for Health and Wellness applications: a tutorial
To be published in Computers in Biology and Medicine
null
10.1016/j.compbiomed.2021.104324
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research on decision support applications in healthcare, such as those related to diagnosis, prediction, and treatment planning, has seen enormously increased interest recently. This development is thanks to the increase in data availability as well as advances in artificial intelligence and machine learning research. Highly promising research examples are published daily. However, at the same time, there are some unrealistic expectations with regard to the requirements for reliable development and objective validation that are needed in healthcare settings. These expectations may lead to unmet schedules and disappointments (or non-uptake) at the end-user side. It is the aim of this tutorial to provide practical guidance on how to assess performance reliably and efficiently and avoid common traps. Instead of giving a list of do's and don'ts, this tutorial tries to build a better understanding behind these do's and don'ts and presents both the most relevant performance evaluation criteria as well as how to compute them. Along the way, we will indicate common mistakes and provide references discussing various topics more in-depth.
[ { "created": "Mon, 31 Aug 2020 15:50:51 GMT", "version": "v1" }, { "created": "Wed, 24 Mar 2021 16:40:09 GMT", "version": "v2" } ]
2021-03-25
[ [ "Tohka", "Jussi", "" ], [ "van Gils", "Mark", "" ] ]
Research on decision support applications in healthcare, such as those related to diagnosis, prediction, and treatment planning, has seen enormously increased interest recently. This development is thanks to the increase in data availability as well as advances in artificial intelligence and machine learning research. Highly promising research examples are published daily. However, at the same time, there are some unrealistic expectations with regard to the requirements for reliable development and objective validation that are needed in healthcare settings. These expectations may lead to unmet schedules and disappointments (or non-uptake) at the end-user side. It is the aim of this tutorial to provide practical guidance on how to assess performance reliably and efficiently and avoid common traps. Instead of giving a list of do's and don'ts, this tutorial tries to build a better understanding behind these do's and don'ts and presents both the most relevant performance evaluation criteria as well as how to compute them. Along the way, we will indicate common mistakes and provide references discussing various topics more in-depth.
1612.03382
Sahar Yousefi
Sahar Yousefi, M.T. Manzuri Shalmani, Jeremy Lin, Marius Staring
A Novel Motion Detection Method Resistant to Severe Illumination Changes
null
null
10.1109/TCSVT.2018.2885211
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, considerable attention has been given to the motion detection problem due to the explosive growth of its applications in video analysis and surveillance systems. While previous approaches can produce good results, accurate detection of motion remains a challenging task due to the difficulties raised by illumination variations, occlusion, camouflage, burst physical motion, dynamic texture, and environmental changes such as climate change, sunlight changes during a day, etc. In this paper, we propose a novel per-pixel motion descriptor for both motion detection and dynamic texture segmentation which outperforms the current methods in the literature, particularly in severe scenarios. The proposed descriptor is based on two complementary components: the three-dimensional discrete wavelet transform (3D-DWT) and the three-dimensional wavelet leader. In this approach, a feature vector is extracted for each pixel by applying a novel three-dimensional wavelet-based motion descriptor. Then, the extracted features are clustered by a clustering method such as the well-known k-means algorithm or a Gaussian Mixture Model (GMM). The experimental results demonstrate the effectiveness of our proposed method compared to other motion detection approaches from the literature. The application of the proposed method and additional experimental results for the different datasets are available at http://dspl.ce.sharif.edu/motiondetector.html.
[ { "created": "Sun, 11 Dec 2016 07:50:00 GMT", "version": "v1" }, { "created": "Wed, 1 Feb 2017 05:38:03 GMT", "version": "v2" }, { "created": "Mon, 8 May 2017 08:14:06 GMT", "version": "v3" }, { "created": "Wed, 10 May 2017 05:57:40 GMT", "version": "v4" }, { "created": "Thu, 5 Oct 2017 12:55:56 GMT", "version": "v5" }, { "created": "Thu, 15 Mar 2018 13:29:46 GMT", "version": "v6" } ]
2021-03-29
[ [ "Yousefi", "Sahar", "" ], [ "Shalmani", "M. T. Manzuri", "" ], [ "Lin", "Jeremy", "" ], [ "Staring", "Marius", "" ] ]
Recently, considerable attention has been given to the motion detection problem due to the explosive growth of its applications in video analysis and surveillance systems. While previous approaches can produce good results, accurate detection of motion remains a challenging task due to the difficulties raised by illumination variations, occlusion, camouflage, burst physical motion, dynamic texture, and environmental changes such as climate change, sunlight changes during a day, etc. In this paper, we propose a novel per-pixel motion descriptor for both motion detection and dynamic texture segmentation which outperforms the current methods in the literature, particularly in severe scenarios. The proposed descriptor is based on two complementary components: the three-dimensional discrete wavelet transform (3D-DWT) and the three-dimensional wavelet leader. In this approach, a feature vector is extracted for each pixel by applying a novel three-dimensional wavelet-based motion descriptor. Then, the extracted features are clustered by a clustering method such as the well-known k-means algorithm or a Gaussian Mixture Model (GMM). The experimental results demonstrate the effectiveness of our proposed method compared to other motion detection approaches from the literature. The application of the proposed method and additional experimental results for the different datasets are available at http://dspl.ce.sharif.edu/motiondetector.html.
2211.05824
Hassan Khan
Jason Ceci, Jonah Stegman, Hassan Khan
No Privacy in the Electronics Repair Industry
This paper has been accepted to appear at the 44th IEEE Symposium on Security and Privacy (IEEE S&P 2023)
null
null
null
cs.CR cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Electronics repair and service providers offer a range of services to computing device owners across North America -- from software installation to hardware repair. Device owners obtain these services and leave their device along with their access credentials at the mercy of technicians, which leads to privacy concerns for owners' personal data. We conduct a comprehensive four-part study to measure the state of privacy in the electronics repair industry. First, through a field study with 18 service providers, we uncover that most service providers do not have any privacy policy or controls to safeguard device owners' personal data from snooping by technicians. Second, we drop rigged devices for repair at 16 service providers and collect data on widespread privacy violations by technicians, including snooping on personal data, copying data off the device, and removing tracks of snooping activities. Third, we conduct an online survey (n=112) to collect data on customers' experiences when getting devices repaired. Fourth, we invite a subset of survey respondents (n=30) for semi-structured interviews to establish a deeper understanding of their experiences and identify potential solutions to curtail privacy violations by technicians. We apply our findings to discuss possible controls and actions different stakeholders and regulatory agencies should take to improve the state of privacy in the repair industry.
[ { "created": "Thu, 10 Nov 2022 19:27:21 GMT", "version": "v1" } ]
2022-11-14
[ [ "Ceci", "Jason", "" ], [ "Stegman", "Jonah", "" ], [ "Khan", "Hassan", "" ] ]
Electronics repair and service providers offer a range of services to computing device owners across North America -- from software installation to hardware repair. Device owners obtain these services and leave their device along with their access credentials at the mercy of technicians, which leads to privacy concerns for owners' personal data. We conduct a comprehensive four-part study to measure the state of privacy in the electronics repair industry. First, through a field study with 18 service providers, we uncover that most service providers do not have any privacy policy or controls to safeguard device owners' personal data from snooping by technicians. Second, we drop rigged devices for repair at 16 service providers and collect data on widespread privacy violations by technicians, including snooping on personal data, copying data off the device, and removing tracks of snooping activities. Third, we conduct an online survey (n=112) to collect data on customers' experiences when getting devices repaired. Fourth, we invite a subset of survey respondents (n=30) for semi-structured interviews to establish a deeper understanding of their experiences and identify potential solutions to curtail privacy violations by technicians. We apply our findings to discuss possible controls and actions different stakeholders and regulatory agencies should take to improve the state of privacy in the repair industry.
2210.15926
Hamid Fsian
Hamid Fsian, Vahid Mohammadi, Pierre Gouton, Saeid Minaei
Comparison of Stereo Matching Algorithms for the Development of Disparity Map
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Stereo matching is one of the classical problems in computer vision for the extraction of 3D information, but it remains contentious in terms of accuracy and processing cost. The choice of matching technique and cost function is crucial in the development of the disparity map. This paper presents a comparative study of six different stereo matching algorithms: Block Matching (BM), Block Matching with Dynamic Programming (BMDP), Belief Propagation (BP), Gradient Feature Matching (GF), Histogram of Oriented Gradient (HOG), and the proposed method. Three cost functions, namely Mean Squared Error (MSE), Sum of Absolute Differences (SAD), and Normalized Cross-Correlation (NCC), were also used and compared. The stereo images used in this study were from the Middlebury Stereo Datasets, provided with perfect and imperfect calibrations. Results show that the selection of the matching function is quite important and also depends on the images' properties. The BP algorithm provided better results in most cases, achieving accuracies over 95%.
[ { "created": "Fri, 28 Oct 2022 06:14:14 GMT", "version": "v1" } ]
2022-10-31
[ [ "Fsian", "Hamid", "" ], [ "Mohammadi", "Vahid", "" ], [ "Gouton", "Pierre", "" ], [ "Minaei", "Saeid", "" ] ]
Stereo matching is one of the classical problems in computer vision for the extraction of 3D information, but it remains contentious in terms of accuracy and processing cost. The choice of matching technique and cost function is crucial in the development of the disparity map. This paper presents a comparative study of six different stereo matching algorithms: Block Matching (BM), Block Matching with Dynamic Programming (BMDP), Belief Propagation (BP), Gradient Feature Matching (GF), Histogram of Oriented Gradient (HOG), and the proposed method. Three cost functions, namely Mean Squared Error (MSE), Sum of Absolute Differences (SAD), and Normalized Cross-Correlation (NCC), were also used and compared. The stereo images used in this study were from the Middlebury Stereo Datasets, provided with perfect and imperfect calibrations. Results show that the selection of the matching function is quite important and also depends on the images' properties. The BP algorithm provided better results in most cases, achieving accuracies over 95%.
1406.2255
Ahmed El Shafie
Ahmed El Shafie, Tamer Khattab, Amr El-Keyi
Energy-Efficient Cooperative Cognitive Relaying Schemes for Cognitive Radio Networks
null
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a cognitive radio network in which a primary user (PU) may cooperate with a cognitive radio user (i.e., a secondary user (SU)) for the transmission of its data packets. The PU is assumed to be a buffered node operating in a time-slotted fashion where time is partitioned into equal-length slots. We develop two schemes which involve cooperation between primary and secondary users. To satisfy certain quality of service (QoS) requirements, users share the time slot duration and channel frequency bandwidth. Moreover, the SU may leverage the primary feedback message to further increase its data rate and satisfy the PU QoS requirements. The proposed cooperative schemes are designed such that the SU data rate is maximized under the constraint that the PU average queueing delay is kept below the average queueing delay in the case of a non-cooperative PU. In addition, the proposed schemes guarantee the stability of the PU queue and maintain the average energy emitted by the SU below a certain value. The proposed schemes also provide more robust and potentially continuous service for SUs compared to the conventional practice in cognitive networks where SUs transmit in the spectrum holes and silence sessions of the PUs. We include primary source burstiness, sensing errors, and feedback decoding errors in the analysis of our proposed cooperative schemes. The optimization problems are solved offline and require a simple two-dimensional grid-based search over the optimization variables. Numerical results show the beneficial gains of the cooperative schemes in terms of SU data rate, PU throughput, average PU queueing delay, and average PU energy savings.
[ { "created": "Mon, 9 Jun 2014 17:40:21 GMT", "version": "v1" }, { "created": "Tue, 8 Jul 2014 21:25:45 GMT", "version": "v2" }, { "created": "Thu, 26 Oct 2017 14:39:40 GMT", "version": "v3" } ]
2017-10-27
[ [ "Shafie", "Ahmed El", "" ], [ "Khattab", "Tamer", "" ], [ "El-Keyi", "Amr", "" ] ]
We investigate a cognitive radio network in which a primary user (PU) may cooperate with a cognitive radio user (i.e., a secondary user (SU)) for the transmission of its data packets. The PU is assumed to be a buffered node operating in a time-slotted fashion where time is partitioned into equal-length slots. We develop two schemes which involve cooperation between primary and secondary users. To satisfy certain quality of service (QoS) requirements, users share the time slot duration and channel frequency bandwidth. Moreover, the SU may leverage the primary feedback message to further increase its data rate and satisfy the PU QoS requirements. The proposed cooperative schemes are designed such that the SU data rate is maximized under the constraint that the PU average queueing delay is kept below the average queueing delay in the case of a non-cooperative PU. In addition, the proposed schemes guarantee the stability of the PU queue and maintain the average energy emitted by the SU below a certain value. The proposed schemes also provide more robust and potentially continuous service for SUs compared to the conventional practice in cognitive networks where SUs transmit in the spectrum holes and silence sessions of the PUs. We include primary source burstiness, sensing errors, and feedback decoding errors in the analysis of our proposed cooperative schemes. The optimization problems are solved offline and require a simple two-dimensional grid-based search over the optimization variables. Numerical results show the beneficial gains of the cooperative schemes in terms of SU data rate, PU throughput, average PU queueing delay, and average PU energy savings.
1005.0092
Andrei Sukhov M
E.S. Sagatov, A.M. Sukhov, P. Calyam
Influence of distortions of key frames on video transfer in wireless networks
6 pages, 4 figures, 2 Tables
null
10.1109/ISVC.2010.5656258
null
cs.NI cs.MM
http://creativecommons.org/licenses/by/3.0/
In this paper it is shown that a substantial increase of video quality in wireless networks requires two obligatory modifications to the communication scheme: the player on the receiving side should automatically discard duplicated RTP packets, and the streaming video server should duplicate the packets containing key frame information. Coefficients of the mathematical model describing video quality in wireless networks have been found for the WiFi and 3G standards and for the MPEG-2 and MPEG-4 (DivX) codecs. A special experimental technique for collecting and processing the data has been developed to calculate the values of these coefficients.
[ { "created": "Sat, 1 May 2010 17:15:36 GMT", "version": "v1" } ]
2017-02-20
[ [ "Sagatov", "E. S.", "" ], [ "Sukhov", "A. M.", "" ], [ "Calyam", "P.", "" ] ]
In this paper it is shown that a substantial increase of video quality in wireless networks requires two obligatory modifications to the communication scheme: the player on the receiving side should automatically discard duplicated RTP packets, and the streaming video server should duplicate the packets containing key frame information. Coefficients of the mathematical model describing video quality in wireless networks have been found for the WiFi and 3G standards and for the MPEG-2 and MPEG-4 (DivX) codecs. A special experimental technique for collecting and processing the data has been developed to calculate the values of these coefficients.
2110.09829
Ilir Kola
Ilir Kola, Pradeep K. Murukannaiah, Catholijn M. Jonker, M. Birna van Riemsdijk
Towards Social Situation Awareness in Support Agents
8 pages, 1 figure
null
10.1109/MIS.2022.3163625
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial agents that support people in their daily activities (e.g., virtual coaches and personal assistants) are increasingly prevalent. Since many daily activities are social in nature, support agents should understand a user's social situation to offer comprehensive support. However, there are no systematic approaches for developing support agents that are social situation aware. We identify key requirements for a support agent to be social situation aware and propose steps to realize those requirements. These steps are presented through a conceptual architecture centered on two key ideas: (1) conceptualizing social situation awareness as an instantiation of `general' situation awareness, and (2) using situation taxonomies for such instantiation. This enables support agents to represent a user's social situation, comprehend its meaning, and assess its impact on the user's behavior. We discuss empirical results supporting the effectiveness of the proposed approach and illustrate how the architecture can be used in support agents through two use cases.
[ { "created": "Tue, 19 Oct 2021 10:35:46 GMT", "version": "v1" }, { "created": "Wed, 20 Oct 2021 06:20:46 GMT", "version": "v2" }, { "created": "Mon, 4 Apr 2022 08:55:03 GMT", "version": "v3" } ]
2022-04-05
[ [ "Kola", "Ilir", "" ], [ "Murukannaiah", "Pradeep K.", "" ], [ "Jonker", "Catholijn M.", "" ], [ "van Riemsdijk", "M. Birna", "" ] ]
Artificial agents that support people in their daily activities (e.g., virtual coaches and personal assistants) are increasingly prevalent. Since many daily activities are social in nature, support agents should understand a user's social situation to offer comprehensive support. However, there are no systematic approaches for developing support agents that are social situation aware. We identify key requirements for a support agent to be social situation aware and propose steps to realize those requirements. These steps are presented through a conceptual architecture centered on two key ideas: (1) conceptualizing social situation awareness as an instantiation of `general' situation awareness, and (2) using situation taxonomies for such instantiation. This enables support agents to represent a user's social situation, comprehend its meaning, and assess its impact on the user's behavior. We discuss empirical results supporting the effectiveness of the proposed approach and illustrate how the architecture can be used in support agents through two use cases.
1406.0079
Shashishekar Ramakrishna
Shashishekar Ramakrishna and Adrian Paschke
Bridging the gap between Legal Practitioners and Knowledge Engineers using semi-formal KR
published in proceedings of the 8th International Workshop on Value Modeling and Business Ontology, VMBO, Berlin, 2014
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of Structured English as a computation independent knowledge representation format for non-technical users in business rules representation has been proposed in OMGs Semantics and Business Vocabulary Representation (SBVR). In the legal domain we face a similar problem. Formal representation languages, such as OASIS LegalRuleML and legal ontologies (LKIF, legal OWL2 ontologies etc.) support the technical knowledge engineer and the automated reasoning. But, they can be hardly used directly by the legal domain experts who do not have a computer science background. In this paper we adapt the SBVR Structured English approach for the legal domain and implement a proof-of-concept, called KR4IPLaw, which enables legal domain experts to represent their knowledge in Structured English in a computational independent and hence, for them, more usable way. The benefit of this approach is that the underlying pre-defined semantics of the Structured English approach makes transformations into formal languages such as OASIS LegalRuleML and OWL2 ontologies possible. We exemplify our approach in the domain of patent law.
[ { "created": "Sat, 31 May 2014 14:16:30 GMT", "version": "v1" } ]
2014-06-03
[ [ "Ramakrishna", "Shashishekar", "" ], [ "Paschke", "Adrian", "" ] ]
The use of Structured English as a computation independent knowledge representation format for non-technical users in business rules representation has been proposed in OMGs Semantics and Business Vocabulary Representation (SBVR). In the legal domain we face a similar problem. Formal representation languages, such as OASIS LegalRuleML and legal ontologies (LKIF, legal OWL2 ontologies etc.) support the technical knowledge engineer and the automated reasoning. But, they can be hardly used directly by the legal domain experts who do not have a computer science background. In this paper we adapt the SBVR Structured English approach for the legal domain and implement a proof-of-concept, called KR4IPLaw, which enables legal domain experts to represent their knowledge in Structured English in a computational independent and hence, for them, more usable way. The benefit of this approach is that the underlying pre-defined semantics of the Structured English approach makes transformations into formal languages such as OASIS LegalRuleML and OWL2 ontologies possible. We exemplify our approach in the domain of patent law.
1611.01853
Aviv Yehezkel
Reuven Cohen, Liran Katzir and Aviv Yehezkel
MTS Sketch for Accurate Estimation of Set-Expression Cardinalities from Small Samples
arXiv admin note: text overlap with arXiv:1508.06216
null
null
null
cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sketch-based streaming algorithms allow efficient processing of big data. These algorithms use small fixed-size storage to store a summary ("sketch") of the input data, and use probabilistic algorithms to estimate the desired quantity. However, in many real-world applications it is impractical to collect and process the entire data stream, the common practice is thus to sample and process only a small part of it. While sampling is crucial for handling massive data sets, it may reduce accuracy. In this paper we present a new framework that can accurately estimate the cardinality of any set expression between any number of streams using only a small sample of each stream. The proposed framework consists of a new sketch, called Maximal-Term with Subsample (MTS), and a family of algorithms that use this sketch. An example of a possible query that can be efficiently answered using the proposed sketch is, How many distinct tuples appear in tables $T_1$ and $T_2$, but not in $T_3$? The algorithms presented in this paper answer such queries accurately, processing only a small sample of the tuples in each table and using a constant amount of memory. Such estimations are useful for the optimization of queries over very large database systems. We show that all our algorithms are unbiased, and we analyze their asymptotic variance.
[ { "created": "Sun, 6 Nov 2016 22:22:40 GMT", "version": "v1" } ]
2016-11-08
[ [ "Cohen", "Reuven", "" ], [ "Katzir", "Liran", "" ], [ "Yehezkel", "Aviv", "" ] ]
Sketch-based streaming algorithms allow efficient processing of big data. These algorithms use small fixed-size storage to store a summary ("sketch") of the input data, and use probabilistic algorithms to estimate the desired quantity. However, in many real-world applications it is impractical to collect and process the entire data stream, the common practice is thus to sample and process only a small part of it. While sampling is crucial for handling massive data sets, it may reduce accuracy. In this paper we present a new framework that can accurately estimate the cardinality of any set expression between any number of streams using only a small sample of each stream. The proposed framework consists of a new sketch, called Maximal-Term with Subsample (MTS), and a family of algorithms that use this sketch. An example of a possible query that can be efficiently answered using the proposed sketch is, How many distinct tuples appear in tables $T_1$ and $T_2$, but not in $T_3$? The algorithms presented in this paper answer such queries accurately, processing only a small sample of the tuples in each table and using a constant amount of memory. Such estimations are useful for the optimization of queries over very large database systems. We show that all our algorithms are unbiased, and we analyze their asymptotic variance.
1602.01038
Mahmoud Ashour
Mahmoud Ashour and Amr El-Keyi
Interactive Multiple Model Estimation of Doubly-Selective Channels for OFDM systems
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an algorithm for channel estimation, acquisition, and tracking for orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm is suitable for vehicular communications that encounter very high mobility. A preamble sequence is used to derive an initial estimate of the channel using least squares (LS). The temporal variation of the channel within one OFDM symbol is approximated by two complex exponential basis expansion models (CE-BEM). One of the Fourier-based BEMs is intended to capture the low frequencies in the channel (slow variations corresponding to low Doppler), while the other is designed to capture the high frequencies (fast variations corresponding to high Doppler). Kalman filtering is employed to track the BEM coefficients iteratively on an OFDM symbol-by-symbol basis. An interactive multiple model (IMM) estimator is implemented to dynamically mix the estimates obtained by the two Kalman filters, each of which is matched to one of the BEMs. Extensive numerical simulations are conducted to demonstrate the gain obtained by the proposed combining technique.
[ { "created": "Tue, 2 Feb 2016 18:41:37 GMT", "version": "v1" } ]
2016-02-03
[ [ "Ashour", "Mahmoud", "" ], [ "El-Keyi", "Amr", "" ] ]
In this paper, we propose an algorithm for channel estimation, acquisition, and tracking for orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm is suitable for vehicular communications that encounter very high mobility. A preamble sequence is used to derive an initial estimate of the channel using least squares (LS). The temporal variation of the channel within one OFDM symbol is approximated by two complex exponential basis expansion models (CE-BEM). One of the Fourier-based BEMs is intended to capture the low frequencies in the channel (slow variations corresponding to low Doppler), while the other is designed to capture the high frequencies (fast variations corresponding to high Doppler). Kalman filtering is employed to track the BEM coefficients iteratively on an OFDM symbol-by-symbol basis. An interactive multiple model (IMM) estimator is implemented to dynamically mix the estimates obtained by the two Kalman filters, each of which is matched to one of the BEMs. Extensive numerical simulations are conducted to demonstrate the gain obtained by the proposed combining technique.
1207.0873
EPTCS
Luca Bortolussi (University of Trieste), Vashti Galpin (University of Edinburgh), Jane Hillston (University of Edinburgh)
Hybrid performance modelling of opportunistic networks
In Proceedings QAPL 2012, arXiv:1207.0559
EPTCS 85, 2012, pp. 106-121
10.4204/EPTCS.85.8
null
cs.SY cs.LO cs.NI cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate the modelling of opportunistic networks using the process algebra stochastic HYPE. Network traffic is modelled as continuous flows, contact between nodes in the network is modelled stochastically, and instantaneous decisions are modelled as discrete events. Our model describes a network of stationary video sensors with a mobile ferry which collects data from the sensors and delivers it to the base station. We consider different mobility models and different buffer sizes for the ferries. This case study illustrates the flexibility and expressive power of stochastic HYPE. We also discuss the software that enables us to describe stochastic HYPE models and simulate them.
[ { "created": "Wed, 4 Jul 2012 01:25:04 GMT", "version": "v1" } ]
2012-07-05
[ [ "Bortolussi", "Luca", "", "University of Trieste" ], [ "Galpin", "Vashti", "", "University of\n Edinburgh" ], [ "Hillston", "Jane", "", "University of Edinburgh" ] ]
We demonstrate the modelling of opportunistic networks using the process algebra stochastic HYPE. Network traffic is modelled as continuous flows, contact between nodes in the network is modelled stochastically, and instantaneous decisions are modelled as discrete events. Our model describes a network of stationary video sensors with a mobile ferry which collects data from the sensors and delivers it to the base station. We consider different mobility models and different buffer sizes for the ferries. This case study illustrates the flexibility and expressive power of stochastic HYPE. We also discuss the software that enables us to describe stochastic HYPE models and simulate them.
2003.00409
Haokun Li
Haokun Li and Bican Xia
Solving Satisfiability of Polynomial Formulas By Sample-Cell Projection
null
null
null
null
cs.LO cs.AI cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new algorithm for deciding the satisfiability of polynomial formulas over the reals is proposed. The key point of the algorithm is a new projection operator, called sample-cell projection operator, custom-made for Conflict-Driven Clause Learning (CDCL)-style search. Although the new operator is also a CAD (Cylindrical Algebraic Decomposition)-like projection operator which computes the cell (not necessarily cylindrical) containing a given sample such that each polynomial from the problem is sign-invariant on the cell, it is of singly exponential time complexity. The sample-cell projection operator can efficiently guide CDCL-style search away from conflicting states. Experiments show the effectiveness of the new algorithm.
[ { "created": "Sun, 1 Mar 2020 05:36:09 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2020 03:01:35 GMT", "version": "v2" } ]
2020-03-05
[ [ "Li", "Haokun", "" ], [ "Xia", "Bican", "" ] ]
A new algorithm for deciding the satisfiability of polynomial formulas over the reals is proposed. The key point of the algorithm is a new projection operator, called sample-cell projection operator, custom-made for Conflict-Driven Clause Learning (CDCL)-style search. Although the new operator is also a CAD (Cylindrical Algebraic Decomposition)-like projection operator which computes the cell (not necessarily cylindrical) containing a given sample such that each polynomial from the problem is sign-invariant on the cell, it is of singly exponential time complexity. The sample-cell projection operator can efficiently guide CDCL-style search away from conflicting states. Experiments show the effectiveness of the new algorithm.
2012.14058
Yiming Liu
Yiming Liu, Erwu Liu, Rui Wang, Zhu Han, Binyu Lu
Asymptotic Achievability of the Cram\'er-Rao Lower Bound of Channel Estimation for Reconfigurable Intelligent Surface Aided Communication Systems
5 pages, 3 figures, 1 table. To be published in IEEE Wireless Communications Letters
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To achieve the joint active and passive beamforming gains in the reconfigurable intelligent surface assisted millimeter wave system, the reflected cascade channel needs to be accurately estimated. Many strategies have been proposed in the literature to solve this issue. However, whether the Cram\'er-Rao lower bound (CRLB) of such estimation is achievable still remains uncertain. To fill this gap, we first convert the channel estimation problem into a sparse signal recovery problem by utilizing the properties of discrete Fourier transform matrix and Kronecker product. Then, a joint typicality based estimator is utilized to carry out the signal recovery task. We show that, through both mathematical proofs and numerical simulations, the solution proposed in this letter can in fact asymptotically achieve the CRLB.
[ { "created": "Mon, 28 Dec 2020 02:03:22 GMT", "version": "v1" }, { "created": "Sat, 6 Feb 2021 04:59:03 GMT", "version": "v2" }, { "created": "Tue, 21 Sep 2021 05:36:25 GMT", "version": "v3" } ]
2021-09-22
[ [ "Liu", "Yiming", "" ], [ "Liu", "Erwu", "" ], [ "Wang", "Rui", "" ], [ "Han", "Zhu", "" ], [ "Lu", "Binyu", "" ] ]
To achieve the joint active and passive beamforming gains in the reconfigurable intelligent surface assisted millimeter wave system, the reflected cascade channel needs to be accurately estimated. Many strategies have been proposed in the literature to solve this issue. However, whether the Cram\'er-Rao lower bound (CRLB) of such estimation is achievable still remains uncertain. To fill this gap, we first convert the channel estimation problem into a sparse signal recovery problem by utilizing the properties of discrete Fourier transform matrix and Kronecker product. Then, a joint typicality based estimator is utilized to carry out the signal recovery task. We show that, through both mathematical proofs and numerical simulations, the solution proposed in this letter can in fact asymptotically achieve the CRLB.
1606.03021
Minh-Duc Hua
Minh-Duc Hua, Jochen Trumpf, Tarek Hamel, Robert Mahony, and Pascal Morin
Feature-based Recursive Observer Design for Homography Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new algorithm for online estimation of a sequence of homographies applicable to image sequences obtained from robotic vehicles equipped with vision sensors. The approach taken exploits the underlying Special Linear group structure of the set of homographies along with gyroscope measurements and direct point-feature correspondences between images to develop a temporal filter for the homography estimate. Theoretical analysis and experimental results are provided to demonstrate the robustness of the proposed algorithm. The experimental results show excellent performance even in the case of very fast camera motion (relative to frame rate), severe occlusion, and in the presence of specular reflections.
[ { "created": "Thu, 9 Jun 2016 16:35:46 GMT", "version": "v1" } ]
2016-06-10
[ [ "Hua", "Minh-Duc", "" ], [ "Trumpf", "Jochen", "" ], [ "Hamel", "Tarek", "" ], [ "Mahony", "Robert", "" ], [ "Morin", "Pascal", "" ] ]
This paper presents a new algorithm for online estimation of a sequence of homographies applicable to image sequences obtained from robotic vehicles equipped with vision sensors. The approach taken exploits the underlying Special Linear group structure of the set of homographies along with gyroscope measurements and direct point-feature correspondences between images to develop a temporal filter for the homography estimate. Theoretical analysis and experimental results are provided to demonstrate the robustness of the proposed algorithm. The experimental results show excellent performance even in the case of very fast camera motion (relative to frame rate), severe occlusion, and in the presence of specular reflections.
0903.1146
Toby Walsh
Toby Walsh
Breaking Value Symmetry
Principles and Practice of Constraint Programming - CP 2007, 13th International Conference, CP 2007, Providence, RI, USA, September 23-27, 2007, Proceedings. Lecture Notes in Computer Science 4741 Springer 2007, ISBN 978-3-540-74969-
null
null
COMIC-2007-008
cs.AI cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One common type of symmetry is when values are symmetric. For example, if we are assigning colours (values) to nodes (variables) in a graph colouring problem then we can uniformly interchange the colours throughout a colouring. For a problem with value symmetries, all symmetric solutions can be eliminated in polynomial time. However, as we show here, both static and dynamic methods to deal with symmetry have computational limitations. With static methods, pruning all symmetric values is NP-hard in general. With dynamic methods, we can take exponential time on problems which static methods solve without search.
[ { "created": "Fri, 6 Mar 2009 03:50:17 GMT", "version": "v1" } ]
2009-03-09
[ [ "Walsh", "Toby", "" ] ]
One common type of symmetry is when values are symmetric. For example, if we are assigning colours (values) to nodes (variables) in a graph colouring problem then we can uniformly interchange the colours throughout a colouring. For a problem with value symmetries, all symmetric solutions can be eliminated in polynomial time. However, as we show here, both static and dynamic methods to deal with symmetry have computational limitations. With static methods, pruning all symmetric values is NP-hard in general. With dynamic methods, we can take exponential time on problems which static methods solve without search.
1612.04459
Xiaohu Ge
Xiaohu Ge, Jiaqi Chen, Songxue Ying, Min Chen
Energy and coverage efficiency trade-off in 5G small cell networks
Our work needs further polish
null
null
null
cs.NI cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
When small cells are densely deployed in fifth generation (5G) cellular networks, the base station (BS) switch-off strategy is an effective approach for reducing energy consumption in response to changes in traffic load. In general, the loss of coverage efficiency is an inevitable cost for cellular networks adopting BS switch-off strategies. Based on the BS switch-off strategy, an optimized energy density efficiency of hard core point process (HCPP) small cell networks is proposed to trade off energy and coverage efficiency. Simulation results imply that a minimum active BS distance of 150 meters is recommended for the BS switch-off strategy to achieve a tradeoff between energy and coverage efficiency in 5G small cell networks.
[ { "created": "Wed, 14 Dec 2016 02:17:20 GMT", "version": "v1" }, { "created": "Wed, 5 Jul 2017 10:54:52 GMT", "version": "v2" } ]
2017-07-06
[ [ "Ge", "Xiaohu", "" ], [ "Chen", "Jiaqi", "" ], [ "Ying", "Songxue", "" ], [ "Chen", "Min", "" ] ]
When small cells are densely deployed in the fifth generation (5G) cellular networks, the base stations (BSs) switch-off strategy is an effective approach for saving energy consumption considering changes of traffic load. In general, the loss of coverage efficiency is an inevitable cost for cellular networks adopting BSs switch-off strategies. Based on the BSs switch-off strategy, an optimized energy density efficiency of hard core point process (HCPP) small cell networks is proposed to trade off the energy and coverage efficiency. Simulation results imply that the minimum active BS distance used for the BSs switch-off strategy is recommended as 150 meters to achieve a tradeoff between energy and coverage efficiency in 5G small cell networks.
2105.05454
Darja Smite
Darja Smite, Marius Mikalsen, Nils B. Moe, Viktoria Stray and Eriks Klotins
From Collaboration to Solitude and Back: Remote Pair Programming during COVID-19
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Along with the increasing popularity of agile software development, software work has become much more social than ever. Contemporary software teams rely on a variety of collaborative practices, such as pair programming, the topic of our study. Many agilists advocated the importance of collocation, face-to-face interaction, and physical artefacts incorporated in the shared workspace, which the COVID-19 pandemic made unavailable; most software companies around the world were forced to send their engineers to work from home. As software projects and teams overnight turned into distributed collaborations, we question what happened to the pair programming practice in the work-from-home mode. This paper reports on a longitudinal study of remote pair programming in two companies. We conducted 38 interviews with 30 engineers from Norway, Sweden, and the USA, and used the results of a survey in one of the case companies. Our study is unique as we collected the data longitudinally in April/May 2020, Sep/Oct 2020, and Jan/Feb 2021. We found that pair programming has decreased and some interviewees report not pairing at all for almost a full year. The experiences of those who paired vary from actively co-editing the code by using special tools to more passively co-reading and discussing the code and solutions by sharing the screen. Finally, we found that the interest in and the use of PP over time, since the first months of forced work from home to early 2021, has admittedly increased, also as a social practice.
[ { "created": "Wed, 12 May 2021 06:38:22 GMT", "version": "v1" } ]
2021-05-13
[ [ "Smite", "Darja", "" ], [ "Mikalsen", "Marius", "" ], [ "Moe", "Nils B.", "" ], [ "Stray", "Viktoria", "" ], [ "Klotins", "Eriks", "" ] ]
Along with the increasing popularity of agile software development, software work has become much more social than ever. Contemporary software teams rely on a variety of collaborative practices, such as pair programming, the topic of our study. Many agilists advocated the importance of collocation, face-to-face interaction, and physical artefacts incorporated in the shared workspace, which the COVID-19 pandemic made unavailable; most software companies around the world were forced to send their engineers to work from home. As software projects and teams overnight turned into distributed collaborations, we question what happened to the pair programming practice in the work-from-home mode. This paper reports on a longitudinal study of remote pair programming in two companies. We conducted 38 interviews with 30 engineers from Norway, Sweden, and the USA, and used the results of a survey in one of the case companies. Our study is unique as we collected the data longitudinally in April/May 2020, Sep/Oct 2020, and Jan/Feb 2021. We found that pair programming has decreased and some interviewees report not pairing at all for almost a full year. The experiences of those who paired vary from actively co-editing the code by using special tools to more passively co-reading and discussing the code and solutions by sharing the screen. Finally, we found that the interest in and the use of PP over time, since the first months of forced work from home to early 2021, has admittedly increased, also as a social practice.
2012.10695
Quoc Phong Nguyen
Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
An Information-Theoretic Framework for Unifying Active Learning Problems
35th AAAI Conference on Artificial Intelligence (AAAI 2021), Extended version with derivations, 12 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant. We first introduce a novel active learning criterion that subsumes an existing LSE algorithm and achieves state-of-the-art performance in LSE problems with a continuous input domain. Then, by exploiting the relationship between LSE and BO, we design a competitive information-theoretic acquisition function for BO that has interesting connections to upper confidence bound and max-value entropy search (MES). The latter connection reveals a drawback of MES which has important implications on not only MES but also on other MES-based acquisition functions. Finally, our unifying information-theoretic framework can be applied to solve a generalized problem of LSE and BO involving multiple level sets in a data-efficient manner. We empirically evaluate the performance of our proposed algorithms using synthetic benchmark functions, a real-world dataset, and in hyperparameter tuning of machine learning models.
[ { "created": "Sat, 19 Dec 2020 14:22:48 GMT", "version": "v1" } ]
2020-12-22
[ [ "Nguyen", "Quoc Phong", "" ], [ "Low", "Bryan Kian Hsiang", "" ], [ "Jaillet", "Patrick", "" ] ]
This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant. We first introduce a novel active learning criterion that subsumes an existing LSE algorithm and achieves state-of-the-art performance in LSE problems with a continuous input domain. Then, by exploiting the relationship between LSE and BO, we design a competitive information-theoretic acquisition function for BO that has interesting connections to upper confidence bound and max-value entropy search (MES). The latter connection reveals a drawback of MES which has important implications on not only MES but also on other MES-based acquisition functions. Finally, our unifying information-theoretic framework can be applied to solve a generalized problem of LSE and BO involving multiple level sets in a data-efficient manner. We empirically evaluate the performance of our proposed algorithms using synthetic benchmark functions, a real-world dataset, and in hyperparameter tuning of machine learning models.
1805.06665
Bin He
Bin He, Yi Guan, Rui Dai
Classifying medical relations in clinical text via convolutional neural networks
Accepted by Artificial Intelligence In Medicine
null
10.1016/j.artmed.2018.05.001
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate these models, which do not depend on any external features, outperform previous single-model methods and our best model is competitive with the existing ensemble-based method.
[ { "created": "Thu, 17 May 2018 09:20:52 GMT", "version": "v1" } ]
2018-05-18
[ [ "He", "Bin", "" ], [ "Guan", "Yi", "" ], [ "Dai", "Rui", "" ] ]
Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate these models, which do not depend on any external features, outperform previous single-model methods and our best model is competitive with the existing ensemble-based method.
1003.4146
Michael Bommarito II
Michael J. Bommarito II, Daniel Martin Katz
A Mathematical Approach to the Study of the United States Code
5 pages, 6 figures, 2 tables.
null
10.1016/j.physa.2010.05.057
null
cs.IR cs.CY cs.DL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
[ { "created": "Mon, 22 Mar 2010 12:41:01 GMT", "version": "v1" } ]
2015-05-18
[ [ "Bommarito", "Michael J.", "II" ], [ "Katz", "Daniel Martin", "" ] ]
The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
2305.04684
Kazuki Osawa
Kazuki Osawa, Satoki Ishikawa, Rio Yokota, Shigang Li, and Torsten Hoefler
ASDL: A Unified Interface for Gradient Preconditioning in PyTorch
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Gradient preconditioning is a key technique to integrate the second-order information into gradients for improving and extending gradient-based learning algorithms. In deep learning, stochasticity, nonconvexity, and high dimensionality lead to a wide variety of gradient preconditioning methods, with implementation complexity and inconsistent performance and feasibility. We propose the Automatic Second-order Differentiation Library (ASDL), an extension library for PyTorch, which offers various implementations and a plug-and-play unified interface for gradient preconditioning. ASDL enables the study and structured comparison of a range of gradient preconditioning methods.
[ { "created": "Mon, 8 May 2023 12:59:49 GMT", "version": "v1" } ]
2023-05-09
[ [ "Osawa", "Kazuki", "" ], [ "Ishikawa", "Satoki", "" ], [ "Yokota", "Rio", "" ], [ "Li", "Shigang", "" ], [ "Hoefler", "Torsten", "" ] ]
Gradient preconditioning is a key technique to integrate the second-order information into gradients for improving and extending gradient-based learning algorithms. In deep learning, stochasticity, nonconvexity, and high dimensionality lead to a wide variety of gradient preconditioning methods, with implementation complexity and inconsistent performance and feasibility. We propose the Automatic Second-order Differentiation Library (ASDL), an extension library for PyTorch, which offers various implementations and a plug-and-play unified interface for gradient preconditioning. ASDL enables the study and structured comparison of a range of gradient preconditioning methods.
1810.12737
Ciriaco Andrea D'Angelo
Giovanni Abramo, Tindaro Cicero, Ciriaco Andrea D'Angelo
Should the research performance of scientists be distinguished by gender?
null
Abramo, G., Cicero, T., D'Angelo, C.A. (2015). Should the research performance of scientists be distinguished by gender? Journal of Informetrics, 9(1), 25-38
10.1016/j.joi.2014.11.002
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The literature on gender differences in research performance seems to suggest a gap between men and women, where the former outperform the latter. Whether one agrees with the different factors proposed to explain the phenomenon, it is worthwhile to verify if comparing the performance within each gender, rather than without distinction, gives significantly different ranking lists. If there were some structural factor that determined a penalty in performance of female researchers compared to their male peers, then under conditions of equal capacities of men and women, any comparative evaluations of individual performance that fail to account for gender differences would lead to distortion of the judgments in favor of men. In this work we measure the extent of differences in rank between the two methods of comparing performance in each field of the hard sciences: for professors in the Italian university system, we compare the distributions of research performance for men and women and subsequently the ranking lists with and without distinction by gender. The results are of interest for the optimization of efficient selection in formulation of recruitment, career advancement and incentive schemes.
[ { "created": "Tue, 30 Oct 2018 13:54:47 GMT", "version": "v1" } ]
2018-10-31
[ [ "Abramo", "Giovanni", "" ], [ "Cicero", "Tindaro", "" ], [ "D'Angelo", "Ciriaco Andrea", "" ] ]
The literature on gender differences in research performance seems to suggest a gap between men and women, where the former outperform the latter. Whether one agrees with the different factors proposed to explain the phenomenon, it is worthwhile to verify if comparing the performance within each gender, rather than without distinction, gives significantly different ranking lists. If there were some structural factor that determined a penalty in performance of female researchers compared to their male peers, then under conditions of equal capacities of men and women, any comparative evaluations of individual performance that fail to account for gender differences would lead to distortion of the judgments in favor of men. In this work we measure the extent of differences in rank between the two methods of comparing performance in each field of the hard sciences: for professors in the Italian university system, we compare the distributions of research performance for men and women and subsequently the ranking lists with and without distinction by gender. The results are of interest for the optimization of efficient selection in formulation of recruitment, career advancement and incentive schemes.
1511.03576
Mohammad Khabbaz
Mohammad Khabbaz
DataGrinder: Fast, Accurate, Fully non-Parametric Classification Approach Using 2D Convex Hulls
null
null
null
null
cs.DB cs.CG cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been a long time, since data mining technologies have made their ways to the field of data management. Classification is one of the most important data mining tasks for label prediction, categorization of objects into groups, advertisement and data management. In this paper, we focus on the standard classification problem which is predicting unknown labels in Euclidean space. Most efforts in Machine Learning communities are devoted to methods that use probabilistic algorithms which are heavy on Calculus and Linear Algebra. Most of these techniques have scalability issues for big data, and are hardly parallelizable if they are to maintain their high accuracies in their standard form. Sampling is a new direction for improving scalability, using many small parallel classifiers. In this paper, rather than conventional sampling methods, we focus on a discrete classification algorithm with O(n) expected running time. Our approach performs a similar task as sampling methods. However, we use column-wise sampling of data, rather than the row-wise sampling used in the literature. In either case, our algorithm is completely deterministic. Our algorithm, proposes a way of combining 2D convex hulls in order to achieve high classification accuracy as well as scalability in the same time. First, we thoroughly describe and prove our O(n) algorithm for finding the convex hull of a point set in 2D. Then, we show with experiments our classifier model built based on this idea is very competitive compared with existing sophisticated classification algorithms included in commercial statistical applications such as MATLAB.
[ { "created": "Wed, 11 Nov 2015 17:06:35 GMT", "version": "v1" } ]
2015-11-12
[ [ "Khabbaz", "Mohammad", "" ] ]
It has been a long time, since data mining technologies have made their ways to the field of data management. Classification is one of the most important data mining tasks for label prediction, categorization of objects into groups, advertisement and data management. In this paper, we focus on the standard classification problem which is predicting unknown labels in Euclidean space. Most efforts in Machine Learning communities are devoted to methods that use probabilistic algorithms which are heavy on Calculus and Linear Algebra. Most of these techniques have scalability issues for big data, and are hardly parallelizable if they are to maintain their high accuracies in their standard form. Sampling is a new direction for improving scalability, using many small parallel classifiers. In this paper, rather than conventional sampling methods, we focus on a discrete classification algorithm with O(n) expected running time. Our approach performs a similar task as sampling methods. However, we use column-wise sampling of data, rather than the row-wise sampling used in the literature. In either case, our algorithm is completely deterministic. Our algorithm, proposes a way of combining 2D convex hulls in order to achieve high classification accuracy as well as scalability in the same time. First, we thoroughly describe and prove our O(n) algorithm for finding the convex hull of a point set in 2D. Then, we show with experiments our classifier model built based on this idea is very competitive compared with existing sophisticated classification algorithms included in commercial statistical applications such as MATLAB.
2109.13916
Dan Hendrycks
Dan Hendrycks and Nicholas Carlini and John Schulman and Jacob Steinhardt
Unsolved Problems in ML Safety
Position Paper
null
null
null
cs.LG cs.AI cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards ("Robustness"), identifying hazards ("Monitoring"), reducing inherent model hazards ("Alignment"), and reducing systemic hazards ("Systemic Safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.
[ { "created": "Tue, 28 Sep 2021 17:59:36 GMT", "version": "v1" }, { "created": "Sat, 30 Oct 2021 19:41:22 GMT", "version": "v2" }, { "created": "Sat, 25 Dec 2021 19:27:40 GMT", "version": "v3" }, { "created": "Fri, 29 Apr 2022 17:41:33 GMT", "version": "v4" }, { "created": "Thu, 16 Jun 2022 21:12:42 GMT", "version": "v5" } ]
2022-06-20
[ [ "Hendrycks", "Dan", "" ], [ "Carlini", "Nicholas", "" ], [ "Schulman", "John", "" ], [ "Steinhardt", "Jacob", "" ] ]
Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards ("Robustness"), identifying hazards ("Monitoring"), reducing inherent model hazards ("Alignment"), and reducing systemic hazards ("Systemic Safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.
2402.17262
Zhenhong Zhou
Zhenhong Zhou, Jiuyang Xiang, Haopeng Chen, Quan Liu, Zherui Li, Sen Su
Speak Out of Turn: Safety Vulnerability of Large Language Models in Multi-turn Dialogue
working in progress 23pages, 18 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have been demonstrated to generate illegal or unethical responses, particularly when subjected to "jailbreak." Research on jailbreak has highlighted the safety issues of LLMs. However, prior studies have predominantly focused on single-turn dialogue, ignoring the potential complexities and risks presented by multi-turn dialogue, a crucial mode through which humans derive information from LLMs. In this paper, we argue that humans could exploit multi-turn dialogue to induce LLMs into generating harmful information. LLMs may not intend to reject cautionary or borderline unsafe queries, even if each turn is closely served for one malicious purpose in a multi-turn dialogue. Therefore, by decomposing an unsafe query into several sub-queries for multi-turn dialogue, we induced LLMs to answer harmful sub-questions incrementally, culminating in an overall harmful response. Our experiments, conducted across a wide range of LLMs, indicate current inadequacies in the safety mechanisms of LLMs in multi-turn dialogue. Our findings expose vulnerabilities of LLMs in complex scenarios involving multi-turn dialogue, presenting new challenges for the safety of LLMs.
[ { "created": "Tue, 27 Feb 2024 07:11:59 GMT", "version": "v1" } ]
2024-02-28
[ [ "Zhou", "Zhenhong", "" ], [ "Xiang", "Jiuyang", "" ], [ "Chen", "Haopeng", "" ], [ "Liu", "Quan", "" ], [ "Li", "Zherui", "" ], [ "Su", "Sen", "" ] ]
Large Language Models (LLMs) have been demonstrated to generate illegal or unethical responses, particularly when subjected to "jailbreak." Research on jailbreak has highlighted the safety issues of LLMs. However, prior studies have predominantly focused on single-turn dialogue, ignoring the potential complexities and risks presented by multi-turn dialogue, a crucial mode through which humans derive information from LLMs. In this paper, we argue that humans could exploit multi-turn dialogue to induce LLMs into generating harmful information. LLMs may not intend to reject cautionary or borderline unsafe queries, even if each turn is closely served for one malicious purpose in a multi-turn dialogue. Therefore, by decomposing an unsafe query into several sub-queries for multi-turn dialogue, we induced LLMs to answer harmful sub-questions incrementally, culminating in an overall harmful response. Our experiments, conducted across a wide range of LLMs, indicate current inadequacies in the safety mechanisms of LLMs in multi-turn dialogue. Our findings expose vulnerabilities of LLMs in complex scenarios involving multi-turn dialogue, presenting new challenges for the safety of LLMs.
1404.1820
Derrick Wing Kwan Ng
Derrick Wing Kwan Ng and Robert Schober
Max-min Fair Wireless Energy Transfer for Secure Multiuser Communication Systems
5 pages, invited paper, IEEE Information Theory Workshop 2014, Hobart, Tasmania, Australia, Nov. 2014
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers max-min fairness for wireless energy transfer in a downlink multiuser communication system. Our resource allocation design maximizes the minimum harvested energy among multiple multiple-antenna energy harvesting receivers (potential eavesdroppers) while providing quality of service (QoS) for secure communication to multiple single-antenna information receivers. In particular, the algorithm design is formulated as a non-convex optimization problem which takes into account a minimum required signal-to-interference-plus-noise ratio (SINR) constraint at the information receivers and a constraint on the maximum tolerable channel capacity achieved by the energy harvesting receivers for a given transmit power budget. The proposed problem formulation exploits the dual use of artificial noise generation for facilitating efficient wireless energy transfer and secure communication. A semidefinite programming (SDP) relaxation approach is exploited to obtain a global optimal solution of the considered problem. Simulation results demonstrate the significant performance gain in harvested energy that is achieved by the proposed optimal scheme compared to two simple baseline schemes.
[ { "created": "Mon, 7 Apr 2014 15:53:18 GMT", "version": "v1" } ]
2014-04-08
[ [ "Ng", "Derrick Wing Kwan", "" ], [ "Schober", "Robert", "" ] ]
This paper considers max-min fairness for wireless energy transfer in a downlink multiuser communication system. Our resource allocation design maximizes the minimum harvested energy among multiple multiple-antenna energy harvesting receivers (potential eavesdroppers) while providing quality of service (QoS) for secure communication to multiple single-antenna information receivers. In particular, the algorithm design is formulated as a non-convex optimization problem which takes into account a minimum required signal-to-interference-plus-noise ratio (SINR) constraint at the information receivers and a constraint on the maximum tolerable channel capacity achieved by the energy harvesting receivers for a given transmit power budget. The proposed problem formulation exploits the dual use of artificial noise generation for facilitating efficient wireless energy transfer and secure communication. A semidefinite programming (SDP) relaxation approach is exploited to obtain a global optimal solution of the considered problem. Simulation results demonstrate the significant performance gain in harvested energy that is achieved by the proposed optimal scheme compared to two simple baseline schemes.
1608.07846
Henry Kim
Henry M. Kim, Jackie Ho Nam Cheung, Marek Laskowski, Iryna Gel
Data Analytics using Ontologies of Management Theories: Towards Implementing 'From Theory to Practice'
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore how computational ontologies can be impactful vis-a-vis the developing discipline of "data science." We posit an approach wherein management theories are represented as formal axioms, and then applied to draw inferences about data that reside in corporate databases. That is, management theories would be implemented as rules within a data analytics engine. We demonstrate a case study development of such an ontology by formally representing an accounting theory in First-Order Logic. Though quite preliminary, the idea that an information technology, namely ontologies, can potentially actualize the academic cliche, "From Theory to Practice," and be applicable to the burgeoning domain of data analytics is novel and exciting.
[ { "created": "Sun, 28 Aug 2016 19:51:31 GMT", "version": "v1" } ]
2016-08-30
[ [ "Kim", "Henry M.", "" ], [ "Cheung", "Jackie Ho Nam", "" ], [ "Laskowski", "Marek", "" ], [ "Gel", "Iryna", "" ] ]
We explore how computational ontologies can be impactful vis-a-vis the developing discipline of "data science." We posit an approach wherein management theories are represented as formal axioms, and then applied to draw inferences about data that reside in corporate databases. That is, management theories would be implemented as rules within a data analytics engine. We demonstrate a case study development of such an ontology by formally representing an accounting theory in First-Order Logic. Though quite preliminary, the idea that an information technology, namely ontologies, can potentially actualize the academic cliche, "From Theory to Practice," and be applicable to the burgeoning domain of data analytics is novel and exciting.
2407.05339
Jakob Mokander
Jakob M\"okander and Margi Sheth and Mimmi Gersbro-Sundler and Peder Blomgren and Luciano Floridi
Challenges and Best Practices in Corporate AI Governance:Lessons from the Biopharmaceutical Industry
null
Frontiers in Computer Science (2022)
10.3389/fcomp.2022.1068361
null
cs.CY cs.AI
http://creativecommons.org/licenses/by/4.0/
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
[ { "created": "Sun, 7 Jul 2024 12:01:42 GMT", "version": "v1" } ]
2024-07-09
[ [ "Mökander", "Jakob", "" ], [ "Sheth", "Margi", "" ], [ "Gersbro-Sundler", "Mimmi", "" ], [ "Blomgren", "Peder", "" ], [ "Floridi", "Luciano", "" ] ]
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
1112.2516
Jesper Schneider jws
Jesper W. Schneider
Caveats for using statistical significance tests in research assessments
Accepted version for Journal of Informetrics
null
10.1016/j.joi.2012.08.005
null
cs.DL stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to citation indicators, interpretations of them, or the decision-making processes based upon them. On the contrary, their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the social sciences, including scientometrics, and we encourage a reform on these matters.
[ { "created": "Mon, 12 Dec 2011 11:57:12 GMT", "version": "v1" }, { "created": "Tue, 25 Sep 2012 07:15:27 GMT", "version": "v2" } ]
2012-09-26
[ [ "Schneider", "Jesper W.", "" ] ]
This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to citation indicators, interpretations of them, or the decision-making processes based upon them. On the contrary, their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the social sciences, including scientometrics, and we encourage a reform on these matters.
2002.08562
Dianbo Liu Dr
Dianbo Liu, Tim Miller
Federated pretraining and fine tuning of BERT using clinical notes from multiple silos
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years. However, in certain areas like healthcare, accessing diverse large-scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine-tune BERT models in a federated manner using clinical texts from different silos without moving the data.
[ { "created": "Thu, 20 Feb 2020 04:14:35 GMT", "version": "v1" } ]
2020-02-21
[ [ "Liu", "Dianbo", "" ], [ "Miller", "Tim", "" ] ]
Large-scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years. However, in certain areas like healthcare, accessing diverse large-scale text data from multiple institutions is extremely challenging due to privacy and regulatory reasons. In this article, we show that it is possible to both pretrain and fine-tune BERT models in a federated manner using clinical texts from different silos without moving the data.
2305.01275
Peng-Tao Jiang
Peng-Tao Jiang, Yuqi Yang
Segment Anything is A Good Pseudo-label Generator for Weakly Supervised Semantic Segmentation
Technical report
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Weakly supervised semantic segmentation with weak labels is a long-standing ill-posed problem. Mainstream methods mainly focus on improving the quality of pseudo labels. In this report, we attempt to explore the potential of 'prompt to masks' from the powerful class-agnostic large segmentation model, segment-anything. Specifically, different weak labels are used as prompts to the segment-anything model, generating precise class masks. The class masks are utilized to generate pseudo labels to train the segmentation networks. We have conducted extensive experiments on the PASCAL VOC 2012 dataset. Experiments demonstrate that segment-anything can serve as a good pseudo-label generator. The code will be made publicly available.
[ { "created": "Tue, 2 May 2023 09:22:38 GMT", "version": "v1" } ]
2023-05-03
[ [ "Jiang", "Peng-Tao", "" ], [ "Yang", "Yuqi", "" ] ]
Weakly supervised semantic segmentation with weak labels is a long-standing ill-posed problem. Mainstream methods mainly focus on improving the quality of pseudo labels. In this report, we attempt to explore the potential of 'prompt to masks' from the powerful class-agnostic large segmentation model, segment-anything. Specifically, different weak labels are used as prompts to the segment-anything model, generating precise class masks. The class masks are utilized to generate pseudo labels to train the segmentation networks. We have conducted extensive experiments on the PASCAL VOC 2012 dataset. Experiments demonstrate that segment-anything can serve as a good pseudo-label generator. The code will be made publicly available.