id: stringlengths (9-10)
submitter: stringlengths (1-64)
authors: stringlengths (4-20.7k)
title: stringlengths (4-246)
comments: stringlengths (1-523)
journal-ref: stringlengths (4-404)
doi: stringlengths (11-153)
report-no: stringlengths (2-254)
categories: stringlengths (5-98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14-3.35k)
versions: listlengths (1-60)
update_date: stringlengths (10-10)
authors_parsed: listlengths (1-1.35k)
abstract: stringlengths (11-3.34k)
2007.13442
Michal Valko
Pierre M\'enard, Omar Darwiche Domingues, Anders Jonsson, Emilie Kaufmann, Edouard Leurent, Michal Valko
Fast active learning for pure exploration in reinforcement learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, the feedback, in the beginning, can be completely absent, and the agents may first choose to devote all their effort on exploring efficiently. The exploration remains a challenge while it has been addressed with many hand-tuned heuristics with different levels of generality on one side, and a few theoretically-backed exploration strategies on the other. Many of them are incarnated by intrinsic motivation and in particular explorations bonuses. A common rule of thumb for exploration bonuses is to use $1/\sqrt{n}$ bonus that is added to the empirical estimates of the reward, where $n$ is a number of times this particular state (or a state-action pair) was visited. We show that, surprisingly, for a pure-exploration objective of reward-free exploration, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$. Furthermore, we show that with an improved analysis of the stopping time, we can improve by a factor $H$ the sample complexity in the best-policy identification setting, which is another pure-exploration objective, where the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
[ { "created": "Mon, 27 Jul 2020 11:28:32 GMT", "version": "v1" }, { "created": "Sat, 10 Oct 2020 17:15:28 GMT", "version": "v2" } ]
2020-10-13
[ [ "Ménard", "Pierre", "" ], [ "Domingues", "Omar Darwiche", "" ], [ "Jonsson", "Anders", "" ], [ "Kaufmann", "Emilie", "" ], [ "Leurent", "Edouard", "" ], [ "Valko", "Michal", "" ] ]
Realistic environments often provide agents with very limited feedback. When the environment is initially unknown, feedback can be completely absent at the beginning, and the agents may first choose to devote all their effort to exploring efficiently. Exploration remains a challenge: it has been addressed with many hand-tuned heuristics of varying generality on one side, and with a few theoretically backed exploration strategies on the other. Many of them are incarnated by intrinsic motivation, and in particular by exploration bonuses. A common rule of thumb for exploration bonuses is to add a $1/\sqrt{n}$ bonus to the empirical estimates of the reward, where $n$ is the number of times a particular state (or state-action pair) has been visited. We show that, surprisingly, for the pure-exploration objective of reward-free exploration, bonuses that scale with $1/n$ bring faster learning rates, improving the known upper bounds with respect to the dependence on the horizon $H$. Furthermore, we show that with an improved analysis of the stopping time, we can improve by a factor of $H$ the sample complexity in the best-policy identification setting, another pure-exploration objective, in which the environment provides rewards but the agent is not penalized for its behavior during the exploration phase.
2303.01639
Jun Rekimoto
Jun Rekimoto
WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for Whisper-based Speech Interactions
ACM CHI 2023 paper
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23--28, 2023
10.1145/3544548.3580706
null
cs.SD cs.HC eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing whispered speech and converting it to normal speech creates many possibilities for speech interaction. Because the sound pressure of whispered speech is significantly lower than that of normal speech, it can be used as a semi-silent speech interaction in public places without being audible to others. Converting whispers to normal speech also improves the speech quality for people with speech or hearing impairments. However, conventional speech conversion techniques do not provide sufficient conversion quality or require speaker-dependent datasets consisting of pairs of whispered and normal speech utterances. To address these problems, we propose WESPER, a zero-shot, real-time whisper-to-normal speech conversion mechanism based on self-supervised learning. WESPER consists of a speech-to-unit (STU) encoder, which generates hidden speech units common to both whispered and normal speech, and a unit-to-speech (UTS) decoder, which reconstructs speech from the encoded speech units. Unlike the existing methods, this conversion is user-independent and does not require a paired dataset for whispered and normal speech. The UTS decoder can reconstruct speech in any target speaker's voice from speech units, and it requires only an unlabeled target speaker's speech data. We confirmed that the quality of the speech converted from a whisper was improved while preserving its natural prosody. Additionally, we confirmed the effectiveness of the proposed approach to perform speech reconstruction for people with speech or hearing disabilities. (project page: http://lab.rekimoto.org/projects/wesper )
[ { "created": "Fri, 3 Mar 2023 00:10:25 GMT", "version": "v1" } ]
2023-03-06
[ [ "Rekimoto", "Jun", "" ] ]
Recognizing whispered speech and converting it to normal speech creates many possibilities for speech interaction. Because the sound pressure of whispered speech is significantly lower than that of normal speech, it can be used as a semi-silent speech interaction in public places without being audible to others. Converting whispers to normal speech also improves the speech quality for people with speech or hearing impairments. However, conventional speech conversion techniques do not provide sufficient conversion quality or require speaker-dependent datasets consisting of pairs of whispered and normal speech utterances. To address these problems, we propose WESPER, a zero-shot, real-time whisper-to-normal speech conversion mechanism based on self-supervised learning. WESPER consists of a speech-to-unit (STU) encoder, which generates hidden speech units common to both whispered and normal speech, and a unit-to-speech (UTS) decoder, which reconstructs speech from the encoded speech units. Unlike the existing methods, this conversion is user-independent and does not require a paired dataset for whispered and normal speech. The UTS decoder can reconstruct speech in any target speaker's voice from speech units, and it requires only an unlabeled target speaker's speech data. We confirmed that the quality of the speech converted from a whisper was improved while preserving its natural prosody. Additionally, we confirmed the effectiveness of the proposed approach to perform speech reconstruction for people with speech or hearing disabilities. (project page: http://lab.rekimoto.org/projects/wesper )
2305.19818
Marina Munkhoeva
Marina Munkhoeva, Ivan Oseledets
Spectral Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning
12 pages, 3 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised methods received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels. A growing body of literature is already being published in an attempt to build a coherent and theoretically grounded understanding of the workings of a zoo of losses used in modern self-supervised representation learning methods. In this paper, we attempt to provide an understanding from the perspective of a Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem. To this end, we leverage the results from low-rank matrix completion to provide theoretical analysis on the convergence of modern SSL methods and a key property that affects their downstream performance.
[ { "created": "Wed, 31 May 2023 13:02:06 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2023 15:45:09 GMT", "version": "v2" } ]
2023-10-31
[ [ "Munkhoeva", "Marina", "" ], [ "Oseledets", "Ivan", "" ] ]
Self-supervised methods have received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels. A growing body of literature is already being published in an attempt to build a coherent and theoretically grounded understanding of the workings of the zoo of losses used in modern self-supervised representation learning methods. In this paper, we attempt to provide an understanding from the perspective of the Laplace operator and connect the inductive bias stemming from the augmentation process to a low-rank matrix completion problem. To this end, we leverage results from low-rank matrix completion to provide a theoretical analysis of the convergence of modern SSL methods and of a key property that affects their downstream performance.
1902.06440
Anas El Ankouri
Anas El Ankouri (IMT Atlantique), Luiz Neto, Ali Sanhaji, Sylvain Barthomeuf, Hugues Le Bras, Bertrand Le Guyader, Abdelatif Chagdali, Minqi Wang, N. Genay, K. Grzybowski, Sophie Durel, P. Chanclou
Experimental Demonstration of Real-time PDCP-RLC V-RAN Split Transmission over Fixed XGS-PON access
null
ECOC 2018 (EUROPEAN CONFERENCE OF OPTICAL COMMUNICATION), Sep 2018, ROME, Italy
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we experimentally assess the transmission of a PDCP-RLC virtualised RAN split interface through a commercial XGS-PON system. We investigate the impacts of DBA on the uplink and packet jitter on the downlink.
[ { "created": "Mon, 18 Feb 2019 07:59:03 GMT", "version": "v1" } ]
2019-02-19
[ [ "Ankouri", "Anas El", "", "IMT Atlantique" ], [ "Neto", "Luiz", "" ], [ "Sanhaji", "Ali", "" ], [ "Barthomeuf", "Sylvain", "" ], [ "Bras", "Hugues Le", "" ], [ "Guyader", "Bertrand Le", "" ], [ "Chagdali", "Abdelatif", "" ], [ "Wang", "Minqi", "" ], [ "Genay", "N.", "" ], [ "Grzybowski", "K.", "" ], [ "Durel", "Sophie", "" ], [ "Chanclou", "P.", "" ] ]
In this work, we experimentally assess the transmission of a PDCP-RLC virtualised RAN split interface through a commercial XGS-PON system. We investigate the impacts of DBA on the uplink and packet jitter on the downlink.
0708.0505
Luca Di Gaspero PhD
Luca Di Gaspero, Andrea Roli
A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem
22 pages, 4 figures. Technical Report no. DEIS-LIA-006-07, DEIS - Alma Mater Studiorum, University of Bologna
null
null
DEIS-LIA-006-07
cs.AI cs.CE cs.DM q-bio.QM
null
Haplotype Inference is a challenging problem in bioinformatics that consists in inferring the basic genetic constitution of diploid organisms on the basis of their genotype. This information allows researchers to perform association studies for the genetic variants involved in diseases and the individual responses to therapeutic agents. A notable approach to the problem is to encode it as a combinatorial problem (under certain hypotheses, such as the pure parsimony criterion) and to solve it using off-the-shelf combinatorial optimization techniques. The main methods applied to Haplotype Inference are either simple greedy heuristic or exact methods (Integer Linear Programming, Semidefinite Programming, SAT encoding) that, at present, are adequate only for moderate size instances. We believe that metaheuristic and hybrid approaches could provide a better scalability. Moreover, metaheuristics can be very easily combined with problem specific heuristics and they can also be integrated with tree-based search techniques, thus providing a promising framework for hybrid systems in which a good trade-off between effectiveness and efficiency can be reached. In this paper we illustrate a feasibility study of the approach and discuss some relevant design issues, such as modeling and design of approximate solvers that combine constructive heuristics, local search-based improvement strategies and learning mechanisms. Besides the relevance of the Haplotype Inference problem itself, this preliminary analysis is also an interesting case study because the formulation of the problem poses some challenges in modeling and hybrid metaheuristic solver design that can be generalized to other problems.
[ { "created": "Fri, 3 Aug 2007 12:49:21 GMT", "version": "v1" } ]
2007-08-06
[ [ "Di Gaspero", "Luca", "" ], [ "Roli", "Andrea", "" ] ]
Haplotype Inference is a challenging problem in bioinformatics that consists in inferring the basic genetic constitution of diploid organisms from their genotype. This information allows researchers to perform association studies for the genetic variants involved in diseases and in individual responses to therapeutic agents. A notable approach to the problem is to encode it as a combinatorial problem (under certain hypotheses, such as the pure parsimony criterion) and to solve it using off-the-shelf combinatorial optimization techniques. The main methods applied to Haplotype Inference are either simple greedy heuristics or exact methods (Integer Linear Programming, Semidefinite Programming, SAT encoding) that, at present, are adequate only for moderate-size instances. We believe that metaheuristic and hybrid approaches could provide better scalability. Moreover, metaheuristics can easily be combined with problem-specific heuristics and integrated with tree-based search techniques, thus providing a promising framework for hybrid systems in which a good trade-off between effectiveness and efficiency can be reached. In this paper we illustrate a feasibility study of the approach and discuss some relevant design issues, such as modeling and the design of approximate solvers that combine constructive heuristics, local search-based improvement strategies, and learning mechanisms. Besides the relevance of the Haplotype Inference problem itself, this preliminary analysis is also an interesting case study because the formulation of the problem poses challenges in modeling and hybrid metaheuristic solver design that can be generalized to other problems.
1702.02047
Ziyuan Gao
Ziyuan Gao, Christoph Ries, Hans Ulrich Simon and Sandra Zilles
Preference-based Teaching
35 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new model of teaching named "preference-based teaching" and a corresponding complexity parameter---the preference-based teaching dimension (PBTD)---representing the worst-case number of examples needed to teach any concept in a given concept class. Although the PBTD coincides with the well-known recursive teaching dimension (RTD) on finite classes, it is radically different on infinite ones: the RTD becomes infinite already for trivial infinite classes (such as half-intervals) whereas the PBTD evaluates to reasonably small values for a wide collection of infinite classes including classes consisting of so-called closed sets w.r.t. a given closure operator, including various classes related to linear sets over $\mathbb{N}_0$ (whose RTD had been studied quite recently) and including the class of Euclidean half-spaces. On top of presenting these concrete results, we provide the reader with a theoretical framework (of a combinatorial flavor) which helps to derive bounds on the PBTD.
[ { "created": "Mon, 6 Feb 2017 18:40:32 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2017 11:37:57 GMT", "version": "v2" } ]
2017-02-09
[ [ "Gao", "Ziyuan", "" ], [ "Ries", "Christoph", "" ], [ "Simon", "Hans Ulrich", "" ], [ "Zilles", "Sandra", "" ] ]
We introduce a new model of teaching named "preference-based teaching" and a corresponding complexity parameter---the preference-based teaching dimension (PBTD)---representing the worst-case number of examples needed to teach any concept in a given concept class. Although the PBTD coincides with the well-known recursive teaching dimension (RTD) on finite classes, it is radically different on infinite ones: the RTD becomes infinite already for trivial infinite classes (such as half-intervals) whereas the PBTD evaluates to reasonably small values for a wide collection of infinite classes including classes consisting of so-called closed sets w.r.t. a given closure operator, including various classes related to linear sets over $\mathbb{N}_0$ (whose RTD had been studied quite recently) and including the class of Euclidean half-spaces. On top of presenting these concrete results, we provide the reader with a theoretical framework (of a combinatorial flavor) which helps to derive bounds on the PBTD.
2306.06088
Alexandre Binninger
Alexandre Binninger, Amir Hertz, Olga Sorkine-Hornung, Daniel Cohen-Or, Raja Giryes
SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
25 pages, 24 figures
null
null
null
cs.GR cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encoding, subsequently feeding them into a transformer decoder that converts them to shape embeddings suitable for editing 3D neural implicit shapes. SENS provides intuitive sketch-based generation and editing, and also succeeds in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract and imprecise sketches. Additionally, SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal. It also offers part-based modeling capabilities, enabling the combination of features from multiple sketches to create more complex and customized 3D shapes. We demonstrate the effectiveness of our model compared to the state-of-the-art using objective metric evaluation criteria and a user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase our method's intuitive sketch-based shape editing capabilities, and validate it through a usability study.
[ { "created": "Fri, 9 Jun 2023 17:50:53 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2024 13:35:34 GMT", "version": "v2" } ]
2024-02-22
[ [ "Binninger", "Alexandre", "" ], [ "Hertz", "Amir", "" ], [ "Sorkine-Hornung", "Olga", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Giryes", "Raja", "" ] ]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of an abstract nature. Our method allows users to quickly and easily sketch a shape and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, subsequently feeding them into a transformer decoder that converts them into shape embeddings suitable for editing 3D neural implicit shapes. SENS provides intuitive sketch-based generation and editing, and it succeeds in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract and imprecise sketches. Additionally, SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal. It also offers part-based modeling capabilities, enabling the combination of features from multiple sketches to create more complex and customized 3D shapes. We demonstrate the effectiveness of our model compared to the state of the art using objective metric evaluation criteria and a user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase our method's intuitive sketch-based shape editing capabilities, and validate it through a usability study.
2204.02720
Jan Maty\'a\v{s} K\v{r}i\v{s}\v{t}an
V\'aclav Bla\v{z}ej, Jan Maty\'a\v{s} K\v{r}i\v{s}\v{t}an, Tom\'a\v{s} Valla
Efficient attack sequences in m-eternal domination
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the m-eternal domination problem from the perspective of the attacker. For many graph classes, the minimum required number of guards to defend eternally is known. By definition, if the defender has less than the required number of guards, then there exists a sequence of attacks that ensures the attacker's victory. Little is known about such sequences of attacks, in particular, no bound on its length is known. We show that if the game is played on a tree $T$ on $n$ vertices and the defender has less than the necessary number of guards, then the attacker can win in at most $n$ turns. Furthermore, we present an efficient procedure that produces such an attacking strategy.
[ { "created": "Wed, 6 Apr 2022 10:50:08 GMT", "version": "v1" } ]
2022-04-07
[ [ "Blažej", "Václav", "" ], [ "Křišťan", "Jan Matyáš", "" ], [ "Valla", "Tomáš", "" ] ]
We study the m-eternal domination problem from the perspective of the attacker. For many graph classes, the minimum number of guards required to defend eternally is known. By definition, if the defender has fewer than the required number of guards, then there exists a sequence of attacks that ensures the attacker's victory. Little is known about such attack sequences; in particular, no bound on their length is known. We show that if the game is played on a tree $T$ on $n$ vertices and the defender has fewer than the necessary number of guards, then the attacker can win in at most $n$ turns. Furthermore, we present an efficient procedure that produces such an attacking strategy.
2006.08606
Akbar Siami Namin
Shuvalaxmi Dass and Akbar Siami Namin
Vulnerability Coverage as an Adequacy Testing Criterion
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mainstream software applications and tools are the configurable platforms with an enormous number of parameters along with their values. Certain settings and possible interactions between these parameters may harden (or soften) the security and robustness of these applications against some known vulnerabilities. However, the large number of vulnerabilities reported and associated with these tools make the exhaustive testing of these tools infeasible against these vulnerabilities infeasible. As an instance of general software testing problem, the research question to address is whether the system under test is robust and secure against these vulnerabilities. This paper introduces the idea of ``vulnerability coverage,'' a concept to adequately test a given application for a certain classes of vulnerabilities, as reported by the National Vulnerability Database (NVD). The deriving idea is to utilize the Common Vulnerability Scoring System (CVSS) as a means to measure the fitness of test inputs generated by evolutionary algorithms and then through pattern matching identify vulnerabilities that match the generated vulnerability vectors and then test the system under test for those identified vulnerabilities. We report the performance of two evolutionary algorithms (i.e., Genetic Algorithms and Particle Swarm Optimization) in generating the vulnerability pattern vectors.
[ { "created": "Sun, 14 Jun 2020 15:53:10 GMT", "version": "v1" } ]
2020-06-17
[ [ "Dass", "Shuvalaxmi", "" ], [ "Namin", "Akbar Siami", "" ] ]
Mainstream software applications and tools are configurable platforms with an enormous number of parameters and associated values. Certain settings, and possible interactions between these parameters, may harden (or soften) the security and robustness of these applications against known vulnerabilities. However, the large number of vulnerabilities reported for these tools makes exhaustively testing them against those vulnerabilities infeasible. As an instance of the general software-testing problem, the research question to address is whether the system under test is robust and secure against these vulnerabilities. This paper introduces the idea of ``vulnerability coverage,'' a concept for adequately testing a given application against certain classes of vulnerabilities, as reported by the National Vulnerability Database (NVD). The driving idea is to utilize the Common Vulnerability Scoring System (CVSS) as a means to measure the fitness of test inputs generated by evolutionary algorithms, to identify through pattern matching the vulnerabilities that match the generated vulnerability vectors, and then to test the system under test for those identified vulnerabilities. We report the performance of two evolutionary algorithms (i.e., Genetic Algorithms and Particle Swarm Optimization) in generating the vulnerability pattern vectors.
2102.10375
Chaochao Li
Chaochao Li, Mingliang Xu
Hybrid-driven Trajectory Prediction Based on Group Emotion
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a hybrid-driven trajectory prediction method based on group emotion. The data driven and model driven methods are combined to make a compromise between the controllability, generality, and efficiency of the method on the basis of simulating more real crowd movements. A hybrid driven method is proposed to improve the reliability of the calculation results based on real crowd data, and ensure the controllability of the model. It reduces the dependence of our model on real data and realizes the complementary advantages of these two kinds of methods. In addition, we divide crowd into groups based on human relations in society. So our method can calculate the movements in different scales. We predict individual movement trajectories according to the trajectories of group and fully consider the influence of the group movement state on the individual movements. Besides we also propose a group emotion calculation method and our method also considers the effect of group emotion on crowd movements.
[ { "created": "Sat, 20 Feb 2021 15:52:39 GMT", "version": "v1" } ]
2021-02-23
[ [ "Li", "Chaochao", "" ], [ "Xu", "Mingliang", "" ] ]
We present a hybrid-driven trajectory prediction method based on group emotion. Data-driven and model-driven methods are combined to strike a compromise between controllability, generality, and efficiency while simulating more realistic crowd movements. The hybrid-driven method improves the reliability of the computed results by building on real crowd data while preserving the controllability of the model; it reduces the dependence of our model on real data and realizes the complementary advantages of the two kinds of methods. In addition, we divide the crowd into groups based on human social relations, so our method can calculate movements at different scales. We predict individual movement trajectories according to the trajectory of the group, fully considering the influence of the group's movement state on individual movements. We also propose a group emotion calculation method, and our method likewise considers the effect of group emotion on crowd movements.
2106.03441
Shengqiang Zhang
Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei
Attention Temperature Matters in Abstractive Summarization Distillation
Accepted in ACL 2022 Main conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent progress of abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference and minimal performance loss. Pseudo-labeling based methods are popular in sequence-to-sequence model distillation. In this paper, we find simply manipulating attention temperatures in Transformers can make pseudo labels easier to learn for student models. Our experiments on three summarization datasets show our proposed method consistently improves over vanilla pseudo-labeling based methods. We also find that both the pseudo labels and summaries produced by our students are shorter and more abstractive. Our code is available at \url{https://github.com/Shengqiang-Zhang/plate}.
[ { "created": "Mon, 7 Jun 2021 09:18:21 GMT", "version": "v1" }, { "created": "Tue, 8 Jun 2021 03:09:45 GMT", "version": "v2" }, { "created": "Tue, 1 Mar 2022 14:27:55 GMT", "version": "v3" } ]
2022-03-02
[ [ "Zhang", "Shengqiang", "" ], [ "Zhang", "Xingxing", "" ], [ "Bao", "Hangbo", "" ], [ "Wei", "Furu", "" ] ]
Recent progress in abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference with minimal performance loss. Pseudo-labeling-based methods are popular in sequence-to-sequence model distillation. In this paper, we find that simply manipulating attention temperatures in Transformers can make pseudo labels easier for student models to learn. Our experiments on three summarization datasets show that our proposed method consistently improves over vanilla pseudo-labeling-based methods. We also find that both the pseudo labels and the summaries produced by our students are shorter and more abstractive. Our code is available at \url{https://github.com/Shengqiang-Zhang/plate}.
1904.07348
Prashanta Saha
Prashanta Saha and Upulee Kanewala
Fault Detection Effectiveness of Metamorphic Relations Developed for Testing Supervised Classifiers
8 pages, AITesting 2019
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In machine learning, supervised classifiers are used to obtain predictions for unlabeled data by inferring prediction functions using labeled data. Supervised classifiers are widely applied in domains such as computational biology, computational physics and healthcare to make critical decisions. However, it is often hard to test supervised classifiers since the expected answers are unknown. This is commonly known as the \emph{oracle problem} and metamorphic testing (MT) has been used to test such programs. In MT, metamorphic relations (MRs) are developed from intrinsic characteristics of the software under test (SUT). These MRs are used to generate test data and to verify the correctness of the test results without the presence of a test oracle. Effectiveness of MT heavily depends on the MRs used for testing. In this paper we have conducted an extensive empirical study to evaluate the fault detection effectiveness of MRs that have been used in multiple previous studies to test supervised classifiers. Our study uses a total of 709 reachable mutants generated by multiple mutation engines and uses data sets with varying characteristics to test the SUT. Our results reveal that only 14.8\% of these mutants are detected using the MRs and that the fault detection effectiveness of these MRs do not scale with the increased number of mutants when compared to what was reported in previous studies.
[ { "created": "Mon, 15 Apr 2019 22:23:32 GMT", "version": "v1" } ]
2019-04-17
[ [ "Saha", "Prashanta", "" ], [ "Kanewala", "Upulee", "" ] ]
In machine learning, supervised classifiers are used to obtain predictions for unlabeled data by inferring prediction functions from labeled data. Supervised classifiers are widely applied in domains such as computational biology, computational physics, and healthcare to make critical decisions. However, it is often hard to test supervised classifiers since the expected answers are unknown. This is commonly known as the \emph{oracle problem}, and metamorphic testing (MT) has been used to test such programs. In MT, metamorphic relations (MRs) are developed from intrinsic characteristics of the software under test (SUT). These MRs are used to generate test data and to verify the correctness of the test results without the presence of a test oracle. The effectiveness of MT heavily depends on the MRs used for testing. In this paper we conduct an extensive empirical study to evaluate the fault detection effectiveness of MRs that have been used in multiple previous studies to test supervised classifiers. Our study uses a total of 709 reachable mutants generated by multiple mutation engines and uses data sets with varying characteristics to test the SUT. Our results reveal that only 14.8\% of these mutants are detected using the MRs and that the fault detection effectiveness of these MRs does not scale with the increased number of mutants when compared to what was reported in previous studies.
2311.12886
Zhenghao Zhang
Zuozhuo Dai and Zhenghao Zhang and Yao Yao and Bingxue Qiu and Siyu Zhu and Long Qin and Weizhi Wang
AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Image animation is a key task in computer vision which aims to generate dynamic visual content from a static image. Recent image animation methods employ neural-based rendering techniques to generate realistic animations. Despite these advancements, achieving fine-grained and controllable image animation guided by text remains challenging, particularly for open-domain images captured in diverse real environments. In this paper, we introduce an open-domain image animation method that leverages the motion prior of a video diffusion model. Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed. This results in enhanced alignment between the animated visual elements and the prompting text, thereby facilitating a fine-grained and interactive animation generation process for intricate motion sequences. We validate the effectiveness of our method through rigorous experiments on an open-domain dataset, with the results showcasing its superior performance. The project page can be found at https://animationai.github.io/AnimateAnything.
[ { "created": "Tue, 21 Nov 2023 03:47:54 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2023 05:43:53 GMT", "version": "v2" } ]
2023-12-06
[ [ "Dai", "Zuozhuo", "" ], [ "Zhang", "Zhenghao", "" ], [ "Yao", "Yao", "" ], [ "Qiu", "Bingxue", "" ], [ "Zhu", "Siyu", "" ], [ "Qin", "Long", "" ], [ "Wang", "Weizhi", "" ] ]
Image animation is a key task in computer vision which aims to generate dynamic visual content from a static image. Recent image animation methods employ neural-based rendering techniques to generate realistic animations. Despite these advancements, achieving fine-grained and controllable image animation guided by text remains challenging, particularly for open-domain images captured in diverse real environments. In this paper, we introduce an open-domain image animation method that leverages the motion prior of a video diffusion model. Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed. This results in enhanced alignment between the animated visual elements and the prompting text, thereby facilitating a fine-grained and interactive animation generation process for intricate motion sequences. We validate the effectiveness of our method through rigorous experiments on an open-domain dataset, with the results showcasing its superior performance. The project page can be found at https://animationai.github.io/AnimateAnything.
2304.10770
Shanchuan Wan
Shanchuan Wan, Yujin Tang, Yingtao Tian, Tomoyuki Kaneko
DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards
Accepted as a conference paper to the 32nd International Joint Conference on Artificial Intelligence (IJCAI-23)
null
null
null
cs.LG cs.AI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards. Recent studies have shown the effectiveness of encouraging exploration with intrinsic rewards estimated from novelties in observations. However, there is a gap between the novelty of an observation and an exploration, as both the stochasticity in the environment and the agent's behavior may affect the observation. To evaluate exploratory behaviors accurately, we propose DEIR, a novel method in which we theoretically derive an intrinsic reward with a conditional mutual information term that principally scales with the novelty contributed by agent explorations, and then implement the reward with a discriminative forward model. Extensive experiments on both standard and advanced exploration tasks in MiniGrid show that DEIR quickly learns a better policy than the baselines. Our evaluations on ProcGen demonstrate both the generalization capability and the general applicability of our intrinsic reward. Our source code is available at https://github.com/swan-utokyo/deir.
[ { "created": "Fri, 21 Apr 2023 06:39:38 GMT", "version": "v1" }, { "created": "Thu, 18 May 2023 15:42:27 GMT", "version": "v2" } ]
2023-05-19
[ [ "Wan", "Shanchuan", "" ], [ "Tang", "Yujin", "" ], [ "Tian", "Yingtao", "" ], [ "Kaneko", "Tomoyuki", "" ] ]
Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards. Recent studies have shown the effectiveness of encouraging exploration with intrinsic rewards estimated from novelties in observations. However, there is a gap between the novelty of an observation and an exploration, as both the stochasticity in the environment and the agent's behavior may affect the observation. To evaluate exploratory behaviors accurately, we propose DEIR, a novel method in which we theoretically derive an intrinsic reward with a conditional mutual information term that principally scales with the novelty contributed by agent explorations, and then implement the reward with a discriminative forward model. Extensive experiments on both standard and advanced exploration tasks in MiniGrid show that DEIR quickly learns a better policy than the baselines. Our evaluations on ProcGen demonstrate both the generalization capability and the general applicability of our intrinsic reward. Our source code is available at https://github.com/swan-utokyo/deir.
2305.04079
Jakob Svennevik Notland
Jakob Svennevik Notland and Mariusz Nowostawski and Jingyue Li
An Empirical Study on Governance in Bitcoin's Consensus Evolution
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Blockchain systems run consensus rules as code to agree on the state of the distributed ledger and secure the network. Changing these rules can be risky and challenging. In addition, it can often be controversial and take much effort to make all the necessary participants agree to adopt a change. Arguably, Bitcoin has seen centralisation tendencies in pools and in development. However, how these tendencies influence blockchain governance has received minimal community and academic attention. Our study analyses the governmental structures in a blockchain by looking into the history of Bitcoin. We investigate the process of changing consensus rules through a grounded theory analysis comprising quantitative and qualitative data from 34 consensus forks in Bitcoin and Bitcoin Cash. The results reveal the decentralised behaviour in Bitcoin and blockchain. Our results contrast with related work, which emphasises centralisation among miners and developers. Furthermore, our results show how the consensus-driven deployment techniques and governance of consensus rules are intertwined.
[ { "created": "Sat, 6 May 2023 15:57:13 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2024 16:20:18 GMT", "version": "v2" } ]
2024-02-15
[ [ "Notland", "Jakob Svennevik", "" ], [ "Nowostawski", "Mariusz", "" ], [ "Li", "Jingyue", "" ] ]
Blockchain systems run consensus rules as code to agree on the state of the distributed ledger and secure the network. Changing these rules can be risky and challenging. In addition, it can often be controversial and take much effort to make all the necessary participants agree to adopt a change. Arguably, Bitcoin has seen centralisation tendencies in pools and in development. However, how these tendencies influence blockchain governance has received minimal community and academic attention. Our study analyses the governmental structures in a blockchain by looking into the history of Bitcoin. We investigate the process of changing consensus rules through a grounded theory analysis comprising quantitative and qualitative data from 34 consensus forks in Bitcoin and Bitcoin Cash. The results reveal the decentralised behaviour in Bitcoin and blockchain. Our results contrast with related work, which emphasises centralisation among miners and developers. Furthermore, our results show how the consensus-driven deployment techniques and governance of consensus rules are intertwined.
1805.07866
Yingyezhe Jin
Yingyezhe Jin, Wenrui Zhang and Peng Li
Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks
11 pages, 5 figures. Published at NeurIPS (Neural Information Processing System) 2018. Code available: https://github.com/jinyyy666/mm-bp-snn
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs) are positioned to enable spatio-temporal information processing and ultra-low power event-driven neuromorphic hardware. However, SNNs are yet to reach the same performance as conventional deep artificial neural networks (ANNs), a long-standing challenge due to complex dynamics and non-differentiable spike events encountered in training. The existing SNN error backpropagation (BP) methods are limited in terms of scalability, lack of proper handling of spiking discontinuities, and/or mismatch between the rate-coded loss function and computed gradient. We present a hybrid macro/micro level backpropagation (HM2-BP) algorithm for training multi-layer SNNs. The temporal effects are precisely captured by the proposed spike-train level post-synaptic potential (S-PSP) at the microscopic level. The rate-coded errors are defined at the macroscopic level, computed and back-propagated across both macroscopic and microscopic levels. Different from existing BP methods, HM2-BP directly computes the gradient of the rate-coded loss function w.r.t. tunable parameters. We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26]. HM2-BP achieves an accuracy level of 99.49% and 98.88% for MNIST and N-MNIST, respectively, outperforming the best reported performances obtained from the existing SNN BP algorithms. Furthermore, HM2-BP produces the highest accuracies based on SNNs for the EMNIST [3] dataset, and leads to high recognition accuracy for the 16-speaker spoken English letters of the TI46 Corpus [16], a challenging spatio-temporal speech recognition benchmark for which no prior success based on SNNs was reported. It also achieves competitive performances surpassing those of conventional deep learning models when dealing with asynchronous spiking streams.
[ { "created": "Mon, 21 May 2018 02:04:30 GMT", "version": "v1" }, { "created": "Mon, 17 Sep 2018 05:32:05 GMT", "version": "v2" }, { "created": "Mon, 22 Oct 2018 06:34:07 GMT", "version": "v3" }, { "created": "Fri, 26 Oct 2018 03:47:02 GMT", "version": "v4" }, { "created": "Wed, 12 Dec 2018 04:44:45 GMT", "version": "v5" }, { "created": "Sat, 19 Jan 2019 16:43:59 GMT", "version": "v6" } ]
2019-01-23
[ [ "Jin", "Yingyezhe", "" ], [ "Zhang", "Wenrui", "" ], [ "Li", "Peng", "" ] ]
Spiking neural networks (SNNs) are positioned to enable spatio-temporal information processing and ultra-low power event-driven neuromorphic hardware. However, SNNs are yet to reach the same performance as conventional deep artificial neural networks (ANNs), a long-standing challenge due to complex dynamics and non-differentiable spike events encountered in training. The existing SNN error backpropagation (BP) methods are limited in terms of scalability, lack of proper handling of spiking discontinuities, and/or mismatch between the rate-coded loss function and computed gradient. We present a hybrid macro/micro level backpropagation (HM2-BP) algorithm for training multi-layer SNNs. The temporal effects are precisely captured by the proposed spike-train level post-synaptic potential (S-PSP) at the microscopic level. The rate-coded errors are defined at the macroscopic level, computed and back-propagated across both macroscopic and microscopic levels. Different from existing BP methods, HM2-BP directly computes the gradient of the rate-coded loss function w.r.t. tunable parameters. We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26]. HM2-BP achieves an accuracy level of 99.49% and 98.88% for MNIST and N-MNIST, respectively, outperforming the best reported performances obtained from the existing SNN BP algorithms. Furthermore, HM2-BP produces the highest accuracies based on SNNs for the EMNIST [3] dataset, and leads to high recognition accuracy for the 16-speaker spoken English letters of the TI46 Corpus [16], a challenging spatio-temporal speech recognition benchmark for which no prior success based on SNNs was reported. It also achieves competitive performances surpassing those of conventional deep learning models when dealing with asynchronous spiking streams.
2104.01778
Yuan Gong
Yuan Gong, Yu-An Chung, James Glass
AST: Audio Spectrogram Transformer
Accepted at Interspeech 2021. Code at https://github.com/YuanGongND/ast
null
null
null
cs.SD cs.AI
http://creativecommons.org/licenses/by/4.0/
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
[ { "created": "Mon, 5 Apr 2021 05:26:29 GMT", "version": "v1" }, { "created": "Tue, 6 Apr 2021 20:29:37 GMT", "version": "v2" }, { "created": "Thu, 8 Jul 2021 20:16:28 GMT", "version": "v3" } ]
2021-07-12
[ [ "Gong", "Yuan", "" ], [ "Chung", "Yu-An", "" ], [ "Glass", "James", "" ] ]
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
1807.05490
Zehong Cao Dr.
Shiming Chen, Yisong Wang, Chin-Teng Lin, Weiping Ding, Zehong Cao
Semi-supervised Feature Learning For Improving Writer Identification
This manuscript is submitting to Information Science
Information Sciences (Volume 482, May 2019, Pages 156-170)
10.1016/j.ins.2019.01.024
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data augmentation is usually used by supervised learning approaches for offline writer identification, but such approaches require extra training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline was proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we proposed a weighted label smoothing regularization (WLSR) method for data augmentation, which assigned the weighted uniform label distribution to the extra unlabeled data. The WLSR method could regularize the convolutional neural network (CNN) baseline to allow more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semi-supervised feature learning approach could significantly improve the baseline measurement and perform competitively with existing writer identification approaches. Our findings provide new insights into offline writer identification.
[ { "created": "Sun, 15 Jul 2018 05:18:20 GMT", "version": "v1" }, { "created": "Wed, 8 Aug 2018 02:08:15 GMT", "version": "v2" }, { "created": "Sat, 6 Oct 2018 15:06:38 GMT", "version": "v3" } ]
2019-05-28
[ [ "Chen", "Shiming", "" ], [ "Wang", "Yisong", "" ], [ "Lin", "Chin-Teng", "" ], [ "Ding", "Weiping", "" ], [ "Cao", "Zehong", "" ] ]
Data augmentation is usually used by supervised learning approaches for offline writer identification, but such approaches require extra training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline was proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we proposed a weighted label smoothing regularization (WLSR) method for data augmentation, which assigned the weighted uniform label distribution to the extra unlabeled data. The WLSR method could regularize the convolutional neural network (CNN) baseline to allow more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semi-supervised feature learning approach could significantly improve the baseline measurement and perform competitively with existing writer identification approaches. Our findings provide new insights into offline writer identification.
1211.5873
EPTCS
Franck Cassez (NICTA), Ralf Huuck (NICTA and UNSW), Gerwin Klein (NICTA and UNSW), Bastian Schlich (ABB)
Proceedings Seventh Conference on Systems Software Verification
null
EPTCS 102, 2012
10.4204/EPTCS.102
null
cs.SE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume contains the papers accepted at the 7th Systems Software Verification Conference (SSV 2012), held in Sydney, November 28-30, 2012. The aim of the SSV workshop and conference series is to bring together researchers and developers from both academia and industry who are facing real software and real problems, with the goal of finding real, applicable solutions.
[ { "created": "Mon, 26 Nov 2012 07:33:19 GMT", "version": "v1" } ]
2012-11-27
[ [ "Cassez", "Franck", "", "NICTA" ], [ "Huuck", "Ralf", "", "NICTA and UNSW" ], [ "Klein", "Gerwin", "", "NICTA and UNSW" ], [ "Schlich", "Bastian", "", "ABB" ] ]
This volume contains the papers accepted at the 7th Systems Software Verification Conference (SSV 2012), held in Sydney, November 28-30, 2012. The aim of the SSV workshop and conference series is to bring together researchers and developers from both academia and industry who are facing real software and real problems, with the goal of finding real, applicable solutions.
2209.04747
Radu Tudor Ionescu
Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, Mubarak Shah
Diffusion Models in Vision: A Survey
Accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence. 25 pages, 3 figures
null
10.1109/TPAMI.2023.3261988
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked at recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens, i.e. low speeds due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
[ { "created": "Sat, 10 Sep 2022 22:00:30 GMT", "version": "v1" }, { "created": "Thu, 6 Oct 2022 08:26:17 GMT", "version": "v2" }, { "created": "Tue, 20 Dec 2022 09:49:30 GMT", "version": "v3" }, { "created": "Thu, 23 Mar 2023 11:42:58 GMT", "version": "v4" }, { "created": "Sat, 1 Apr 2023 14:27:33 GMT", "version": "v5" } ]
2023-04-04
[ [ "Croitoru", "Florinel-Alin", "" ], [ "Hondru", "Vlad", "" ], [ "Ionescu", "Radu Tudor", "" ], [ "Shah", "Mubarak", "" ] ]
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked at recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens, i.e. low speeds due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
2405.13319
Denys Katerenchuk
Denys Katerenchuk and Rivka Levitan
"You should probably read this": Hedge Detection in Text
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans express ideas, beliefs, and statements through language. The manner of expression can carry information indicating the author's degree of confidence in their statement. Understanding the certainty level of a claim is crucial in areas such as medicine, finance, engineering, and many others where errors can lead to disastrous results. In this work, we apply a joint model that leverages words and part-of-speech tags to improve hedge detection in text and achieve a new top score on the CoNLL-2010 Wikipedia corpus.
[ { "created": "Wed, 22 May 2024 03:25:35 GMT", "version": "v1" } ]
2024-05-24
[ [ "Katerenchuk", "Denys", "" ], [ "Levitan", "Rivka", "" ] ]
Humans express ideas, beliefs, and statements through language. The manner of expression can carry information indicating the author's degree of confidence in their statement. Understanding the certainty level of a claim is crucial in areas such as medicine, finance, engineering, and many others where errors can lead to disastrous results. In this work, we apply a joint model that leverages words and part-of-speech tags to improve hedge detection in text and achieve a new top score on the CoNLL-2010 Wikipedia corpus.
2010.15110
Gintare Karolina Dziugaite
Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
19 pages, 19 figures, In Advances in Neural Information Processing Systems 34 (NeurIPS 2020)
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In suitably initialized wide networks, small learning rates transform deep neural networks (DNNs) into neural tangent kernel (NTK) machines, whose training dynamics is well-approximated by a linear weight expansion of the network at initialization. Standard training, however, diverges from its linearization in ways that are poorly understood. We study the relationship between the training dynamics of nonlinear deep networks, the geometry of the loss landscape, and the time evolution of a data-dependent NTK. We do so through a large-scale phenomenological analysis of training, synthesizing diverse measures characterizing loss landscape geometry and NTK dynamics. In multiple neural architectures and datasets, we find these diverse measures evolve in a highly correlated manner, revealing a universal picture of the deep learning process. In this picture, deep network training exhibits a highly chaotic rapid initial transient that within 2 to 3 epochs determines the final linearly connected basin of low loss containing the end point of training. During this chaotic transient, the NTK changes rapidly, learning useful features from the training data that enables it to outperform the standard initial NTK by a factor of 3 in less than 3 to 4 epochs. After this rapid chaotic transient, the NTK changes at constant velocity, and its performance matches that of full network training in 15% to 45% of training time. Overall, our analysis reveals a striking correlation between a diverse set of metrics over training time, governed by a rapid chaotic to stable transition in the first few epochs, that together poses challenges and opportunities for the development of more accurate theories of deep learning.
[ { "created": "Wed, 28 Oct 2020 17:53:01 GMT", "version": "v1" } ]
2020-10-29
[ [ "Fort", "Stanislav", "" ], [ "Dziugaite", "Gintare Karolina", "" ], [ "Paul", "Mansheej", "" ], [ "Kharaghani", "Sepideh", "" ], [ "Roy", "Daniel M.", "" ], [ "Ganguli", "Surya", "" ] ]
In suitably initialized wide networks, small learning rates transform deep neural networks (DNNs) into neural tangent kernel (NTK) machines, whose training dynamics is well-approximated by a linear weight expansion of the network at initialization. Standard training, however, diverges from its linearization in ways that are poorly understood. We study the relationship between the training dynamics of nonlinear deep networks, the geometry of the loss landscape, and the time evolution of a data-dependent NTK. We do so through a large-scale phenomenological analysis of training, synthesizing diverse measures characterizing loss landscape geometry and NTK dynamics. In multiple neural architectures and datasets, we find these diverse measures evolve in a highly correlated manner, revealing a universal picture of the deep learning process. In this picture, deep network training exhibits a highly chaotic rapid initial transient that within 2 to 3 epochs determines the final linearly connected basin of low loss containing the end point of training. During this chaotic transient, the NTK changes rapidly, learning useful features from the training data that enables it to outperform the standard initial NTK by a factor of 3 in less than 3 to 4 epochs. After this rapid chaotic transient, the NTK changes at constant velocity, and its performance matches that of full network training in 15% to 45% of training time. Overall, our analysis reveals a striking correlation between a diverse set of metrics over training time, governed by a rapid chaotic to stable transition in the first few epochs, that together poses challenges and opportunities for the development of more accurate theories of deep learning.
2102.12575
Yuanpeng He
Yuanpeng He
Ordinal relative belief entropy
14 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Specially customised entropies are widely applied to measure the degree of uncertainty existing in a frame of discernment. However, all of these entropies regard the frame as a whole that has already been determined, which does not conform to actual situations. In real life, everything comes in an order, so how to measure the uncertainty of the dynamic process of determining the sequence of propositions contained in a frame of discernment is still an open issue, and no related research has been conducted. Therefore, a novel ordinal entropy that measures the uncertainty of a frame of discernment while considering the order of confirmation of propositions is proposed in this paper. Compared with traditional entropies, it manifests the effect on the degree of uncertainty brought by the order of propositions existing in a frame of discernment. Besides, some numerical examples are provided to verify the correctness and validity of the proposed entropy.
[ { "created": "Sun, 21 Feb 2021 04:17:04 GMT", "version": "v1" } ]
2021-02-26
[ [ "He", "Yuanpeng", "" ] ]
Specially customised entropies are widely applied to measure the degree of uncertainty existing in a frame of discernment. However, all of these entropies regard the frame as a whole that has already been determined, which does not conform to actual situations. In real life, everything comes in an order, so how to measure the uncertainty of the dynamic process of determining the sequence of propositions contained in a frame of discernment is still an open issue, and no related research has been conducted. Therefore, a novel ordinal entropy that measures the uncertainty of a frame of discernment while considering the order of confirmation of propositions is proposed in this paper. Compared with traditional entropies, it manifests the effect on the degree of uncertainty brought by the order of propositions existing in a frame of discernment. Besides, some numerical examples are provided to verify the correctness and validity of the proposed entropy.
2001.01697
Ashiqur KhudaBukhsh Ashiqur Rahman KhudaBukhsh
Rupak Sarkar, Hirak Sarkar, Sayantan Mahinder, Ashiqur R. KhudaBukhsh
Social Media Attributions in the Context of Water Crisis
null
null
null
null
cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attribution of natural disasters/collective misfortune is a widely-studied political science problem. However, such studies are typically survey-centric or rely on a handful of experts to weigh in on the matter. In this paper, we explore how we can use social media data and an AI-driven approach to complement traditional surveys and automatically extract attribution factors. We focus on the most recent Chennai water crisis, which started off as a regional issue but rapidly escalated into a discussion topic of global importance following alarming water-crisis statistics. Specifically, we present a novel prediction task of attribution tie detection, which identifies the factors held responsible for the crisis (e.g., poor city planning, exploding population, etc.). On a challenging data set constructed from YouTube comments (72,098 comments posted by 43,859 users on 623 videos relevant to the crisis), we present a neural classifier to extract attribution ties that achieved reasonable performance (Accuracy: 81.34\% on attribution detection and 71.19\% on attribution resolution).
[ { "created": "Mon, 6 Jan 2020 18:20:09 GMT", "version": "v1" } ]
2020-01-07
[ [ "Sarkar", "Rupak", "" ], [ "Sarkar", "Hirak", "" ], [ "Mahinder", "Sayantan", "" ], [ "KhudaBukhsh", "Ashiqur R.", "" ] ]
Attribution of natural disasters/collective misfortune is a widely-studied political science problem. However, such studies are typically survey-centric or rely on a handful of experts to weigh in on the matter. In this paper, we explore how we can use social media data and an AI-driven approach to complement traditional surveys and automatically extract attribution factors. We focus on the most recent Chennai water crisis, which started off as a regional issue but rapidly escalated into a discussion topic with global importance following alarming water-crisis statistics. Specifically, we present a novel prediction task of attribution tie detection, which identifies the factors held responsible for the crisis (e.g., poor city planning, exploding population, etc.). On a challenging data set constructed from YouTube comments (72,098 comments posted by 43,859 users on 623 videos relevant to the crisis), we present a neural classifier to extract attribution ties that achieves reasonable performance (Accuracy: 81.34\% on attribution detection and 71.19\% on attribution resolution).
1609.08265
Sudhir R. Ghorpade
Sudhir R. Ghorpade and Prasant Singh
Minimum Distance and the Minimum Weight Codewords of Schubert Codes
26 pages; Slightly revised version; to appear in Finite Fields Appl
Finite Fields Appl. 49 (2018), 1-28
10.1016/j.ffa.2017.08.014
null
cs.IT math.AG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider linear codes associated to Schubert varieties in Grassmannians. A formula for the minimum distance of these codes was conjectured in 2000 and after having been established in various special cases, it was proved in 2008 by Xiang. We give an alternative proof of this formula. Further, we propose a characterization of the minimum weight codewords of Schubert codes by introducing the notion of Schubert decomposable elements of certain exterior powers. It is shown that codewords corresponding to Schubert decomposable elements are of minimum weight and also that the converse is true in many cases. A lower bound, and in some cases, an exact formula, for the number of minimum weight codewords of Schubert codes is also given. From a geometric point of view, these results correspond to determining the maximum number of $\mathbb{F}_q$-rational points that can lie on a hyperplane section of a Schubert variety in a Grassmannian with its nondegenerate embedding in a projective subspace of the Pl\"ucker projective space, and also the number of hyperplanes for which the maximum is attained.
[ { "created": "Tue, 27 Sep 2016 05:46:33 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2017 10:37:51 GMT", "version": "v2" } ]
2018-01-30
[ [ "Ghorpade", "Sudhir R.", "" ], [ "Singh", "Prasant", "" ] ]
We consider linear codes associated to Schubert varieties in Grassmannians. A formula for the minimum distance of these codes was conjectured in 2000 and after having been established in various special cases, it was proved in 2008 by Xiang. We give an alternative proof of this formula. Further, we propose a characterization of the minimum weight codewords of Schubert codes by introducing the notion of Schubert decomposable elements of certain exterior powers. It is shown that codewords corresponding to Schubert decomposable elements are of minimum weight and also that the converse is true in many cases. A lower bound, and in some cases, an exact formula, for the number of minimum weight codewords of Schubert codes is also given. From a geometric point of view, these results correspond to determining the maximum number of $\mathbb{F}_q$-rational points that can lie on a hyperplane section of a Schubert variety in a Grassmannian with its nondegenerate embedding in a projective subspace of the Pl\"ucker projective space, and also the number of hyperplanes for which the maximum is attained.
2205.11558
Sreejan Kumar
Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths
Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines
In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), winner of Outstanding Paper Award
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
[ { "created": "Mon, 23 May 2022 18:17:58 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2022 12:32:49 GMT", "version": "v2" }, { "created": "Sun, 5 Feb 2023 18:44:46 GMT", "version": "v3" } ]
2023-02-07
[ [ "Kumar", "Sreejan", "" ], [ "Correa", "Carlos G.", "" ], [ "Dasgupta", "Ishita", "" ], [ "Marjieh", "Raja", "" ], [ "Hu", "Michael Y.", "" ], [ "Hawkins", "Robert D.", "" ], [ "Daw", "Nathaniel D.", "" ], [ "Cohen", "Jonathan D.", "" ], [ "Narasimhan", "Karthik", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
1207.4147
Nathanael Hyafil
Nathanael Hyafil, Craig Boutilier
Regret Minimizing Equilibria and Mechanisms for Games with Strict Type Uncertainty
Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004)
null
null
UAI-P-2004-PG-268-277
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanism design has found considerable application to the construction of agent-interaction protocols. In the standard setting, the type (e.g., utility function) of an agent is not known by other agents, nor is it known by the mechanism designer. When this uncertainty is quantified probabilistically, a mechanism induces a game of incomplete information among the agents. However, in many settings, uncertainty over utility functions cannot easily be quantified. We consider the problem of incomplete information games in which type uncertainty is strict or unquantified. We propose the use of minimax regret as a decision criterion in such games, a robust approach for dealing with type uncertainty. We define minimax-regret equilibria and prove that these exist in mixed strategies for finite games. We also consider the problem of mechanism design in this framework by adopting minimax regret as an optimization criterion for the designer itself, and study automated optimization of such mechanisms.
[ { "created": "Wed, 11 Jul 2012 14:55:55 GMT", "version": "v1" } ]
2012-07-19
[ [ "Hyafil", "Nathanael", "" ], [ "Boutilier", "Craig", "" ] ]
Mechanism design has found considerable application to the construction of agent-interaction protocols. In the standard setting, the type (e.g., utility function) of an agent is not known by other agents, nor is it known by the mechanism designer. When this uncertainty is quantified probabilistically, a mechanism induces a game of incomplete information among the agents. However, in many settings, uncertainty over utility functions cannot easily be quantified. We consider the problem of incomplete information games in which type uncertainty is strict or unquantified. We propose the use of minimax regret as a decision criterion in such games, a robust approach for dealing with type uncertainty. We define minimax-regret equilibria and prove that these exist in mixed strategies for finite games. We also consider the problem of mechanism design in this framework by adopting minimax regret as an optimization criterion for the designer itself, and study automated optimization of such mechanisms.
2105.10216
Simon Walk
Matthias W\"olbitsch, Thomas Hasler, Patrick Kasper, Denis Helic, Simon Walk
RFID-based Article-to-Fixture Predictions in Real-World Fashion Stores
Extended version of conference submission to IEEE RFID
null
null
null
cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, Radio Frequency Identification (RFID) technology has been applied to improve numerous processes, such as inventory management in retail stores. However, automatic localization of RFID-tagged goods in stores is still a challenging problem. To address this issue, we equip fixtures (e.g., shelves) with reference tags and use data we collect during RFID-based stocktakes to map articles to fixtures. Knowing the location of goods enables the implementation of several practical applications, such as automated Money Mapping (i.e., a heat map of sales across fixtures). Specifically, we conduct controlled lab experiments and a case study in two fashion retail stores to evaluate our article-to-fixture prediction approaches. The approaches are based on calculating distances between read event time series using DTW, and on clustering of read events using DBSCAN. We find that read events collected during RFID-based stocktakes can be used to assign articles to fixtures with an accuracy of more than 90%. Additionally, we conduct a pilot to investigate the challenges related to the integration of such a localization system in the day-to-day business of retail stores. Hence, in this paper we present an exploratory venture into novel and practical RFID-based applications in fashion retail stores, beyond the scope of stock management.
[ { "created": "Fri, 21 May 2021 09:12:36 GMT", "version": "v1" } ]
2021-05-24
[ [ "Wölbitsch", "Matthias", "" ], [ "Hasler", "Thomas", "" ], [ "Kasper", "Patrick", "" ], [ "Helic", "Denis", "" ], [ "Walk", "Simon", "" ] ]
In recent years, Radio Frequency Identification (RFID) technology has been applied to improve numerous processes, such as inventory management in retail stores. However, automatic localization of RFID-tagged goods in stores is still a challenging problem. To address this issue, we equip fixtures (e.g., shelves) with reference tags and use data we collect during RFID-based stocktakes to map articles to fixtures. Knowing the location of goods enables the implementation of several practical applications, such as automated Money Mapping (i.e., a heat map of sales across fixtures). Specifically, we conduct controlled lab experiments and a case study in two fashion retail stores to evaluate our article-to-fixture prediction approaches. The approaches are based on calculating distances between read event time series using DTW, and on clustering of read events using DBSCAN. We find that read events collected during RFID-based stocktakes can be used to assign articles to fixtures with an accuracy of more than 90%. Additionally, we conduct a pilot to investigate the challenges related to the integration of such a localization system in the day-to-day business of retail stores. Hence, in this paper we present an exploratory venture into novel and practical RFID-based applications in fashion retail stores, beyond the scope of stock management.
2110.05679
Xuechen Li
Xuechen Li, Florian Tram\`er, Percy Liang, Tatsunori Hashimoto
Large Language Models Can Be Strong Differentially Private Learners
31 pages; update ethics statement to clarify benefits and potential long-term harms
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure. With the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines -- by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead. Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained language models doesn't tend to suffer from dimension-dependent performance degradation. Code to reproduce results can be found at https://github.com/lxuechen/private-transformers.
[ { "created": "Tue, 12 Oct 2021 01:45:27 GMT", "version": "v1" }, { "created": "Sun, 10 Jul 2022 20:48:32 GMT", "version": "v2" }, { "created": "Tue, 12 Jul 2022 01:30:31 GMT", "version": "v3" }, { "created": "Mon, 18 Jul 2022 01:42:10 GMT", "version": "v4" }, { "created": "Wed, 12 Oct 2022 05:25:28 GMT", "version": "v5" }, { "created": "Thu, 10 Nov 2022 18:42:34 GMT", "version": "v6" } ]
2022-11-11
[ [ "Li", "Xuechen", "" ], [ "Tramèr", "Florian", "" ], [ "Liang", "Percy", "" ], [ "Hashimoto", "Tatsunori", "" ] ]
Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure. With the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines -- by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead. Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained language models doesn't tend to suffer from dimension-dependent performance degradation. Code to reproduce results can be found at https://github.com/lxuechen/private-transformers.
2303.14821
Michael Walter
Mich\`ele Vergne and Michael Walter
Moment cone membership for quivers in strongly polynomial time
7 pages
null
null
null
cs.CC math.CO math.RT math.SG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note we observe that membership in moment cones of spaces of quiver representations can be decided in strongly polynomial time, for any acyclic quiver. This generalizes a recent result by Chindris-Collins-Kline for bipartite quivers. Their approach was to construct "multiplicity polytopes" with a geometric realization similar to the Knutson-Tao polytopes for tensor product multiplicities. Here we show that a less geometric but straightforward variant of their construction leads to such a multiplicity polytope for any acyclic quiver. Tardos' strongly polynomial time algorithm for combinatorial linear programming along with the saturation property then implies that moment cone membership can be decided in strongly polynomial time. The analogous question for semi-invariants remains open.
[ { "created": "Sun, 26 Mar 2023 21:11:05 GMT", "version": "v1" } ]
2023-03-28
[ [ "Vergne", "Michèle", "" ], [ "Walter", "Michael", "" ] ]
In this note we observe that membership in moment cones of spaces of quiver representations can be decided in strongly polynomial time, for any acyclic quiver. This generalizes a recent result by Chindris-Collins-Kline for bipartite quivers. Their approach was to construct "multiplicity polytopes" with a geometric realization similar to the Knutson-Tao polytopes for tensor product multiplicities. Here we show that a less geometric but straightforward variant of their construction leads to such a multiplicity polytope for any acyclic quiver. Tardos' strongly polynomial time algorithm for combinatorial linear programming along with the saturation property then implies that moment cone membership can be decided in strongly polynomial time. The analogous question for semi-invariants remains open.
0710.4751
EDA Publishing Association
Lars Wehmeyer, Peter Marwedel
Influence of Memory Hierarchies on Predictability for Time Constrained Embedded Software
Submitted on behalf of EDAA (http://www.edaa.com/)
Dans Design, Automation and Test in Europe - DATE'05, Munich : Allemagne (2005)
null
null
cs.AR
null
Safety-critical embedded systems having to meet real-time constraints are expected to be highly predictable in order to guarantee at design time that certain timing deadlines will always be met. This requirement usually prevents designers from utilizing caches due to their highly dynamic, thus hardly predictable behavior. The integration of scratchpad memories represents an alternative approach which allows the system to benefit from a performance gain comparable to that of caches while at the same time maintaining predictability. In this work, we compare the impact of scratchpad memories and caches on worst case execution time (WCET) analysis results. We show that caches, despite requiring complex analysis techniques, can have a negative impact on the predicted WCET, while the estimated WCET for scratchpad memories scales with the achieved performance gain at no extra analysis cost.
[ { "created": "Thu, 25 Oct 2007 09:51:11 GMT", "version": "v1" } ]
2011-11-09
[ [ "Wehmeyer", "Lars", "" ], [ "Marwedel", "Peter", "" ] ]
Safety-critical embedded systems having to meet real-time constraints are expected to be highly predictable in order to guarantee at design time that certain timing deadlines will always be met. This requirement usually prevents designers from utilizing caches due to their highly dynamic, thus hardly predictable behavior. The integration of scratchpad memories represents an alternative approach which allows the system to benefit from a performance gain comparable to that of caches while at the same time maintaining predictability. In this work, we compare the impact of scratchpad memories and caches on worst case execution time (WCET) analysis results. We show that caches, despite requiring complex analysis techniques, can have a negative impact on the predicted WCET, while the estimated WCET for scratchpad memories scales with the achieved performance gain at no extra analysis cost.
2401.00978
Ke Li
Shuang Li, Ke Li, Wei Li, Ming Yang
Evolutionary Alternating Direction Method of Multipliers for Constrained Multi-Objective Optimization with Unknown Constraints
29 pages, 17 figures
null
null
COLALab Report #2024002
cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design. Constraint violation has been a building block in designing evolutionary multi-objective optimization algorithms for solving constrained multi-objective optimization problems. However, in certain scenarios, constraint functions might be unknown or inadequately defined, making constraint violation unattainable and potentially misleading for conventional constrained evolutionary multi-objective optimization algorithms. To address this issue, we present a first-of-its-kind evolutionary optimization framework, inspired by the principles of the alternating direction method of multipliers, that decouples objective and constraint functions. This framework tackles CMOPs with unknown constraints by reformulating the original problem into an additive form of two subproblems, each of which is allotted a dedicated evolutionary population. Notably, these two populations operate towards complementary evolutionary directions during their optimization processes. In order to minimize discrepancy, their evolutionary directions alternate, aiding the discovery of feasible solutions. Comparative experiments conducted against five state-of-the-art constrained evolutionary multi-objective optimization algorithms, on 120 benchmark test problem instances with varying properties, as well as two real-world engineering optimization problems, demonstrate the effectiveness and superiority of our proposed framework. Its salient features include faster convergence and enhanced resilience to various Pareto front shapes.
[ { "created": "Tue, 2 Jan 2024 00:38:20 GMT", "version": "v1" } ]
2024-01-03
[ [ "Li", "Shuang", "" ], [ "Li", "Ke", "" ], [ "Li", "Wei", "" ], [ "Yang", "Ming", "" ] ]
Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design. Constraint violation has been a building block in designing evolutionary multi-objective optimization algorithms for solving constrained multi-objective optimization problems. However, in certain scenarios, constraint functions might be unknown or inadequately defined, making constraint violation unattainable and potentially misleading for conventional constrained evolutionary multi-objective optimization algorithms. To address this issue, we present a first-of-its-kind evolutionary optimization framework, inspired by the principles of the alternating direction method of multipliers, that decouples objective and constraint functions. This framework tackles CMOPs with unknown constraints by reformulating the original problem into an additive form of two subproblems, each of which is allotted a dedicated evolutionary population. Notably, these two populations operate towards complementary evolutionary directions during their optimization processes. In order to minimize discrepancy, their evolutionary directions alternate, aiding the discovery of feasible solutions. Comparative experiments conducted against five state-of-the-art constrained evolutionary multi-objective optimization algorithms, on 120 benchmark test problem instances with varying properties, as well as two real-world engineering optimization problems, demonstrate the effectiveness and superiority of our proposed framework. Its salient features include faster convergence and enhanced resilience to various Pareto front shapes.
2208.02244
Colin Topping
Colin Topping, Ola Michalec, Awais Rashid
Contrasting global approaches for identifying and managing cybersecurity risks in supply chains
8 pages, 2 figures
null
null
null
cs.CR cs.CY
http://creativecommons.org/licenses/by/4.0/
Supply chains are increasingly targeted by threat actors. Using a recent taxonomy, we contrast the diverse levels of detail given by national authorities. The threat is commonly acknowledged, but guidance is disjointed. NIST SP 800-161 aligns closely with the taxonomy and offers a potential pathway towards a common set of principles.
[ { "created": "Wed, 3 Aug 2022 17:50:16 GMT", "version": "v1" } ]
2022-08-04
[ [ "Topping", "Colin", "" ], [ "Michalec", "Ola", "" ], [ "Rashid", "Awais", "" ] ]
Supply chains are increasingly targeted by threat actors. Using a recent taxonomy, we contrast the diverse levels of detail given by national authorities. The threat is commonly acknowledged, but guidance is disjointed. NIST SP 800-161 aligns closely with the taxonomy and offers a potential pathway towards a common set of principles.
2401.00315
Yifan Su
Yifan Su, Rishi Veerapaneni, Jiaoyang Li
Bidirectional Temporal Plan Graph: Enabling Switchable Passing Orders for More Efficient Multi-Agent Path Finding Plan Execution
Accepted by AAAI-2024
null
null
null
cs.AI cs.MA cs.RO
http://creativecommons.org/licenses/by/4.0/
The Multi-Agent Path Finding (MAPF) problem involves planning collision-free paths for multiple agents in a shared environment. The majority of MAPF solvers rely on the assumption that an agent can arrive at a specific location at a specific timestep. However, real-world execution uncertainties can cause agents to deviate from this assumption, leading to collisions and deadlocks. Prior research solves this problem by having agents follow a Temporal Plan Graph (TPG), enforcing a consistent passing order at every location as defined in the MAPF plan. However, we show that TPGs are overly strict because, in some circumstances, satisfying the passing order requires agents to wait unnecessarily, leading to longer execution time. To overcome this issue, we introduce a new graphical representation called a Bidirectional Temporal Plan Graph (BTPG), which allows switching passing orders during execution to avoid unnecessary waiting time. We design two anytime algorithms for constructing a BTPG: BTPG-na\"ive and BTPG-optimized. Experimental results show that following BTPGs consistently outperforms following TPGs, reducing unnecessary waits by 8-20%.
[ { "created": "Sat, 30 Dec 2023 20:23:27 GMT", "version": "v1" }, { "created": "Sun, 7 Jan 2024 01:23:49 GMT", "version": "v2" } ]
2024-01-09
[ [ "Su", "Yifan", "" ], [ "Veerapaneni", "Rishi", "" ], [ "Li", "Jiaoyang", "" ] ]
The Multi-Agent Path Finding (MAPF) problem involves planning collision-free paths for multiple agents in a shared environment. The majority of MAPF solvers rely on the assumption that an agent can arrive at a specific location at a specific timestep. However, real-world execution uncertainties can cause agents to deviate from this assumption, leading to collisions and deadlocks. Prior research solves this problem by having agents follow a Temporal Plan Graph (TPG), enforcing a consistent passing order at every location as defined in the MAPF plan. However, we show that TPGs are overly strict because, in some circumstances, satisfying the passing order requires agents to wait unnecessarily, leading to longer execution time. To overcome this issue, we introduce a new graphical representation called a Bidirectional Temporal Plan Graph (BTPG), which allows switching passing orders during execution to avoid unnecessary waiting time. We design two anytime algorithms for constructing a BTPG: BTPG-na\"ive and BTPG-optimized. Experimental results show that following BTPGs consistently outperforms following TPGs, reducing unnecessary waits by 8-20%.
2204.01193
Nu Hoang
Thien-Nu Hoang, Daehee Kim
Detecting In-vehicle Intrusion via Semi-supervised Learning-based Convolutional Adversarial Autoencoders
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
With the development of autonomous vehicle technology, the controller area network (CAN) bus has become the de facto standard for an in-vehicle communication system because of its simplicity and efficiency. However, without any encryption and authentication mechanisms, the in-vehicle network using the CAN protocol is susceptible to a wide range of attacks. Many studies, which are mostly based on machine learning, have proposed installing an intrusion detection system (IDS) for anomaly detection in the CAN bus system. Although machine learning methods have many advantages for IDS, previous models usually require a large amount of labeled data, which results in high time and labor costs. To handle this problem, we propose a novel semi-supervised learning-based convolutional adversarial autoencoder model in this paper. The proposed model combines two popular deep learning models: autoencoder and generative adversarial networks. First, the model is trained with unlabeled data to learn the manifolds of normal and attack patterns. Then, only a small number of labeled samples are used in supervised training. The proposed model can detect various kinds of message injection attacks, such as DoS, fuzzy, and spoofing, as well as unknown attacks. The experimental results show that the proposed model achieves the highest F1 score of 0.99 and a low error rate of 0.1\% with limited labeled data compared to other supervised methods. In addition, we show that the model can meet the real-time requirement by analyzing the model complexity in terms of the number of trainable parameters and inference time. This study successfully reduced the number of model parameters by five times and the inference time by eight times, compared to a state-of-the-art model.
[ { "created": "Mon, 4 Apr 2022 00:50:27 GMT", "version": "v1" } ]
2022-04-05
[ [ "Hoang", "Thien-Nu", "" ], [ "Kim", "Daehee", "" ] ]
With the development of autonomous vehicle technology, the controller area network (CAN) bus has become the de facto standard for an in-vehicle communication system because of its simplicity and efficiency. However, without any encryption and authentication mechanisms, the in-vehicle network using the CAN protocol is susceptible to a wide range of attacks. Many studies, which are mostly based on machine learning, have proposed installing an intrusion detection system (IDS) for anomaly detection in the CAN bus system. Although machine learning methods have many advantages for IDS, previous models usually require a large amount of labeled data, which results in high time and labor costs. To handle this problem, we propose a novel semi-supervised learning-based convolutional adversarial autoencoder model in this paper. The proposed model combines two popular deep learning models: autoencoder and generative adversarial networks. First, the model is trained with unlabeled data to learn the manifolds of normal and attack patterns. Then, only a small number of labeled samples are used in supervised training. The proposed model can detect various kinds of message injection attacks, such as DoS, fuzzy, and spoofing, as well as unknown attacks. The experimental results show that the proposed model achieves the highest F1 score of 0.99 and a low error rate of 0.1\% with limited labeled data compared to other supervised methods. In addition, we show that the model can meet the real-time requirement by analyzing the model complexity in terms of the number of trainable parameters and inference time. This study successfully reduced the number of model parameters by five times and the inference time by eight times, compared to a state-of-the-art model.
2110.10151
Miko Stulajter
Miko M. Stulajter and Ronald M. Caplan and Jon A. Linker
Can Fortran's 'do concurrent' replace directives for accelerated computing?
18 pages, 2 figures, Accepted for publication at WACCPD 2021
null
null
null
cs.MS cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, there has been growing interest in using standard language constructs (e.g. C++'s Parallel Algorithms and Fortran's do concurrent) for accelerated computing as an alternative to directive-based APIs (e.g. OpenMP and OpenACC). These constructs have the potential to be more portable, and some compilers already support (or plan to support) such standards. Here, we look at the current capabilities, portability, and performance of replacing directives with Fortran's do concurrent using a mini-app that currently implements OpenACC for GPU acceleration and OpenMP for multi-core CPU parallelism. We replace as many directives as possible with do concurrent, testing various configurations and compiler options within three major compilers: GNU's gfortran, NVIDIA's nvfortran, and Intel's ifort. We find that with the right compiler versions and flags, many directives can be replaced without loss of performance or portability, and, in the case of nvfortran, they can all be replaced. We discuss limitations that may apply to more complicated codes and future language additions that may mitigate them. The software and Singularity containers are publicly provided to allow the results to be reproduced.
[ { "created": "Mon, 18 Oct 2021 23:01:07 GMT", "version": "v1" } ]
2021-10-22
[ [ "Stulajter", "Miko M.", "" ], [ "Caplan", "Ronald M.", "" ], [ "Linker", "Jon A.", "" ] ]
Recently, there has been growing interest in using standard language constructs (e.g. C++'s Parallel Algorithms and Fortran's do concurrent) for accelerated computing as an alternative to directive-based APIs (e.g. OpenMP and OpenACC). These constructs have the potential to be more portable, and some compilers already support (or plan to support) such standards. Here, we look at the current capabilities, portability, and performance of replacing directives with Fortran's do concurrent using a mini-app that currently implements OpenACC for GPU acceleration and OpenMP for multi-core CPU parallelism. We replace as many directives as possible with do concurrent, testing various configurations and compiler options within three major compilers: GNU's gfortran, NVIDIA's nvfortran, and Intel's ifort. We find that with the right compiler versions and flags, many directives can be replaced without loss of performance or portability, and, in the case of nvfortran, they can all be replaced. We discuss limitations that may apply to more complicated codes and future language additions that may mitigate them. The software and Singularity containers are publicly provided to allow the results to be reproduced.
2006.05044
Baocheng Zhu
Baocheng Zhu, Shijun Wang and James Zhang
Neural Physicist: Learning Physical Dynamics from Image Sequences
19 pages, 20 figures
null
null
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a novel architecture named Neural Physicist (NeurPhy) to learn physical dynamics directly from image sequences using deep neural networks. For any physical system, given the global system parameters, the time evolution of states is governed by the underlying physical laws. How to learn meaningful system representations in an end-to-end way and estimate accurate state transition dynamics facilitating long-term prediction have been long-standing challenges. In this paper, by leveraging recent progress in representation learning and state space models (SSMs), we propose NeurPhy, which uses a variational auto-encoder (VAE) to extract the underlying Markovian dynamic state at each time step, a neural process (NP) to extract the global system parameters, and a non-linear non-recurrent stochastic state space model to learn the physical dynamic transition. We apply NeurPhy to two physical experimental environments, i.e., damped pendulum and planetary orbital motion, and achieve promising results. Our model can not only extract physically meaningful state representations, but also learn the state transition dynamics, enabling long-term predictions for unseen image sequences. Furthermore, from the manifold dimension of the latent state space, we can easily identify the degrees of freedom (DoF) of the underlying physical systems.
[ { "created": "Tue, 9 Jun 2020 04:36:51 GMT", "version": "v1" } ]
2020-06-11
[ [ "Zhu", "Baocheng", "" ], [ "Wang", "Shijun", "" ], [ "Zhang", "James", "" ] ]
We present a novel architecture named Neural Physicist (NeurPhy) to learn physical dynamics directly from image sequences using deep neural networks. For any physical system, given the global system parameters, the time evolution of states is governed by the underlying physical laws. How to learn meaningful system representations in an end-to-end way and estimate accurate state transition dynamics facilitating long-term prediction have been long-standing challenges. In this paper, by leveraging recent progress in representation learning and state space models (SSMs), we propose NeurPhy, which uses a variational auto-encoder (VAE) to extract the underlying Markovian dynamic state at each time step, a neural process (NP) to extract the global system parameters, and a non-linear non-recurrent stochastic state space model to learn the physical dynamic transition. We apply NeurPhy to two physical experimental environments, i.e., damped pendulum and planetary orbital motion, and achieve promising results. Our model can not only extract physically meaningful state representations, but also learn the state transition dynamics, enabling long-term predictions for unseen image sequences. Furthermore, from the manifold dimension of the latent state space, we can easily identify the degrees of freedom (DoF) of the underlying physical systems.
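The damped pendulum used as one of NeurPhy's experimental environments can be simulated directly. A minimal sketch of generating the ground-truth state trajectories that such a model would observe as rendered image sequences; the function name, the integrator choice (semi-implicit Euler), and the constants are our own assumptions, not the paper's:

```python
import math

def damped_pendulum(theta0, omega0, gamma=0.2, g_over_l=9.81, dt=0.01, steps=1000):
    """Simulate theta'' = -gamma*theta' - (g/l)*sin(theta) and return the
    sequence of (angle, angular velocity) states."""
    theta, omega = theta0, omega0
    states = [(theta, omega)]
    for _ in range(steps):
        # Semi-implicit Euler: update velocity first, then position with it.
        omega += dt * (-gamma * omega - g_over_l * math.sin(theta))
        theta += dt * omega
        states.append((theta, omega))
    return states

# A 10-second trajectory starting from rest at 0.5 rad; damping should
# steadily drain the pendulum's mechanical energy.
traj = damped_pendulum(theta0=0.5, omega0=0.0)
```

Here gamma and g_over_l play the role of the "global system parameters" the neural process is meant to recover.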
1612.00478
Noranart Vesdapunt
Jonathan Shen, Noranart Vesdapunt, Vishnu N. Boddeti, Kris M. Kitani
In Teacher We Trust: Learning Compressed Models for Pedestrian Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the variance in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. We succeed in training a model that contains $400\times$ fewer parameters than the large network while outperforming AlexNet on the Caltech Pedestrian Dataset.
[ { "created": "Thu, 1 Dec 2016 21:37:19 GMT", "version": "v1" } ]
2016-12-05
[ [ "Shen", "Jonathan", "" ], [ "Vesdapunt", "Noranart", "" ], [ "Boddeti", "Vishnu N.", "" ], [ "Kitani", "Kris M.", "" ] ]
Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the variance in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. We succeed in training a model that contains $400\times$ fewer parameters than the large network while outperforming AlexNet on the Caltech Pedestrian Dataset.
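Standard Knowledge Distillation, the baseline this paper starts from, trains the small network to match temperature-softened teacher outputs. A minimal numpy sketch of that soft-target loss; the temperature value and function names are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 as in standard Knowledge Distillation.
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([[2.0, 0.5, -1.0]])
aligned = distillation_loss(teacher, teacher)        # identical logits: zero loss
shifted = distillation_loss(teacher + 1.0, teacher)  # uniform logit shift: also zero
```

The invariance to a uniform logit shift shows the loss only constrains the distribution the student produces, not its raw logits.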
2311.02405
Yo-Seb Jeon
Seonjung Kim, Yongjeong Oh, and Yo-Seb Jeon
SplitMAC: Wireless Split Learning over Multiple Access Channels
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel split learning (SL) framework, referred to as SplitMAC, which reduces the latency of SL by leveraging simultaneous uplink transmission over multiple access channels. The key strategy is to divide devices into multiple groups and allow the devices within the same group to simultaneously transmit their smashed data and device-side models over the multiple access channels. The optimization problem of device grouping to minimize SL latency is formulated, and the benefit of device grouping in reducing the uplink latency of SL is theoretically derived. By examining a two-device grouping case, two asymptotically-optimal algorithms are devised for device grouping in low and high signal-to-noise ratio (SNR) scenarios, respectively, while providing proofs of their optimality. By merging these algorithms, a near-optimal device grouping algorithm is proposed to cover a wide range of SNR. Our SL framework is also extended to consider practical fading channels and to support a general group size. Simulation results demonstrate that our SL framework with the proposed device grouping algorithm is superior to existing SL frameworks in reducing SL latency.
[ { "created": "Sat, 4 Nov 2023 13:59:26 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 12:46:02 GMT", "version": "v2" } ]
2024-03-20
[ [ "Kim", "Seonjung", "" ], [ "Oh", "Yongjeong", "" ], [ "Jeon", "Yo-Seb", "" ] ]
This paper presents a novel split learning (SL) framework, referred to as SplitMAC, which reduces the latency of SL by leveraging simultaneous uplink transmission over multiple access channels. The key strategy is to divide devices into multiple groups and allow the devices within the same group to simultaneously transmit their smashed data and device-side models over the multiple access channels. The optimization problem of device grouping to minimize SL latency is formulated, and the benefit of device grouping in reducing the uplink latency of SL is theoretically derived. By examining a two-device grouping case, two asymptotically-optimal algorithms are devised for device grouping in low and high signal-to-noise ratio (SNR) scenarios, respectively, while providing proofs of their optimality. By merging these algorithms, a near-optimal device grouping algorithm is proposed to cover a wide range of SNR. Our SL framework is also extended to consider practical fading channels and to support a general group size. Simulation results demonstrate that our SL framework with the proposed device grouping algorithm is superior to existing SL frameworks in reducing SL latency.
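The grouping optimization can be illustrated with a toy brute-force search over two-device groups. Everything below (per-device uplink times, a group's latency being that of its slowest member because members transmit simultaneously, groups served sequentially) is our simplified reading of the setup, not the paper's algorithm:

```python
def pairings(devices):
    """Enumerate all ways to split an even-sized device list into unordered pairs."""
    if not devices:
        yield []
        return
    first, rest_all = devices[0], devices[1:]
    for i, partner in enumerate(rest_all):
        rest = rest_all[:i] + rest_all[i + 1:]
        for tail in pairings(rest):
            yield [(first, partner)] + tail

def total_latency(grouping, t):
    # Devices in a group transmit simultaneously over the multiple access
    # channel, so the group's uplink time is its slowest member's time;
    # groups are then served one after another.
    return sum(max(t[a], t[b]) for a, b in grouping)

def best_grouping(t):
    """Exhaustive search for the two-device grouping minimizing total uplink
    latency; t[i] is the (assumed) time device i needs for its smashed data."""
    return min(pairings(list(range(len(t)))), key=lambda g: total_latency(g, t))

t = [4.0, 1.0, 3.0, 2.0]   # per-device uplink times
g = best_grouping(t)
```

On this instance the search pairs the two slowest devices together (total 6.0) rather than slow-with-fast (total 7.0), showing why the grouping choice matters for latency.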
2406.02726
Sanghyun Lee
Sanghyun Lee, Chanyoung Park
Temporal Graph Learning Recurrent Neural Network for Traffic Forecasting
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate traffic flow forecasting is a crucial research topic in transportation management. However, it is a challenging problem due to rapidly changing traffic conditions, high nonlinearity of traffic flow, and complex spatial and temporal correlations of road networks. Most existing studies either try to capture the spatial dependencies between roads using the same semantic graph over different time steps, or assume all sensors on the roads are equally likely to be connected regardless of the distance between them. However, we observe that the spatial dependencies between roads indeed change over time, and two distant roads are not likely to be helpful to each other when predicting the traffic flow, both of which limit the performance of existing studies. In this paper, we propose Temporal Graph Learning Recurrent Neural Network (TGLRN) to address these problems. More precisely, to effectively model the nature of time series, we leverage Recurrent Neural Networks (RNNs) to dynamically construct a graph at each time step, thereby capturing the time-evolving spatial dependencies between roads (i.e., microscopic view). Simultaneously, we provide the Adaptive Structure Information to the model, ensuring that close and consecutive sensors are considered to be more important for predicting the traffic flow (i.e., macroscopic view). Furthermore, to endow TGLRN with robustness, we introduce an edge sampling strategy when constructing the graph at each time step, which eventually leads to further improvements on the model performance. Experimental results on four commonly used real-world benchmark datasets show the effectiveness of TGLRN.
[ { "created": "Tue, 4 Jun 2024 19:08:40 GMT", "version": "v1" } ]
2024-06-06
[ [ "Lee", "Sanghyun", "" ], [ "Park", "Chanyoung", "" ] ]
Accurate traffic flow forecasting is a crucial research topic in transportation management. However, it is a challenging problem due to rapidly changing traffic conditions, high nonlinearity of traffic flow, and complex spatial and temporal correlations of road networks. Most existing studies either try to capture the spatial dependencies between roads using the same semantic graph over different time steps, or assume all sensors on the roads are equally likely to be connected regardless of the distance between them. However, we observe that the spatial dependencies between roads indeed change over time, and two distant roads are not likely to be helpful to each other when predicting the traffic flow, both of which limit the performance of existing studies. In this paper, we propose Temporal Graph Learning Recurrent Neural Network (TGLRN) to address these problems. More precisely, to effectively model the nature of time series, we leverage Recurrent Neural Networks (RNNs) to dynamically construct a graph at each time step, thereby capturing the time-evolving spatial dependencies between roads (i.e., microscopic view). Simultaneously, we provide the Adaptive Structure Information to the model, ensuring that close and consecutive sensors are considered to be more important for predicting the traffic flow (i.e., macroscopic view). Furthermore, to endow TGLRN with robustness, we introduce an edge sampling strategy when constructing the graph at each time step, which eventually leads to further improvements on the model performance. Experimental results on four commonly used real-world benchmark datasets show the effectiveness of TGLRN.
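The idea of a per-time-step graph that both respects sensor distance and randomly samples edges can be sketched as follows; the similarity kernel, radius, and keep probability are illustrative assumptions, not TGLRN's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_adjacency(features, positions, radius=2.0, keep_prob=0.8):
    """Illustrative per-time-step graph: edge weights come from current
    feature similarity, distant sensors are disconnected, and surviving
    edges are randomly subsampled for robustness."""
    d = np.abs(positions[:, None] - positions[None, :])           # pairwise distances
    sim = np.exp(-np.abs(features[:, None] - features[None, :]))  # feature similarity
    adj = np.where(d <= radius, sim, 0.0)        # distant roads get no edge
    mask = rng.random(adj.shape) < keep_prob     # edge sampling
    adj = adj * mask
    np.fill_diagonal(adj, 1.0)                   # always keep self-loops
    return adj

positions = np.array([0.0, 1.0, 10.0])   # third sensor is far down the road
features = np.array([0.5, 0.6, 0.55])    # current traffic readings
A = step_adjacency(features, positions)
```

Because `features` changes each time step, rebuilding `A` per step yields the time-evolving spatial dependencies the abstract describes.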
1808.06088
Jingfeng Wu
Bing Yu, Jingfeng Wu, Jinwen Ma and Zhanxing Zhu
Tangent-Normal Adversarial Regularization for Semi-supervised Learning
CVPR 2019
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compared with standard supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. A recently proposed method, virtual adversarial training (VAT), smartly performs adversarial training without label information to impose local smoothness on the classifier, which is especially beneficial to semi-supervised learning. In this work, we propose tangent-normal adversarial regularization (TNAR) as an extension of VAT that takes the data manifold into consideration. The proposed TNAR is composed of two complementary parts, the tangent adversarial regularization (TAR) and the normal adversarial regularization (NAR). In TAR, VAT is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while in NAR, VAT is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against noise that causes the observed data to deviate from the underlying data manifold. As demonstrated by experiments on both artificial and practical datasets, our proposed TAR and NAR complement each other, and jointly outperform other state-of-the-art methods for semi-supervised learning.
[ { "created": "Sat, 18 Aug 2018 14:30:57 GMT", "version": "v1" }, { "created": "Sat, 24 Nov 2018 13:44:47 GMT", "version": "v2" }, { "created": "Fri, 1 Mar 2019 14:57:07 GMT", "version": "v3" } ]
2019-03-04
[ [ "Yu", "Bing", "" ], [ "Wu", "Jingfeng", "" ], [ "Ma", "Jinwen", "" ], [ "Zhu", "Zhanxing", "" ] ]
Compared with standard supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. A recently proposed method, virtual adversarial training (VAT), smartly performs adversarial training without label information to impose local smoothness on the classifier, which is especially beneficial to semi-supervised learning. In this work, we propose tangent-normal adversarial regularization (TNAR) as an extension of VAT that takes the data manifold into consideration. The proposed TNAR is composed of two complementary parts, the tangent adversarial regularization (TAR) and the normal adversarial regularization (NAR). In TAR, VAT is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while in NAR, VAT is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against noise that causes the observed data to deviate from the underlying data manifold. As demonstrated by experiments on both artificial and practical datasets, our proposed TAR and NAR complement each other, and jointly outperform other state-of-the-art methods for semi-supervised learning.
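Given an orthonormal basis of the tangent space at a data point (assumed available here, e.g. from a generative model of the data manifold), splitting a perturbation into its tangent and normal components is plain linear algebra. A minimal sketch; the VAT step that chooses the adversarial direction is omitted:

```python
import numpy as np

def split_perturbation(r, tangent_basis):
    """Decompose a perturbation r into its tangent and normal components
    with respect to an orthonormal basis of the manifold's tangent space."""
    B = np.asarray(tangent_basis)   # shape (k, d), rows orthonormal
    r_tan = B.T @ (B @ r)           # projection onto the tangent space
    r_nor = r - r_tan               # remainder lies in the normal space
    return r_tan, r_nor

# Toy example: the manifold's tangent space is the x-y plane in R^3.
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
r = np.array([0.3, -0.2, 0.5])
r_tan, r_nor = split_perturbation(r, basis)
```

TAR would then perform adversarial training along `r_tan`, and NAR along `r_nor`.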
2211.12320
Jun Liang
Jun Liang, Songsen Yu, Huan Yang
A Cross-Residual Learning for Image Recognition
With fine training tricks and several key components from the current SOTA added, the performance of C-ResNet may be greatly improved
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ResNets and their variants play an important role in various fields of image recognition. This paper presents another variant of ResNets, a kind of cross-residual learning network called C-ResNets, which requires less computation and fewer parameters than ResNets. C-ResNets increases the information interaction between modules by densifying jumpers and enriches the role of jumpers. In addition, some meticulous designs of jumpers and channel counts can further reduce the resource consumption of C-ResNets and increase its classification performance. In order to test the effectiveness of C-ResNets, we use the same hyperparameter settings as fine-tuned ResNets in the experiments. We test our C-ResNets on the datasets MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, CALTECH-101 and SVHN. Compared with fine-tuned ResNets, C-ResNets not only maintains the classification performance, but also enormously reduces the amount of calculation and the number of parameters, which greatly saves GPU and GPU memory resources. Therefore, our C-ResNets are competitive and viable alternatives to ResNets in various scenarios. Code is available at https://github.com/liangjunhello/C-ResNet
[ { "created": "Tue, 22 Nov 2022 15:12:55 GMT", "version": "v1" } ]
2022-11-23
[ [ "Liang", "Jun", "" ], [ "Yu", "Songsen", "" ], [ "Yang", "Huan", "" ] ]
ResNets and their variants play an important role in various fields of image recognition. This paper presents another variant of ResNets, a kind of cross-residual learning network called C-ResNets, which requires less computation and fewer parameters than ResNets. C-ResNets increases the information interaction between modules by densifying jumpers and enriches the role of jumpers. In addition, some meticulous designs of jumpers and channel counts can further reduce the resource consumption of C-ResNets and increase its classification performance. In order to test the effectiveness of C-ResNets, we use the same hyperparameter settings as fine-tuned ResNets in the experiments. We test our C-ResNets on the datasets MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, CALTECH-101 and SVHN. Compared with fine-tuned ResNets, C-ResNets not only maintains the classification performance, but also enormously reduces the amount of calculation and the number of parameters, which greatly saves GPU and GPU memory resources. Therefore, our C-ResNets are competitive and viable alternatives to ResNets in various scenarios. Code is available at https://github.com/liangjunhello/C-ResNet
2404.15721
Ankit Vani
Ankit Vani, Bac Nguyen, Samuel Lavoie, Ranjay Krishna, Aaron Courville
SPARO: Selective Attention for Robust and Compositional Transformer Encodings for Vision
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selective attention helps us focus on task-relevant aspects in the constant flood of our sensory input. This constraint in our perception allows us to robustly generalize under distractions and to new compositions of perceivable concepts. Transformers employ a similar notion of attention in their architecture, but representation learning models with transformer backbones like CLIP and DINO often fail to demonstrate robustness and compositionality. We highlight a missing architectural prior: unlike human perception, transformer encodings do not separately attend over individual concepts. In response, we propose SPARO, a read-out mechanism that partitions encodings into separately-attended slots, each produced by a single attention head. Using SPARO with CLIP imparts an inductive bias that the vision and text modalities are different views of a shared compositional world with the same corresponding concepts. Using SPARO, we demonstrate improvements on downstream recognition, robustness, retrieval, and compositionality benchmarks with CLIP (up to +14% for ImageNet, +4% for SugarCrepe), and on nearest neighbors and linear probe for ImageNet with DINO (+3% each). We also showcase a powerful ability to intervene and select individual SPARO concepts to further improve downstream task performance (up from +4% to +9% for SugarCrepe) and use this ability to study the robustness of SPARO's representation structure. Finally, we provide insights through ablation experiments and visualization of learned concepts.
[ { "created": "Wed, 24 Apr 2024 08:15:36 GMT", "version": "v1" } ]
2024-04-25
[ [ "Vani", "Ankit", "" ], [ "Nguyen", "Bac", "" ], [ "Lavoie", "Samuel", "" ], [ "Krishna", "Ranjay", "" ], [ "Courville", "Aaron", "" ] ]
Selective attention helps us focus on task-relevant aspects in the constant flood of our sensory input. This constraint in our perception allows us to robustly generalize under distractions and to new compositions of perceivable concepts. Transformers employ a similar notion of attention in their architecture, but representation learning models with transformer backbones like CLIP and DINO often fail to demonstrate robustness and compositionality. We highlight a missing architectural prior: unlike human perception, transformer encodings do not separately attend over individual concepts. In response, we propose SPARO, a read-out mechanism that partitions encodings into separately-attended slots, each produced by a single attention head. Using SPARO with CLIP imparts an inductive bias that the vision and text modalities are different views of a shared compositional world with the same corresponding concepts. Using SPARO, we demonstrate improvements on downstream recognition, robustness, retrieval, and compositionality benchmarks with CLIP (up to +14% for ImageNet, +4% for SugarCrepe), and on nearest neighbors and linear probe for ImageNet with DINO (+3% each). We also showcase a powerful ability to intervene and select individual SPARO concepts to further improve downstream task performance (up from +4% to +9% for SugarCrepe) and use this ability to study the robustness of SPARO's representation structure. Finally, we provide insights through ablation experiments and visualization of learned concepts.
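The slot-per-attention-head read-out can be sketched with single-head cross-attention from learned slot queries over the backbone's token sequence; all shapes, dimensions, and weight names below are assumptions, not SPARO's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def slot_readout(tokens, queries, W_k, W_v):
    """Each slot encoding is produced by a single attention head: one learned
    query attends over the backbone's tokens, independently of other slots."""
    K = tokens @ W_k                                  # (n_tokens, d_head) keys
    V = tokens @ W_v                                  # (n_tokens, d_head) values
    scores = queries @ K.T / np.sqrt(K.shape[-1])     # one query row per slot
    attn = softmax(scores)                            # (n_slots, n_tokens)
    return attn @ V                                   # (n_slots, d_head) slots

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))   # backbone token sequence (e.g. ViT patches)
queries = rng.normal(size=(4, 8))    # 4 learned slot queries
W_k = rng.normal(size=(32, 8))
W_v = rng.normal(size=(32, 8))
slots = slot_readout(tokens, queries, W_k, W_v)
```

The partitioned output is what makes per-concept intervention possible: a downstream user can keep or drop individual rows of `slots`.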
2301.11792
Yunjie He
Yunjie He, Philip John Gorinski, Ieva Staliunaite, Pontus Stenetorp
Graph Attention with Hierarchies for Multi-hop Question Answering
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-hop QA (Question Answering) is the task of finding the answer to a question across multiple documents. In recent years, a number of Deep Learning-based approaches have been proposed to tackle this complex task, as well as a few standard benchmarks to assess models' multi-hop QA capabilities. In this paper, we focus on the well-established HotpotQA benchmark dataset, which requires models to perform answer span extraction as well as support sentence prediction. We present two extensions to the SOTA Graph Neural Network (GNN) based model for HotpotQA, the Hierarchical Graph Network (HGN): (i) we complete the original hierarchical structure by introducing new edges between the query and context sentence nodes; (ii) in the graph propagation step, we propose GATH (Graph ATtention with Hierarchies), a novel extension to hierarchical graph attention networks that makes use of the graph hierarchy to update the node representations in a sequential fashion. Experiments on HotpotQA demonstrate the efficiency of the proposed modifications and support our assumptions about the effects of model-related variables.
[ { "created": "Fri, 27 Jan 2023 15:49:50 GMT", "version": "v1" } ]
2023-01-30
[ [ "He", "Yunjie", "" ], [ "Gorinski", "Philip John", "" ], [ "Staliunaite", "Ieva", "" ], [ "Stenetorp", "Pontus", "" ] ]
Multi-hop QA (Question Answering) is the task of finding the answer to a question across multiple documents. In recent years, a number of Deep Learning-based approaches have been proposed to tackle this complex task, as well as a few standard benchmarks to assess models' multi-hop QA capabilities. In this paper, we focus on the well-established HotpotQA benchmark dataset, which requires models to perform answer span extraction as well as support sentence prediction. We present two extensions to the SOTA Graph Neural Network (GNN) based model for HotpotQA, the Hierarchical Graph Network (HGN): (i) we complete the original hierarchical structure by introducing new edges between the query and context sentence nodes; (ii) in the graph propagation step, we propose GATH (Graph ATtention with Hierarchies), a novel extension to hierarchical graph attention networks that makes use of the graph hierarchy to update the node representations in a sequential fashion. Experiments on HotpotQA demonstrate the efficiency of the proposed modifications and support our assumptions about the effects of model-related variables.
2305.17592
Shubhendu Trivedi
Mircea Petrache, Shubhendu Trivedi
Approximation-Generalization Trade-offs under (Approximate) Group Equivariance
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The explicit incorporation of task-specific inductive biases through symmetry has emerged as a general design precept in the development of high-performance machine learning models. For example, group equivariant neural networks have demonstrated impressive performance across various domains and applications such as protein and drug design. A prevalent intuition about such models is that the integration of relevant symmetry results in enhanced generalization. Moreover, it is posited that when the data and/or the model may only exhibit $\textit{approximate}$ or $\textit{partial}$ symmetry, the optimal or best-performing model is one where the model symmetry aligns with the data symmetry. In this paper, we conduct a formal unified investigation of these intuitions. To begin, we present general quantitative bounds that demonstrate how models capturing task-specific symmetries lead to improved generalization. In fact, our results do not require the transformations to be finite or even form a group and can work with partial or approximate equivariance. Utilizing this quantification, we examine the more general question of model mis-specification i.e. when the model symmetries don't align with the data symmetries. We establish, for a given symmetry group, a quantitative comparison between the approximate/partial equivariance of the model and that of the data distribution, precisely connecting model equivariance error and data equivariance error. Our result delineates conditions under which the model equivariance error is optimal, thereby yielding the best-performing model for the given task and data.
[ { "created": "Sat, 27 May 2023 22:53:37 GMT", "version": "v1" } ]
2023-05-30
[ [ "Petrache", "Mircea", "" ], [ "Trivedi", "Shubhendu", "" ] ]
The explicit incorporation of task-specific inductive biases through symmetry has emerged as a general design precept in the development of high-performance machine learning models. For example, group equivariant neural networks have demonstrated impressive performance across various domains and applications such as protein and drug design. A prevalent intuition about such models is that the integration of relevant symmetry results in enhanced generalization. Moreover, it is posited that when the data and/or the model may only exhibit $\textit{approximate}$ or $\textit{partial}$ symmetry, the optimal or best-performing model is one where the model symmetry aligns with the data symmetry. In this paper, we conduct a formal unified investigation of these intuitions. To begin, we present general quantitative bounds that demonstrate how models capturing task-specific symmetries lead to improved generalization. In fact, our results do not require the transformations to be finite or even form a group and can work with partial or approximate equivariance. Utilizing this quantification, we examine the more general question of model mis-specification i.e. when the model symmetries don't align with the data symmetries. We establish, for a given symmetry group, a quantitative comparison between the approximate/partial equivariance of the model and that of the data distribution, precisely connecting model equivariance error and data equivariance error. Our result delineates conditions under which the model equivariance error is optimal, thereby yielding the best-performing model for the given task and data.
2206.11843
Zhixuan Zhou
Kyrie Zhixuan Zhou, Bohui Shen, Franziska Zimmer, Chuanli Xia, Xin Tong
More Than a Wife and a Mom: A Study of Mom Vlogging Practices in China
26th ACM Conference On Computer-Supported Cooperative Work And Social Computing
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mom vloggers are stay-at-home moms who record and share their daily life through short videos. In this exploratory study, we aspire to understand mom vloggers' motivations, practices, and challenges. Our mixed-methods investigation comprised interviews with 4 mom vloggers in China and a content analysis of the vlogs of 5 other mom vloggers. Mom vloggers' primary motivations are to make money, record daily life, and seek their individual identities and values, meeting their financial and social needs after leaving paid employment. When creating vlog content, mom vloggers encounter various challenges, such as a lack of video visibility, being stretched between intensive motherhood and heavy digital work, and privacy and self-presentation concerns. Based on the findings, we propose design implications toward resolving these challenges and improving mom vloggers' experiences.
[ { "created": "Thu, 23 Jun 2022 17:18:52 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2023 07:40:10 GMT", "version": "v2" } ]
2023-09-28
[ [ "Zhou", "Kyrie Zhixuan", "" ], [ "Shen", "Bohui", "" ], [ "Zimmer", "Franziska", "" ], [ "Xia", "Chuanli", "" ], [ "Tong", "Xin", "" ] ]
Mom vloggers are stay-at-home moms who record and share their daily life through short videos. In this exploratory study, we aim to understand mom vloggers' motivations, practices, and challenges. Our mixed-methods study combined interviews with 4 mom vloggers in China and a content analysis of the vlogs of 5 other mom vloggers. Mom vloggers' primary motivations are to make money, record daily life, and seek their individual identities and values, meeting the financial and social needs that arise after leaving paid employment. When creating vlog content, mom vloggers encounter various challenges, such as a lack of video visibility, being stretched between intensive motherhood and heavy digital work, and privacy and self-presentation concerns. Based on these findings, we propose design implications for resolving these challenges and improving mom vloggers' experiences.
1802.02295
Mengshi Zhang
Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, Sarfraz Khurshid
DeepRoad: GAN-based Metamorphic Autonomous Driving System Testing
7 pages
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While Deep Neural Networks (DNNs) have established the fundamentals of DNN-based autonomous driving systems, they may exhibit erroneous behaviors and cause fatal accidents. To resolve the safety issues of autonomous driving systems, a recent set of testing techniques have been designed to automatically generate test cases, e.g., new input images transformed from the original ones. Unfortunately, many such generated input images often render inferior authenticity, lacking accurate semantic information of the driving scenes and hence compromising the resulting efficacy and reliability. In this paper, we propose DeepRoad, an unsupervised framework to automatically generate large amounts of accurate driving scenes to test the consistency of DNN-based autonomous driving systems across different scenes. In particular, DeepRoad delivers driving scenes with various weather conditions (including those with rather extreme conditions) by applying the Generative Adversarial Networks (GANs) along with the corresponding real-world weather scenes. Moreover, we have implemented DeepRoad to test three well-recognized DNN-based autonomous driving systems. Experimental results demonstrate that DeepRoad can detect thousands of behavioral inconsistencies in these systems.
[ { "created": "Wed, 7 Feb 2018 03:18:44 GMT", "version": "v1" }, { "created": "Wed, 7 Mar 2018 02:30:58 GMT", "version": "v2" } ]
2018-03-08
[ [ "Zhang", "Mengshi", "" ], [ "Zhang", "Yuqun", "" ], [ "Zhang", "Lingming", "" ], [ "Liu", "Cong", "" ], [ "Khurshid", "Sarfraz", "" ] ]
While Deep Neural Networks (DNNs) have established the fundamentals of DNN-based autonomous driving systems, they may exhibit erroneous behaviors and cause fatal accidents. To resolve the safety issues of autonomous driving systems, a recent set of testing techniques have been designed to automatically generate test cases, e.g., new input images transformed from the original ones. Unfortunately, many such generated input images often render inferior authenticity, lacking accurate semantic information of the driving scenes and hence compromising the resulting efficacy and reliability. In this paper, we propose DeepRoad, an unsupervised framework to automatically generate large amounts of accurate driving scenes to test the consistency of DNN-based autonomous driving systems across different scenes. In particular, DeepRoad delivers driving scenes with various weather conditions (including those with rather extreme conditions) by applying the Generative Adversarial Networks (GANs) along with the corresponding real-world weather scenes. Moreover, we have implemented DeepRoad to test three well-recognized DNN-based autonomous driving systems. Experimental results demonstrate that DeepRoad can detect thousands of behavioral inconsistencies in these systems.
2003.13256
Tobias Glasmachers
Tobias Glasmachers, Oswin Krause
The Hessian Estimation Evolution Strategy
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel black-box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it also offers an interesting alternative to the usual covariance matrix update mechanism.
[ { "created": "Mon, 30 Mar 2020 08:01:16 GMT", "version": "v1" }, { "created": "Tue, 9 Jun 2020 07:30:53 GMT", "version": "v2" } ]
2020-06-11
[ [ "Glasmachers", "Tobias", "" ], [ "Krause", "Oswin", "" ] ]
We present a novel black-box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it also offers an interesting alternative to the usual covariance matrix update mechanism.
1804.02528
Iraklis Klampanos
Iraklis A. Klampanos, Athanasios Davvetas, Antonis Koukourikos, Vangelis Karkaletsis
ANNETT-O: An Ontology for Describing Artificial Neural Network Evaluation, Topology and Training
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning models, while effective and versatile, are becoming increasingly complex, often including multiple overlapping networks of arbitrary depths, multiple objectives and non-intuitive training methodologies. This makes it increasingly difficult for researchers and practitioners to design, train and understand them. In this paper we present ANNETT-O, a much-needed, generic and computer-actionable vocabulary for researchers and practitioners to describe their deep learning configurations, training procedures and experiments. The proposed ontology focuses on topological, training and evaluation aspects of complex deep neural configurations, while keeping peripheral entities more succinct. Knowledge bases implementing ANNETT-O can support a wide variety of queries, providing relevant insights to users. In addition to a detailed description of the ontology, we demonstrate its suitability to the task via a number of hypothetical use-cases of increasing complexity.
[ { "created": "Sat, 7 Apr 2018 07:56:29 GMT", "version": "v1" }, { "created": "Thu, 10 May 2018 09:04:59 GMT", "version": "v2" } ]
2018-05-11
[ [ "Klampanos", "Iraklis A.", "" ], [ "Davvetas", "Athanasios", "" ], [ "Koukourikos", "Antonis", "" ], [ "Karkaletsis", "Vangelis", "" ] ]
Deep learning models, while effective and versatile, are becoming increasingly complex, often including multiple overlapping networks of arbitrary depths, multiple objectives and non-intuitive training methodologies. This makes it increasingly difficult for researchers and practitioners to design, train and understand them. In this paper we present ANNETT-O, a much-needed, generic and computer-actionable vocabulary for researchers and practitioners to describe their deep learning configurations, training procedures and experiments. The proposed ontology focuses on topological, training and evaluation aspects of complex deep neural configurations, while keeping peripheral entities more succinct. Knowledge bases implementing ANNETT-O can support a wide variety of queries, providing relevant insights to users. In addition to a detailed description of the ontology, we demonstrate its suitability to the task via a number of hypothetical use-cases of increasing complexity.
1211.6468
Mike Stannett
Mike Stannett and Istv\'an N\'emeti
Using Isabelle to verify special relativity, with application to hypercomputation theory
14 pages, reformatted with minor corrections
Journal of Automated Reasoning, 52,4 (2014), 361-378
10.1007/s10817-013-9292-7
null
cs.LO gr-qc
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logicians at the R\'enyi Mathematical Institute in Budapest have spent several years developing versions of relativity theory (special, general, and other variants) based wholly on first order logic, and have argued in favour of the physical decidability, via exploitation of cosmological phenomena, of formally undecidable questions such as the Halting Problem and the consistency of set theory. The Hungarian theories are very extensive, and their associated proofs are intuitively very satisfying, but this brings its own risks since intuition can sometimes be misleading. As part of a joint project, researchers at Sheffield have recently started generating rigorous machine-verified versions of the Hungarian proofs, so as to demonstrate the soundness of their work. In this paper, we explain the background to the project and demonstrate an Isabelle proof of the theorem "No inertial observer can travel faster than light". This approach to physical theories and physical computability has several pay-offs: (a) we can be certain our intuition hasn't led us astray (or if it has, we can identify where this has happened); (b) we can identify which axioms are specifically required in the proof of each theorem and to what extent those axioms can be weakened (the fewer assumptions we make up-front, the stronger the results); and (c) we can identify whether new formal proof techniques and tactics are needed when tackling physical as opposed to mathematical theories.
[ { "created": "Tue, 27 Nov 2012 22:29:05 GMT", "version": "v1" }, { "created": "Fri, 18 Jan 2013 11:09:06 GMT", "version": "v2" } ]
2018-03-30
[ [ "Stannett", "Mike", "" ], [ "Németi", "István", "" ] ]
Logicians at the R\'enyi Mathematical Institute in Budapest have spent several years developing versions of relativity theory (special, general, and other variants) based wholly on first order logic, and have argued in favour of the physical decidability, via exploitation of cosmological phenomena, of formally undecidable questions such as the Halting Problem and the consistency of set theory. The Hungarian theories are very extensive, and their associated proofs are intuitively very satisfying, but this brings its own risks since intuition can sometimes be misleading. As part of a joint project, researchers at Sheffield have recently started generating rigorous machine-verified versions of the Hungarian proofs, so as to demonstrate the soundness of their work. In this paper, we explain the background to the project and demonstrate an Isabelle proof of the theorem "No inertial observer can travel faster than light". This approach to physical theories and physical computability has several pay-offs: (a) we can be certain our intuition hasn't led us astray (or if it has, we can identify where this has happened); (b) we can identify which axioms are specifically required in the proof of each theorem and to what extent those axioms can be weakened (the fewer assumptions we make up-front, the stronger the results); and (c) we can identify whether new formal proof techniques and tactics are needed when tackling physical as opposed to mathematical theories.
1911.07273
Zhigang Chang
Zhigang Chang, Qin Zhou, Mingyang Yu, Shibao Zheng, Hua Yang, Tai-Pang Wu
Distribution Context Aware Loss for Person Re-identification
IEEE VCIP
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To learn the optimal similarity function between probe and gallery images in person re-identification, effective deep metric learning methods have been extensively explored to obtain discriminative feature embeddings. However, existing metric losses such as the triplet loss and its variants emphasize pair-wise relations but ignore the distribution context in the feature space, leading to inconsistency and sub-optimal performance. In fact, the similarity of one pair not only decides the match of that pair, but also has potential impacts on other sample pairs. In this paper, we propose a novel Distribution Context Aware (DCA) loss based on the triplet loss to combine both numerical similarity and relation similarity in the feature space for better clustering. Extensive experiments on three benchmarks, including Market-1501, DukeMTMC-reID and MSMT17, evidence the favorable performance of our method against the corresponding baseline and other state-of-the-art methods.
[ { "created": "Sun, 17 Nov 2019 16:28:35 GMT", "version": "v1" } ]
2019-11-19
[ [ "Chang", "Zhigang", "" ], [ "Zhou", "Qin", "" ], [ "Yu", "Mingyang", "" ], [ "Zheng", "Shibao", "" ], [ "Yang", "Hua", "" ], [ "Wu", "Tai-Pang", "" ] ]
To learn the optimal similarity function between probe and gallery images in person re-identification, effective deep metric learning methods have been extensively explored to obtain discriminative feature embeddings. However, existing metric losses such as the triplet loss and its variants emphasize pair-wise relations but ignore the distribution context in the feature space, leading to inconsistency and sub-optimal performance. In fact, the similarity of one pair not only decides the match of that pair, but also has potential impacts on other sample pairs. In this paper, we propose a novel Distribution Context Aware (DCA) loss based on the triplet loss to combine both numerical similarity and relation similarity in the feature space for better clustering. Extensive experiments on three benchmarks, including Market-1501, DukeMTMC-reID and MSMT17, evidence the favorable performance of our method against the corresponding baseline and other state-of-the-art methods.
2303.07154
Yun-Da Tsai
Yun-Da Tsai, Tzu-Hsien Tsai, Shou-De Lin
Differential Good Arm Identification
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
This paper targets a variant of the stochastic multi-armed bandit problem called good arm identification (GAI). GAI is a pure-exploration bandit problem whose goal is to output as many good arms as possible using as few samples as possible, where a good arm is defined as an arm whose expected reward is greater than a given threshold. In this work, we propose DGAI, a differentiable good arm identification algorithm that improves the sample complexity of the state-of-the-art HDoC algorithm in a data-driven fashion. We also show that DGAI can further boost performance on the general multi-armed bandit (MAB) problem when a threshold is given as prior knowledge about the arm set. Extensive experiments confirm that our algorithm significantly outperforms the baseline algorithms on both synthetic and real-world datasets for both GAI and MAB tasks.
[ { "created": "Mon, 13 Mar 2023 14:28:21 GMT", "version": "v1" }, { "created": "Thu, 17 Aug 2023 04:09:23 GMT", "version": "v2" }, { "created": "Fri, 16 Feb 2024 00:24:32 GMT", "version": "v3" } ]
2024-02-19
[ [ "Tsai", "Yun-Da", "" ], [ "Tsai", "Tzu-Hsien", "" ], [ "Lin", "Shou-De", "" ] ]
This paper targets a variant of the stochastic multi-armed bandit problem called good arm identification (GAI). GAI is a pure-exploration bandit problem whose goal is to output as many good arms as possible using as few samples as possible, where a good arm is defined as an arm whose expected reward is greater than a given threshold. In this work, we propose DGAI, a differentiable good arm identification algorithm that improves the sample complexity of the state-of-the-art HDoC algorithm in a data-driven fashion. We also show that DGAI can further boost performance on the general multi-armed bandit (MAB) problem when a threshold is given as prior knowledge about the arm set. Extensive experiments confirm that our algorithm significantly outperforms the baseline algorithms on both synthetic and real-world datasets for both GAI and MAB tasks.
1112.3787
Salvador Abreu
Dario Campagna, Beata Sarna-Starosta and Tom Schrijvers
Approximating Constraint Propagation in Datalog
Online Proceedings of the 11th International Colloquium on Implementation of Constraint LOgic Programming Systems (CICLOPS 2011), Lexington, KY, U.S.A., July 10, 2011
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a technique exploiting Datalog with aggregates to improve the performance of programs with arithmetic (in)equalities. Our approach employs a source-to-source program transformation which approximates the propagation technique from Constraint Programming. The experimental evaluation of the approach shows good run-time speed-ups on a range of non-recursive as well as recursive programs. Furthermore, our technique improves upon the constraint magic set transformation approach previously reported in the literature.
[ { "created": "Fri, 16 Dec 2011 12:26:59 GMT", "version": "v1" } ]
2011-12-19
[ [ "Campagna", "Dario", "" ], [ "Sarna-Starosta", "Beata", "" ], [ "Schrijvers", "Tom", "" ] ]
We present a technique exploiting Datalog with aggregates to improve the performance of programs with arithmetic (in)equalities. Our approach employs a source-to-source program transformation which approximates the propagation technique from Constraint Programming. The experimental evaluation of the approach shows good run-time speed-ups on a range of non-recursive as well as recursive programs. Furthermore, our technique improves upon the constraint magic set transformation approach previously reported in the literature.
2405.13039
Arnav Chavan
Arnav Chavan, Nahush Lele, Deepak Gupta
Surgical Feature-Space Decomposition of LLMs: Why, When and How?
Accepted at ACL 2024
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Low-rank approximations of the weight and feature space can enhance the performance of deep learning models, whether in terms of improving generalization or reducing the latency of inference. However, there is no clear consensus yet on \emph{how}, \emph{when} and \emph{why} these approximations are helpful for large language models (LLMs). In this work, we empirically study the efficacy of weight and feature space decomposition in transformer-based LLMs. We demonstrate that surgical decomposition not only provides critical insights into the trade-off between compression and language modelling performance, but also sometimes enhances commonsense reasoning performance of LLMs. Our empirical analysis identifies specific network segments that intrinsically exhibit a low-rank structure. Furthermore, we extend our investigation to the implications of low-rank approximations on model bias. Overall, our findings offer a novel perspective on optimizing LLMs, presenting the low-rank approximation not only as a tool for performance enhancements, but also as a means to potentially rectify biases within these models. Our code is available at \href{https://github.com/nyunAI/SFSD-LLM}{GitHub}.
[ { "created": "Fri, 17 May 2024 07:34:03 GMT", "version": "v1" } ]
2024-05-24
[ [ "Chavan", "Arnav", "" ], [ "Lele", "Nahush", "" ], [ "Gupta", "Deepak", "" ] ]
Low-rank approximations of the weight and feature space can enhance the performance of deep learning models, whether in terms of improving generalization or reducing the latency of inference. However, there is no clear consensus yet on \emph{how}, \emph{when} and \emph{why} these approximations are helpful for large language models (LLMs). In this work, we empirically study the efficacy of weight and feature space decomposition in transformer-based LLMs. We demonstrate that surgical decomposition not only provides critical insights into the trade-off between compression and language modelling performance, but also sometimes enhances commonsense reasoning performance of LLMs. Our empirical analysis identifies specific network segments that intrinsically exhibit a low-rank structure. Furthermore, we extend our investigation to the implications of low-rank approximations on model bias. Overall, our findings offer a novel perspective on optimizing LLMs, presenting the low-rank approximation not only as a tool for performance enhancements, but also as a means to potentially rectify biases within these models. Our code is available at \href{https://github.com/nyunAI/SFSD-LLM}{GitHub}.
2109.13037
Federico Bianchi
Federico Bianchi, Debora Nozza, Dirk Hovy
Language Invariant Properties in Natural Language Processing
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meaning is context-dependent, but many properties of language (should) remain the same even if we transform the context. For example, sentiment, entailment, or speaker properties should be the same in a translation and original of a text. We introduce language invariant properties: i.e., properties that should not change when we transform text, and how they can be used to quantitatively evaluate the robustness of transformation algorithms. We use translation and paraphrasing as transformation examples, but our findings apply more broadly to any transformation. Our results indicate that many NLP transformations change properties like author characteristics, i.e., make them sound more male. We believe that studying these properties will allow NLP to address both social factors and pragmatic aspects of language. We also release an application suite that can be used to evaluate the invariance of transformation applications.
[ { "created": "Mon, 27 Sep 2021 13:23:05 GMT", "version": "v1" }, { "created": "Fri, 1 Oct 2021 14:10:30 GMT", "version": "v2" } ]
2021-10-04
[ [ "Bianchi", "Federico", "" ], [ "Nozza", "Debora", "" ], [ "Hovy", "Dirk", "" ] ]
Meaning is context-dependent, but many properties of language (should) remain the same even if we transform the context. For example, sentiment, entailment, or speaker properties should be the same in a translation and original of a text. We introduce language invariant properties: i.e., properties that should not change when we transform text, and how they can be used to quantitatively evaluate the robustness of transformation algorithms. We use translation and paraphrasing as transformation examples, but our findings apply more broadly to any transformation. Our results indicate that many NLP transformations change properties like author characteristics, i.e., make them sound more male. We believe that studying these properties will allow NLP to address both social factors and pragmatic aspects of language. We also release an application suite that can be used to evaluate the invariance of transformation applications.
2212.00479
Hansang Lee
Hansang Lee, Haeil Lee, Helen Hong, and Junmo Kim
Noisy Label Classification using Label Noise Selection with Test-Time Augmentation Cross-Entropy and NoiseMix Learning
Accepted at the 2nd MICCAI workshop on Data Augmentation, Labeling, and Imperfections (DALI @ MICCAI 2022)
null
10.1007/978-3-031-17027-0_8
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
As the size of the datasets used in deep learning tasks increases, the noisy label problem, i.e., the task of making deep learning robust to incorrectly labeled data, has become increasingly important. In this paper, we propose a method for learning from noisy label data using label noise selection with test-time augmentation (TTA) cross-entropy and classifier learning with the NoiseMix method. For label noise selection, we propose the TTA cross-entropy, which measures the cross-entropy of predictions on test-time augmented training data. For classifier learning, we propose the NoiseMix method, based on the MixUp and BalancedMix methods, which mixes samples from the noisy and the clean label data. In experiments on the ISIC-18 public skin lesion diagnosis dataset, the proposed TTA cross-entropy outperformed the conventional cross-entropy and the TTA uncertainty in detecting label noise data in the label noise selection process. Moreover, the proposed NoiseMix not only outperformed the state-of-the-art methods in classification performance but also showed the most robustness to label noise in classifier learning.
[ { "created": "Thu, 1 Dec 2022 13:05:20 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 05:28:13 GMT", "version": "v2" } ]
2024-07-18
[ [ "Lee", "Hansang", "" ], [ "Lee", "Haeil", "" ], [ "Hong", "Helen", "" ], [ "Kim", "Junmo", "" ] ]
As the size of the datasets used in deep learning tasks increases, the noisy label problem, i.e., the task of making deep learning robust to incorrectly labeled data, has become increasingly important. In this paper, we propose a method for learning from noisy label data using label noise selection with test-time augmentation (TTA) cross-entropy and classifier learning with the NoiseMix method. For label noise selection, we propose the TTA cross-entropy, which measures the cross-entropy of predictions on test-time augmented training data. For classifier learning, we propose the NoiseMix method, based on the MixUp and BalancedMix methods, which mixes samples from the noisy and the clean label data. In experiments on the ISIC-18 public skin lesion diagnosis dataset, the proposed TTA cross-entropy outperformed the conventional cross-entropy and the TTA uncertainty in detecting label noise data in the label noise selection process. Moreover, the proposed NoiseMix not only outperformed the state-of-the-art methods in classification performance but also showed the most robustness to label noise in classifier learning.
2003.05864
Zhaoji Zhang
Zhaoji Zhang, Ying Li, Guanghui Song, Chau Yuen, and Yong Liang Guan
Random NOMA With Cross-Slot Successive Interference Cancellation Packet Recovery
accepted by IEEE Wireless Communications Letters, 5 pages, 4 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional power-domain non-orthogonal multiple access (NOMA) relies on precise power control, which requires real-time channel state information at transmitters. This requirement severely limits its application to future wireless communication systems. To address this problem, we consider NOMA without power allocation, where we exploit the random channel fading and opportunistically perform successive interference cancellation (SIC) detection. To mitigate the multi-user interference, we propose a random NOMA where users randomly transmit their data packets with a certain probability. Then a cross-slot SIC packet recovery scheme is proposed to recover transmitted data packets. We model the cross-slot SIC packet recovery as a Markov process, and provide a throughput analysis, based on which the sum rate is maximized by jointly optimizing the transmission probability and the encoding rate of users.
[ { "created": "Thu, 12 Mar 2020 15:56:06 GMT", "version": "v1" } ]
2020-03-13
[ [ "Zhang", "Zhaoji", "" ], [ "Li", "Ying", "" ], [ "Song", "Guanghui", "" ], [ "Yuen", "Chau", "" ], [ "Guan", "Yong Liang", "" ] ]
Conventional power-domain non-orthogonal multiple access (NOMA) relies on precise power control, which requires real-time channel state information at transmitters. This requirement severely limits its application to future wireless communication systems. To address this problem, we consider NOMA without power allocation, where we exploit the random channel fading and opportunistically perform successive interference cancellation (SIC) detection. To mitigate the multi-user interference, we propose a random NOMA where users randomly transmit their data packets with a certain probability. Then a cross-slot SIC packet recovery scheme is proposed to recover transmitted data packets. We model the cross-slot SIC packet recovery as a Markov process, and provide a throughput analysis, based on which the sum rate is maximized by jointly optimizing the transmission probability and the encoding rate of users.
1703.08985
Michele Polese
Michele Polese, Rittwik Jana, Michele Zorzi
TCP in 5G mmWave Networks: Link Level Retransmissions and MP-TCP
6 pages, 11 figures, accepted for presentation at the 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
null
10.1109/INFCOMW.2017.8116400
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MmWave communications, one of the cornerstones of future 5G mobile networks, are characterized at the same time by a potential multi-gigabit capacity and by a very dynamic channel, sensitive to blockage, wide fluctuations in the received signal quality, and possibly also sudden link disruption. While the performance of physical and MAC layer schemes that address these issues has been thoroughly investigated in the literature, the complex interactions between mmWave links and transport layer protocols such as TCP are still relatively unexplored. This paper uses the ns-3 mmWave module, with its channel model based on real measurements in New York City, to analyze the performance of the Linux TCP/IP stack (i) with and without link-layer retransmissions, showing that they are fundamental to reach a high TCP throughput on mmWave links and (ii) with Multipath TCP (MP-TCP) over multiple LTE and mmWave links, illustrating which are the throughput-optimal combinations of secondary paths and congestion control algorithms in different conditions.
[ { "created": "Mon, 27 Mar 2017 09:50:20 GMT", "version": "v1" } ]
2018-09-06
[ [ "Polese", "Michele", "" ], [ "Jana", "Rittwik", "" ], [ "Zorzi", "Michele", "" ] ]
MmWave communications, one of the cornerstones of future 5G mobile networks, are characterized at the same time by a potential multi-gigabit capacity and by a very dynamic channel, sensitive to blockage, wide fluctuations in the received signal quality, and possibly also sudden link disruption. While the performance of physical and MAC layer schemes that address these issues has been thoroughly investigated in the literature, the complex interactions between mmWave links and transport layer protocols such as TCP are still relatively unexplored. This paper uses the ns-3 mmWave module, with its channel model based on real measurements in New York City, to analyze the performance of the Linux TCP/IP stack (i) with and without link-layer retransmissions, showing that they are fundamental to reach a high TCP throughput on mmWave links and (ii) with Multipath TCP (MP-TCP) over multiple LTE and mmWave links, illustrating which are the throughput-optimal combinations of secondary paths and congestion control algorithms in different conditions.
2210.16947
Mohamed Suliman
Mohamed Suliman, Douglas Leith
Two Models are Better than One: Federated Learning Is Not Private For Google GBoard Next Word Prediction
ESORICS 2023
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper we present new attacks against federated learning when used to train natural language text models. We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g. when sending text messages, can be recovered with high accuracy under a wide range of conditions, and that counter-measures such as the use of mini-batches and the addition of local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use.
[ { "created": "Sun, 30 Oct 2022 20:58:34 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 21:05:32 GMT", "version": "v2" } ]
2023-10-11
[ [ "Suliman", "Mohamed", "" ], [ "Leith", "Douglas", "" ] ]
In this paper we present new attacks against federated learning when used to train natural language text models. We illustrate the effectiveness of the attacks against the next word prediction model used in Google's GBoard app, a widely used mobile keyboard app that has been an early adopter of federated learning for production use. We demonstrate that the words a user types on their mobile handset, e.g. when sending text messages, can be recovered with high accuracy under a wide range of conditions and that counter-measures such as the use of mini-batches and adding local noise are ineffective. We also show that the word order (and so the actual sentences typed) can be reconstructed with high fidelity. This raises obvious privacy concerns, particularly since GBoard is in production use.
2305.10736
Chenhe Dong
Chenhe Dong, Yuexiang Xie, Yaliang Li, Ying Shen
Counterfactual Debiasing for Generating Factually Consistent Text Summaries
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite substantial progress in abstractive text summarization to generate fluent and informative texts, the factual inconsistency in the generated summaries remains an important yet challenging problem to be solved. In this paper, we construct causal graphs for abstractive text summarization and identify the intrinsic causes of the factual inconsistency, i.e., the language bias and irrelevancy bias, and further propose a debiasing framework, named CoFactSum, to alleviate the causal effects of these biases by counterfactual estimation. Specifically, the proposed CoFactSum provides two counterfactual estimation strategies, i.e., Explicit Counterfactual Masking with an explicit dynamic masking strategy, and Implicit Counterfactual Training with an implicit discriminative cross-attention mechanism. Meanwhile, we design a Debiasing Degree Adjustment mechanism to dynamically adapt the debiasing degree at each decoding step. Extensive experiments on two widely-used summarization datasets demonstrate the effectiveness of CoFactSum in enhancing the factual consistency of generated summaries compared with several baselines.
[ { "created": "Thu, 18 May 2023 06:15:45 GMT", "version": "v1" } ]
2023-05-19
[ [ "Dong", "Chenhe", "" ], [ "Xie", "Yuexiang", "" ], [ "Li", "Yaliang", "" ], [ "Shen", "Ying", "" ] ]
Despite substantial progress in abstractive text summarization to generate fluent and informative texts, the factual inconsistency in the generated summaries remains an important yet challenging problem to be solved. In this paper, we construct causal graphs for abstractive text summarization and identify the intrinsic causes of the factual inconsistency, i.e., the language bias and irrelevancy bias, and further propose a debiasing framework, named CoFactSum, to alleviate the causal effects of these biases by counterfactual estimation. Specifically, the proposed CoFactSum provides two counterfactual estimation strategies, i.e., Explicit Counterfactual Masking with an explicit dynamic masking strategy, and Implicit Counterfactual Training with an implicit discriminative cross-attention mechanism. Meanwhile, we design a Debiasing Degree Adjustment mechanism to dynamically adapt the debiasing degree at each decoding step. Extensive experiments on two widely-used summarization datasets demonstrate the effectiveness of CoFactSum in enhancing the factual consistency of generated summaries compared with several baselines.
1810.04783
Gopal Krishna Kamath
Sreelakshmi Manjunath, Gopal Krishna Kamath and Gaurav Raina
Stability, convergence, and limit cycles in some human physiological processes
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models for physiological processes aid qualitative understanding of the impact of various parameters on the underlying process. We analyse two such models for human physiological processes: the Mackey-Glass and the Lasota equations, which model the change in the concentration of blood cells in the human body. We first study the local stability of these models, and derive bounds on various model parameters and the feedback delay for the concentration to equilibrate. We then deduce conditions for non-oscillatory convergence of the solutions, which could ensure that the blood cell concentration does not oscillate. Further, we define the convergence characteristics of the solutions which govern the rate at which the concentration equilibrates when the system is stable. Owing to the possibility that physiological parameters can seldom be estimated precisely, we also derive bounds for robust stability\textemdash which enable one to ensure that the blood cell concentration equilibrates despite parametric uncertainty. We also highlight that when the necessary and sufficient condition for local stability is violated, the system transits into instability via a Hopf bifurcation, leading to limit cycles in the blood cell concentration. We then outline a framework to characterise the type of the Hopf bifurcation and determine the asymptotic orbital stability of limit cycles. The analysis is complemented with numerical examples, stability charts and bifurcation diagrams. The insights into the dynamical properties of the mathematical models may serve to guide the study of dynamical diseases.
[ { "created": "Tue, 9 Oct 2018 12:47:59 GMT", "version": "v1" } ]
2018-10-12
[ [ "Manjunath", "Sreelakshmi", "" ], [ "Kamath", "Gopal Krishna", "" ], [ "Raina", "Gaurav", "" ] ]
Mathematical models for physiological processes aid qualitative understanding of the impact of various parameters on the underlying process. We analyse two such models for human physiological processes: the Mackey-Glass and the Lasota equations, which model the change in the concentration of blood cells in the human body. We first study the local stability of these models, and derive bounds on various model parameters and the feedback delay for the concentration to equilibrate. We then deduce conditions for non-oscillatory convergence of the solutions, which could ensure that the blood cell concentration does not oscillate. Further, we define the convergence characteristics of the solutions which govern the rate at which the concentration equilibrates when the system is stable. Owing to the possibility that physiological parameters can seldom be estimated precisely, we also derive bounds for robust stability\textemdash which enable one to ensure that the blood cell concentration equilibrates despite parametric uncertainty. We also highlight that when the necessary and sufficient condition for local stability is violated, the system transits into instability via a Hopf bifurcation, leading to limit cycles in the blood cell concentration. We then outline a framework to characterise the type of the Hopf bifurcation and determine the asymptotic orbital stability of limit cycles. The analysis is complemented with numerical examples, stability charts and bifurcation diagrams. The insights into the dynamical properties of the mathematical models may serve to guide the study of dynamical diseases.
2405.08238
Katie Seaborn
Takao Fujii, Katie Seaborn, Madeleine Steeds
Silver-Tongued and Sundry: Exploring Intersectional Pronouns with ChatGPT
Honorable Mention award (top 5%) at CHI '24
CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems (2024), Article No. 511, 1-14
10.1145/3613904.3642303
null
cs.HC cs.AI cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
ChatGPT is a conversational agent built on a large language model. Trained on a significant portion of human output, ChatGPT can mimic people to a degree. As such, we need to consider what social identities ChatGPT simulates (or can be designed to simulate). In this study, we explored the case of identity simulation through Japanese first-person pronouns, which are tightly connected to social identities in intersectional ways, i.e., intersectional pronouns. We conducted a controlled online experiment where people from two regions in Japan (Kanto and Kinki) witnessed interactions with ChatGPT using ten sets of first-person pronouns. We discovered that pronouns alone can evoke perceptions of social identities in ChatGPT at the intersections of gender, age, region, and formality, with caveats. This work highlights the importance of pronoun use for social identity simulation, provides a language-based methodology for culturally-sensitive persona development, and advances the potential of intersectional identities in intelligent agents.
[ { "created": "Mon, 13 May 2024 23:38:50 GMT", "version": "v1" } ]
2024-05-15
[ [ "Fujii", "Takao", "" ], [ "Seaborn", "Katie", "" ], [ "Steeds", "Madeleine", "" ] ]
ChatGPT is a conversational agent built on a large language model. Trained on a significant portion of human output, ChatGPT can mimic people to a degree. As such, we need to consider what social identities ChatGPT simulates (or can be designed to simulate). In this study, we explored the case of identity simulation through Japanese first-person pronouns, which are tightly connected to social identities in intersectional ways, i.e., intersectional pronouns. We conducted a controlled online experiment where people from two regions in Japan (Kanto and Kinki) witnessed interactions with ChatGPT using ten sets of first-person pronouns. We discovered that pronouns alone can evoke perceptions of social identities in ChatGPT at the intersections of gender, age, region, and formality, with caveats. This work highlights the importance of pronoun use for social identity simulation, provides a language-based methodology for culturally-sensitive persona development, and advances the potential of intersectional identities in intelligent agents.
1210.3846
Igor Konnov
Annu John, Igor Konnov, Ulrich Schmid, Helmut Veith, Josef Widder
Counter Attack on Byzantine Generals: Parameterized Model Checking of Fault-tolerant Distributed Algorithms
null
null
null
null
cs.LO cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an automated parameterized verification method for fault-tolerant distributed algorithms (FTDA). FTDAs are parameterized by both the number of processes and the assumed maximum number of Byzantine faulty processes. At the center of our technique is a parametric interval abstraction (PIA) where the interval boundaries are arithmetic expressions over parameters. Using PIA for both data abstraction and a new form of counter abstraction, we reduce the parameterized problem to finite-state model checking. We demonstrate the practical feasibility of our method by verifying several variants of the well-known distributed algorithm by Srikanth and Toueg. Our semi-decision procedures are complemented and motivated by an undecidability proof for FTDA verification which holds even in the absence of interprocess communication. To the best of our knowledge, this is the first paper to achieve parameterized automated verification of Byzantine FTDA.
[ { "created": "Sun, 14 Oct 2012 21:31:23 GMT", "version": "v1" }, { "created": "Sun, 3 Feb 2013 19:26:53 GMT", "version": "v2" } ]
2013-02-05
[ [ "John", "Annu", "" ], [ "Konnov", "Igor", "" ], [ "Schmid", "Ulrich", "" ], [ "Veith", "Helmut", "" ], [ "Widder", "Josef", "" ] ]
We introduce an automated parameterized verification method for fault-tolerant distributed algorithms (FTDA). FTDAs are parameterized by both the number of processes and the assumed maximum number of Byzantine faulty processes. At the center of our technique is a parametric interval abstraction (PIA) where the interval boundaries are arithmetic expressions over parameters. Using PIA for both data abstraction and a new form of counter abstraction, we reduce the parameterized problem to finite-state model checking. We demonstrate the practical feasibility of our method by verifying several variants of the well-known distributed algorithm by Srikanth and Toueg. Our semi-decision procedures are complemented and motivated by an undecidability proof for FTDA verification which holds even in the absence of interprocess communication. To the best of our knowledge, this is the first paper to achieve parameterized automated verification of Byzantine FTDA.
1912.07319
Micha{\l} Idzik
Micha{\l} Idzik
Multi-Objective Evolutionary Algorithms platform with support for flexible hybridization tools
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Working with complex, high-level MOEA meta-models such as Multiobjective Optimization Hierarchic Genetic Strategy (MO-mHGS) with multi-deme support usually requires a dedicated implementation and configuration for each internal (single-deme) algorithm variant. If we generalize the meta-model, we can simplify the whole simulation process and bind any internal algorithm (we denote it as a driver) without providing redundant meta-model implementations. This idea has become the foundation of the Evogil platform. Our aim was to allow constructing custom hybrid models or combining existing solutions in a runtime simulation environment. We define a hybrid solution as a composition of a meta-model and a driver (or multiple drivers). The meta-model uses drivers to perform evolutionary calculations and process their results. Moreover, Evogil provides a set of ready-made solutions divided into two groups (multi-deme meta-models and single-deme drivers), as well as processing tools (quality metrics, statistics, and plotting scripts), simulation management, and a results persistence layer.
[ { "created": "Mon, 16 Dec 2019 12:32:21 GMT", "version": "v1" } ]
2019-12-17
[ [ "Idzik", "Michał", "" ] ]
Working with complex, high-level MOEA meta-models such as Multiobjective Optimization Hierarchic Genetic Strategy (MO-mHGS) with multi-deme support usually requires a dedicated implementation and configuration for each internal (single-deme) algorithm variant. If we generalize the meta-model, we can simplify the whole simulation process and bind any internal algorithm (we denote it as a driver) without providing redundant meta-model implementations. This idea has become the foundation of the Evogil platform. Our aim was to allow constructing custom hybrid models or combining existing solutions in a runtime simulation environment. We define a hybrid solution as a composition of a meta-model and a driver (or multiple drivers). The meta-model uses drivers to perform evolutionary calculations and process their results. Moreover, Evogil provides a set of ready-made solutions divided into two groups (multi-deme meta-models and single-deme drivers), as well as processing tools (quality metrics, statistics, and plotting scripts), simulation management, and a results persistence layer.
2007.08860
Rachmad Vidya Wicaksana Putra
Rachmad Vidya Wicaksana Putra, Muhammad Shafique
FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks
To appear at the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE-TCAD), as part of the ESWEEK-TCAD Special Issue, September 2020
null
10.1109/TCAD.2020.3013049
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking Neural Networks (SNNs) are gaining interest due to their event-driven processing, which potentially enables low-power/energy computation on hardware platforms, while offering unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, thereby making them difficult to deploy on embedded systems, for instance on battery-powered mobile devices and IoT Edge nodes. Towards this, we propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing, with unsupervised learning capability while maintaining accuracy. It is achieved by (1) reducing the computational requirements of neuronal and STDP operations, (2) improving the accuracy of STDP-based learning, (3) compressing the SNN through a fixed-point quantization, and (4) incorporating the memory and energy requirements in the optimization process. FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity. To improve the accuracy of learning, FSpiNN employs timestep-based synaptic weight updates, and adaptively determines the STDP potentiation factor and the effective inhibition strength. The experimental results show that, as compared to the state-of-the-art work, FSpiNN achieves 7.5x memory saving, and improves the energy-efficiency by 3.5x on average for training and by 1.8x on average for inference, across MNIST and Fashion MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices/embedded systems.
[ { "created": "Fri, 17 Jul 2020 09:40:26 GMT", "version": "v1" } ]
2023-03-06
[ [ "Putra", "Rachmad Vidya Wicaksana", "" ], [ "Shafique", "Muhammad", "" ] ]
Spiking Neural Networks (SNNs) are gaining interest due to their event-driven processing, which potentially enables low-power/energy computation on hardware platforms, while offering unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, thereby making them difficult to deploy on embedded systems, for instance on battery-powered mobile devices and IoT Edge nodes. Towards this, we propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing, with unsupervised learning capability while maintaining accuracy. It is achieved by (1) reducing the computational requirements of neuronal and STDP operations, (2) improving the accuracy of STDP-based learning, (3) compressing the SNN through a fixed-point quantization, and (4) incorporating the memory and energy requirements in the optimization process. FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity. To improve the accuracy of learning, FSpiNN employs timestep-based synaptic weight updates, and adaptively determines the STDP potentiation factor and the effective inhibition strength. The experimental results show that, as compared to the state-of-the-art work, FSpiNN achieves 7.5x memory saving, and improves the energy-efficiency by 3.5x on average for training and by 1.8x on average for inference, across MNIST and Fashion MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices/embedded systems.
2102.03011
Oliver Wang
Felix Klose and Oliver Wang and Jean-Charles Bazin and Marcus Magnor and Alexander Sorkine-Hornung
Sampling Based Scene-Space Video Processing
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this "scene-space" information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises due to most scene points being visible multiple times across many frames of video. Based on this observation, we propose a novel pixel gathering and filtering approach. The gathering step is general and collects pixel samples in scene-space, while the filtering step is application-specific and computes a desired output video from the gathered sample sets. Our approach is easily parallelizable and has been implemented on GPU, allowing us to take full advantage of large volumes of video data and facilitating practical runtimes on HD video using a standard desktop computer. Our generic scene-space formulation is able to comprehensively describe a multitude of video processing applications such as denoising, deblurring, super resolution, object removal, computational shutter functions, and other scene-space camera effects. We present results for various casually captured, hand-held, moving, compressed, monocular videos depicting challenging scenes recorded in uncontrolled environments.
[ { "created": "Fri, 5 Feb 2021 05:55:04 GMT", "version": "v1" } ]
2021-02-08
[ [ "Klose", "Felix", "" ], [ "Wang", "Oliver", "" ], [ "Bazin", "Jean-Charles", "" ], [ "Magnor", "Marcus", "" ], [ "Sorkine-Hornung", "Alexander", "" ] ]
Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this "scene-space" information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises due to most scene points being visible multiple times across many frames of video. Based on this observation, we propose a novel pixel gathering and filtering approach. The gathering step is general and collects pixel samples in scene-space, while the filtering step is application-specific and computes a desired output video from the gathered sample sets. Our approach is easily parallelizable and has been implemented on GPU, allowing us to take full advantage of large volumes of video data and facilitating practical runtimes on HD video using a standard desktop computer. Our generic scene-space formulation is able to comprehensively describe a multitude of video processing applications such as denoising, deblurring, super resolution, object removal, computational shutter functions, and other scene-space camera effects. We present results for various casually captured, hand-held, moving, compressed, monocular videos depicting challenging scenes recorded in uncontrolled environments.
2205.14806
Mayra Samaniego Mrs
Mayra Samaniego
Data Trust and IoT
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
People's IoT surroundings have become valuable information sources that can positively impact individuals and society. A user's IoT data can be used for different purposes, for instance research and improvement of public services. However, individuals lack the governance power to share their IoT data. Data trust is a concept that brings opportunities to address data sharing in IoT. This research reviews the idea of data trust. Then, we review IoT and its unique characteristics that make data trust a challenge. We further discuss blockchain technology and how it can be used to enable data trust in IoT. Finally, we introduce a blockchain-based solution for data trust in IoT.
[ { "created": "Mon, 30 May 2022 02:05:36 GMT", "version": "v1" } ]
2022-05-31
[ [ "Samaniego", "Mayra", "" ] ]
People's IoT surroundings have become valuable information sources that can positively impact individuals and society. A user's IoT data can be used for different purposes, for instance research and improvement of public services. However, individuals lack the governance power to share their IoT data. Data trust is a concept that brings opportunities to address data sharing in IoT. This research reviews the idea of data trust. Then, we review IoT and its unique characteristics that make data trust a challenge. We further discuss blockchain technology and how it can be used to enable data trust in IoT. Finally, we introduce a blockchain-based solution for data trust in IoT.
1905.01752
Shivangi Srivastava
Shivangi Srivastava and John E. Vargas-Mu\~noz and Devis Tuia
Understanding urban landuse from the above and ground perspectives: a deep learning, multimodal solution
null
Remote Sensing of Environment, 228, pages 129 - 143, 2019
10.1016/j.rse.2019.04.014
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Landuse characterization is important for urban planning. It is traditionally performed with field surveys or manual photo interpretation, two practices that are time-consuming and labor-intensive. Therefore, we aim to automate landuse mapping at the urban-object level with a deep learning approach based on data from multiple sources (or modalities). We consider two image modalities: overhead imagery from Google Maps and ensembles of ground-based pictures (side-views) per urban-object from Google Street View (GSV). These modalities bring complementary visual information pertaining to the urban-objects. We propose an end-to-end trainable model, which uses OpenStreetMap annotations as labels. The model can accommodate a variable number of GSV pictures for the ground-based branch and can also function in the absence of ground pictures at prediction time. We test the effectiveness of our model over the area of \^Ile-de-France, France, and test its generalization abilities on a set of urban-objects from the city of Nantes, France. Our proposed multimodal Convolutional Neural Network achieves considerably higher accuracies than methods that use a single image modality, making it suitable for automatic landuse map updates. Additionally, our approach could be easily scaled to multiple cities, because it is based on data sources available for many cities worldwide.
[ { "created": "Sun, 5 May 2019 21:36:59 GMT", "version": "v1" } ]
2019-05-07
[ [ "Srivastava", "Shivangi", "" ], [ "Vargas-Muñoz", "John E.", "" ], [ "Tuia", "Devis", "" ] ]
Landuse characterization is important for urban planning. It is traditionally performed with field surveys or manual photo interpretation, two practices that are time-consuming and labor-intensive. Therefore, we aim to automate landuse mapping at the urban-object level with a deep learning approach based on data from multiple sources (or modalities). We consider two image modalities: overhead imagery from Google Maps and ensembles of ground-based pictures (side-views) per urban-object from Google Street View (GSV). These modalities bring complementary visual information pertaining to the urban-objects. We propose an end-to-end trainable model, which uses OpenStreetMap annotations as labels. The model can accommodate a variable number of GSV pictures for the ground-based branch and can also function in the absence of ground pictures at prediction time. We test the effectiveness of our model over the area of \^Ile-de-France, France, and test its generalization abilities on a set of urban-objects from the city of Nantes, France. Our proposed multimodal Convolutional Neural Network achieves considerably higher accuracies than methods that use a single image modality, making it suitable for automatic landuse map updates. Additionally, our approach could be easily scaled to multiple cities, because it is based on data sources available for many cities worldwide.
2207.08997
Neil Nie
Neil Nie, Samir Yitzhak Gadre, Kiana Ehsani, Shuran Song
Structure from Action: Learning Interactions for Articulated Object 3D Structure Discovery
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce Structure from Action (SfA), a framework to discover 3D part geometry and joint parameters of unseen articulated objects via a sequence of inferred interactions. Our key insight is that 3D interaction and perception should be considered in conjunction to construct 3D articulated CAD models, especially for categories not seen during training. By selecting informative interactions, SfA discovers parts and reveals occluded surfaces, like the inside of a closed drawer. By aggregating visual observations in 3D, SfA accurately segments multiple parts, reconstructs part geometry, and infers all joint parameters in a canonical coordinate frame. Our experiments demonstrate that a SfA model trained in simulation can generalize to many unseen object categories with diverse structures and to real-world objects. Empirically, SfA outperforms a pipeline of state-of-the-art components by 25.4 3D IoU percentage points on unseen categories, while matching already performant joint estimation baselines.
[ { "created": "Tue, 19 Jul 2022 00:27:36 GMT", "version": "v1" }, { "created": "Fri, 7 Apr 2023 16:49:33 GMT", "version": "v2" } ]
2023-04-10
[ [ "Nie", "Neil", "" ], [ "Gadre", "Samir Yitzhak", "" ], [ "Ehsani", "Kiana", "" ], [ "Song", "Shuran", "" ] ]
We introduce Structure from Action (SfA), a framework to discover 3D part geometry and joint parameters of unseen articulated objects via a sequence of inferred interactions. Our key insight is that 3D interaction and perception should be considered in conjunction to construct 3D articulated CAD models, especially for categories not seen during training. By selecting informative interactions, SfA discovers parts and reveals occluded surfaces, like the inside of a closed drawer. By aggregating visual observations in 3D, SfA accurately segments multiple parts, reconstructs part geometry, and infers all joint parameters in a canonical coordinate frame. Our experiments demonstrate that a SfA model trained in simulation can generalize to many unseen object categories with diverse structures and to real-world objects. Empirically, SfA outperforms a pipeline of state-of-the-art components by 25.4 3D IoU percentage points on unseen categories, while matching already performant joint estimation baselines.
2307.13679
Luca Bennett
Luca A. Bennett and Zahraa S. Abdallah
RED CoMETS: An ensemble classifier for symbolically represented multivariate time series
Accepted by AALTD 2023; fixed typos and minor error in Table 2
In proceedings of the 8th Workshop on Advanced Analytics and Learning on Temporal Data (AALTD 2023), pages 76-91, 2023
10.1007/978-3-031-49896-1_6
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multivariate time series classification is a rapidly growing research field with practical applications in finance, healthcare, engineering, and more. The complexity of classifying multivariate time series data arises from its high dimensionality, temporal dependencies, and varying lengths. This paper introduces a novel ensemble classifier called RED CoMETS (Random Enhanced Co-eye for Multivariate Time Series), which addresses these challenges. RED CoMETS builds upon the success of Co-eye, an ensemble classifier specifically designed for symbolically represented univariate time series, and extends its capabilities to handle multivariate data. The performance of RED CoMETS is evaluated on benchmark datasets from the UCR archive, where it demonstrates competitive accuracy when compared to state-of-the-art techniques in multivariate settings. Notably, it achieves the highest reported accuracy in the literature for the 'HandMovementDirection' dataset. Moreover, the proposed method significantly reduces computation time compared to Co-eye, making it an efficient and effective choice for multivariate time series classification.
[ { "created": "Tue, 25 Jul 2023 17:36:34 GMT", "version": "v1" }, { "created": "Sat, 16 Sep 2023 20:11:40 GMT", "version": "v2" } ]
2024-02-06
[ [ "Bennett", "Luca A.", "" ], [ "Abdallah", "Zahraa S.", "" ] ]
Multivariate time series classification is a rapidly growing research field with practical applications in finance, healthcare, engineering, and more. The complexity of classifying multivariate time series data arises from its high dimensionality, temporal dependencies, and varying lengths. This paper introduces a novel ensemble classifier called RED CoMETS (Random Enhanced Co-eye for Multivariate Time Series), which addresses these challenges. RED CoMETS builds upon the success of Co-eye, an ensemble classifier specifically designed for symbolically represented univariate time series, and extends its capabilities to handle multivariate data. The performance of RED CoMETS is evaluated on benchmark datasets from the UCR archive, where it demonstrates competitive accuracy when compared to state-of-the-art techniques in multivariate settings. Notably, it achieves the highest reported accuracy in the literature for the 'HandMovementDirection' dataset. Moreover, the proposed method significantly reduces computation time compared to Co-eye, making it an efficient and effective choice for multivariate time series classification.
2107.04011
Jawad Haqbeen
J. Haqbeen, T. Ito, S. Sahab, R. Hadfi, T. Sato, S. Okuhara
Meeting the SDGs : Enabling the Goals by Cooperation with Crowd using a Conversational AI Platform
7 pages, 6 figures, 1 table, To appear as a conference paper at KICSS 2020
null
null
null
cs.CY cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we report on a large-scale online discussion with 1099 citizens on the Afghanistan Sustainable Development Goals.
[ { "created": "Wed, 9 Jun 2021 04:14:19 GMT", "version": "v1" } ]
2021-07-09
[ [ "Haqbeen", "J.", "" ], [ "Ito", "T.", "" ], [ "Sahab", "S.", "" ], [ "Hadfi", "R.", "" ], [ "Sato", "T.", "" ], [ "Okuhara", "S.", "" ] ]
In this paper, we report on a large-scale online discussion with 1099 citizens on the Afghanistan Sustainable Development Goals.
1704.02703
Lei Bi
Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng
Automatic Liver Lesion Detection using Cascaded Deep Residual Networks
Submission for 2017 ISBI LiTS Challenge
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic segmentation of liver lesions is a fundamental requirement for the creation of computer-aided diagnosis (CAD) and decision support systems (CDS). Traditional segmentation approaches depend heavily upon hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems, primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented. However, FCNs based on a 16-layer VGGNet architecture have limited capacity to add additional layers; it is therefore challenging for such FCNs to learn more discriminative features among different classes. In this study, we overcome these limitations by using deep residual networks (ResNets) to segment liver lesions. ResNets contain skip connections between convolutional layers, which solve the problem of training-accuracy degradation in very deep networks and thereby enable the use of additional layers for learning more discriminative features. In addition, we achieve more precise boundary definitions through a novel cascaded ResNet architecture with multi-scale fusion that gradually learns and infers the boundaries of both the liver and the liver lesions. Our proposed method achieved 4th place in the ISBI 2017 Liver Tumor Segmentation Challenge by the submission deadline.
[ { "created": "Mon, 10 Apr 2017 04:05:50 GMT", "version": "v1" }, { "created": "Sun, 21 May 2017 02:58:40 GMT", "version": "v2" } ]
2017-05-23
[ [ "Bi", "Lei", "" ], [ "Kim", "Jinman", "" ], [ "Kumar", "Ashnil", "" ], [ "Feng", "Dagan", "" ] ]
Automatic segmentation of liver lesions is a fundamental requirement for the creation of computer-aided diagnosis (CAD) and decision support systems (CDS). Traditional segmentation approaches depend heavily upon hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems, primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented. However, FCNs based on a 16-layer VGGNet architecture have limited capacity to add additional layers; it is therefore challenging for such FCNs to learn more discriminative features among different classes. In this study, we overcome these limitations by using deep residual networks (ResNets) to segment liver lesions. ResNets contain skip connections between convolutional layers, which solve the problem of training-accuracy degradation in very deep networks and thereby enable the use of additional layers for learning more discriminative features. In addition, we achieve more precise boundary definitions through a novel cascaded ResNet architecture with multi-scale fusion that gradually learns and infers the boundaries of both the liver and the liver lesions. Our proposed method achieved 4th place in the ISBI 2017 Liver Tumor Segmentation Challenge by the submission deadline.
1208.3205
Manas Gaur
Manas Gaur
Software Security analysis, static and dynamic testing in java and C environment, a comparative study
the research paper consists of 11 figures and 7 tabular comparison
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on the buffer overflow anomaly that occurs in major source codes written in various programming languages. It describes how to improve code and increase its strength to withstand security theft at vulnerable points in the code. The main language used is Java, regarded as one of the most object-oriented languages, yet it still gives rise to many errors such as stack overflow and illegal or inappropriate method overriding. We used Java-specific tools to test how weak points in the code can be rectified before compilation. Bytecode theft is difficult to prevent, so it is better to eliminate vulnerabilities in the plain Java code itself. The tools used in this research are PMD (Programming Mistake Detector), which helps detect lines of code that will produce errors in the near future, such as defective hashCode (memory map) overriding that prevents the Java code from functioning correctly; FindBugs, which lets the tester analyze weak points in the code such as infinite loops, unsynchronized waits, deadlock situations, and null referencing and dereferencing; and, underpinning the above tools, JaCoCo code coverage analysis, used to detect unreachable parts and unused conditions of the code, which improves the space complexity and eases the clarification of errors. Through this paper, we design an algorithm to prevent the loss of data. The main audience is white-box testers, who might leave out essential lines of code such as index variables, infinite loops, and inappropriate hashCode implementations in the main source program. This algorithm serves to reduce the damage in case of buffer overflow.
[ { "created": "Wed, 15 Aug 2012 20:08:59 GMT", "version": "v1" } ]
2012-08-17
[ [ "Gaur", "Manas", "" ] ]
This paper focuses on the buffer overflow anomaly that occurs in major source codes written in various programming languages. It describes how to improve code and increase its strength to withstand security theft at vulnerable points in the code. The main language used is Java, regarded as one of the most object-oriented languages, yet it still gives rise to many errors such as stack overflow and illegal or inappropriate method overriding. We used Java-specific tools to test how weak points in the code can be rectified before compilation. Bytecode theft is difficult to prevent, so it is better to eliminate vulnerabilities in the plain Java code itself. The tools used in this research are PMD (Programming Mistake Detector), which helps detect lines of code that will produce errors in the near future, such as defective hashCode (memory map) overriding that prevents the Java code from functioning correctly; FindBugs, which lets the tester analyze weak points in the code such as infinite loops, unsynchronized waits, deadlock situations, and null referencing and dereferencing; and, underpinning the above tools, JaCoCo code coverage analysis, used to detect unreachable parts and unused conditions of the code, which improves the space complexity and eases the clarification of errors. Through this paper, we design an algorithm to prevent the loss of data. The main audience is white-box testers, who might leave out essential lines of code such as index variables, infinite loops, and inappropriate hashCode implementations in the main source program. This algorithm serves to reduce the damage in case of buffer overflow.
1912.06185
Himanshu Rai
Yichao Lu, Cheng Chang, Himanshu Rai, Guangwei Yu, Maksims Volkovs
Learning Effective Visual Relationship Detector on 1 GPU
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present our winning solution to the Open Images 2019 Visual Relationship challenge. This is the largest challenge of its kind to date, with nearly 9 million training images. The challenge task consists of detecting objects and identifying relationships between them in complex scenes. Our solution has three stages: first, an object detection model is fine-tuned for the challenge classes using a novel weight transfer approach. Then, spatio-semantic and visual relationship models are trained on candidate object pairs. Finally, features and model predictions are combined to generate the final relationship prediction. Throughout the challenge we focused on minimizing the hardware requirements of our architecture. Specifically, our weight transfer approach enables much faster optimization, allowing the entire architecture to be trained on a single GPU in under two days. In addition to efficient optimization, our approach also achieves superior accuracy, winning first place out of over 200 teams and outperforming the second-place team by over $5\%$ on the held-out private leaderboard.
[ { "created": "Thu, 12 Dec 2019 19:59:41 GMT", "version": "v1" } ]
2019-12-16
[ [ "Lu", "Yichao", "" ], [ "Chang", "Cheng", "" ], [ "Rai", "Himanshu", "" ], [ "Yu", "Guangwei", "" ], [ "Volkovs", "Maksims", "" ] ]
We present our winning solution to the Open Images 2019 Visual Relationship challenge. This is the largest challenge of its kind to date, with nearly 9 million training images. The challenge task consists of detecting objects and identifying relationships between them in complex scenes. Our solution has three stages: first, an object detection model is fine-tuned for the challenge classes using a novel weight transfer approach. Then, spatio-semantic and visual relationship models are trained on candidate object pairs. Finally, features and model predictions are combined to generate the final relationship prediction. Throughout the challenge we focused on minimizing the hardware requirements of our architecture. Specifically, our weight transfer approach enables much faster optimization, allowing the entire architecture to be trained on a single GPU in under two days. In addition to efficient optimization, our approach also achieves superior accuracy, winning first place out of over 200 teams and outperforming the second-place team by over $5\%$ on the held-out private leaderboard.
2403.19935
Daniel Oliveira Dantas
Artur Santos Nascimento, Valter Guilherme Silva de Souza, Daniel Oliveira Dantas, Beatriz Trinch\~ao Andrade
CP HDR: A feature point detection and description library for LDR and HDR images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In computer vision, characteristics refer to image regions with unique properties, such as corners, edges, textures, or areas with high contrast. These regions can be represented through feature points (FPs). FP detection and description are fundamental steps to many computer vision tasks. Most FP detection and description methods use low dynamic range (LDR) images, sufficient for most applications involving digital images. However, LDR images may have saturated pixels in scenes with extreme light conditions, which degrade FP detection. On the other hand, high dynamic range (HDR) images usually present a greater dynamic range but FP detection algorithms do not take advantage of all the information in such images. In this study, we present a systematic review of image detection and description algorithms that use HDR images as input. We developed a library called CP_HDR that implements the Harris corner detector, SIFT detector and descriptor, and two modifications of those algorithms specialized in HDR images, called SIFT for HDR (SfHDR) and Harris for HDR (HfHDR). Previous studies investigated the use of HDR images in FP detection, but we did not find studies investigating the use of HDR images in FP description. Using uniformity, repeatability rate, mean average precision, and matching rate metrics, we compared the performance of the CP_HDR algorithms using LDR and HDR images. We observed an increase in the uniformity of the distribution of FPs among the high-light, mid-light, and low-light areas of the images. The results show that using HDR images as input to detection algorithms improves performance and that SfHDR and HfHDR enhance FP description.
[ { "created": "Fri, 29 Mar 2024 02:42:22 GMT", "version": "v1" } ]
2024-04-01
[ [ "Nascimento", "Artur Santos", "" ], [ "de Souza", "Valter Guilherme Silva", "" ], [ "Dantas", "Daniel Oliveira", "" ], [ "Andrade", "Beatriz Trinchão", "" ] ]
In computer vision, characteristics refer to image regions with unique properties, such as corners, edges, textures, or areas with high contrast. These regions can be represented through feature points (FPs). FP detection and description are fundamental steps to many computer vision tasks. Most FP detection and description methods use low dynamic range (LDR) images, sufficient for most applications involving digital images. However, LDR images may have saturated pixels in scenes with extreme light conditions, which degrade FP detection. On the other hand, high dynamic range (HDR) images usually present a greater dynamic range but FP detection algorithms do not take advantage of all the information in such images. In this study, we present a systematic review of image detection and description algorithms that use HDR images as input. We developed a library called CP_HDR that implements the Harris corner detector, SIFT detector and descriptor, and two modifications of those algorithms specialized in HDR images, called SIFT for HDR (SfHDR) and Harris for HDR (HfHDR). Previous studies investigated the use of HDR images in FP detection, but we did not find studies investigating the use of HDR images in FP description. Using uniformity, repeatability rate, mean average precision, and matching rate metrics, we compared the performance of the CP_HDR algorithms using LDR and HDR images. We observed an increase in the uniformity of the distribution of FPs among the high-light, mid-light, and low-light areas of the images. The results show that using HDR images as input to detection algorithms improves performance and that SfHDR and HfHDR enhance FP description.
1206.1969
Iztok Fister
Iztok Fister Jr., Marjan Mernik, Iztok Fister, Dejan Hrn\v{c}i\v{c}
Implementation of EasyTime Formal Semantics using a LISA Compiler Generator
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A manual time-measuring tool in mass sporting competitions would be unimaginable nowadays, because many modern disciplines, such as IRONMAN, last a long time and therefore demand additional reliability. Moreover, automatic timing devices based on RFID technology have become cheaper. However, these devices cannot operate stand-alone, because they need a computer measuring system that is capable of processing incoming events, encoding the results, assigning them to the correct competitor, sorting the results according to the achieved times, and then providing a printout of the results. This article presents the domain-specific language EasyTime, which enables the controlling of an agent by writing the events within a database. It focuses, in particular, on the implementation of EasyTime with the LISA tool, which enables the automatic construction of compilers from language specifications using attribute grammars.
[ { "created": "Sat, 9 Jun 2012 20:10:16 GMT", "version": "v1" } ]
2012-06-12
[ [ "Fister", "Iztok", "Jr." ], [ "Mernik", "Marjan", "" ], [ "Fister", "Iztok", "" ], [ "Hrnčič", "Dejan", "" ] ]
A manual time-measuring tool in mass sporting competitions would be unimaginable nowadays, because many modern disciplines, such as IRONMAN, last a long time and therefore demand additional reliability. Moreover, automatic timing devices based on RFID technology have become cheaper. However, these devices cannot operate stand-alone, because they need a computer measuring system that is capable of processing incoming events, encoding the results, assigning them to the correct competitor, sorting the results according to the achieved times, and then providing a printout of the results. This article presents the domain-specific language EasyTime, which enables the controlling of an agent by writing the events within a database. It focuses, in particular, on the implementation of EasyTime with the LISA tool, which enables the automatic construction of compilers from language specifications using attribute grammars.
2003.03645
Nabiha Asghar
Nabiha Asghar, Ivan Kobyzev, Jesse Hoey, Pascal Poupart, and Muhammad Bilal Sheikh
Generating Emotionally Aligned Responses in Dialogues using Affect Control Theory
null
null
null
null
cs.CL cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art neural dialogue systems excel at syntactic and semantic modelling of language, but often have a hard time establishing emotional alignment with the human interactant during a conversation. In this work, we bring Affect Control Theory (ACT), a socio-mathematical model of emotions for human-human interactions, to the neural dialogue generation setting. ACT makes predictions about how humans respond to emotional stimuli in social situations. Due to this property, ACT and its derivative probabilistic models have been successfully deployed in several applications of Human-Computer Interaction, including empathetic tutoring systems, assistive healthcare devices and two-person social dilemma games. We investigate how ACT can be used to develop affect-aware neural conversational agents, which produce emotionally aligned responses to prompts and take into consideration the affective identities of the interactants.
[ { "created": "Sat, 7 Mar 2020 19:31:08 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2020 06:46:25 GMT", "version": "v2" } ]
2020-04-17
[ [ "Asghar", "Nabiha", "" ], [ "Kobyzev", "Ivan", "" ], [ "Hoey", "Jesse", "" ], [ "Poupart", "Pascal", "" ], [ "Sheikh", "Muhammad Bilal", "" ] ]
State-of-the-art neural dialogue systems excel at syntactic and semantic modelling of language, but often have a hard time establishing emotional alignment with the human interactant during a conversation. In this work, we bring Affect Control Theory (ACT), a socio-mathematical model of emotions for human-human interactions, to the neural dialogue generation setting. ACT makes predictions about how humans respond to emotional stimuli in social situations. Due to this property, ACT and its derivative probabilistic models have been successfully deployed in several applications of Human-Computer Interaction, including empathetic tutoring systems, assistive healthcare devices and two-person social dilemma games. We investigate how ACT can be used to develop affect-aware neural conversational agents, which produce emotionally aligned responses to prompts and take into consideration the affective identities of the interactants.
1809.00043
Bin Han
Bin Han, Antonio De Domenico, Ghina Dandachi, Anastasios Drosou, Dimitrios Tzovaras, Roberto Querio, Fabrizio Moggio, \"Omer Bulakci, Hans D. Schotten
Admission and Congestion Control for 5G Network Slicing
Submitted to 2018 IEEE Conference on Standards for Communications and Networking (CSCN)
2018 IEEE Conference on Standards for Communications and Networking (CSCN)
10.1109/CSCN.2018.8581773
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network slicing has been widely accepted as an essential feature of future 5th Generation (5G) mobile communication networks. Accounting for the potentially dense demand for network slices as a cloud service and the limited resources of mobile network operators (MNOs), efficient inter-slice management and orchestration plays a key role in 5G networks. This calls for advanced solutions for slice admission and congestion control. This paper proposes a novel approach to inter-slice control that copes well with existing pre-standardized 5G architectures.
[ { "created": "Fri, 31 Aug 2018 20:08:17 GMT", "version": "v1" } ]
2021-11-30
[ [ "Han", "Bin", "" ], [ "De Domenico", "Antonio", "" ], [ "Dandachi", "Ghina", "" ], [ "Drosou", "Anastasios", "" ], [ "Tzovaras", "Dimitrios", "" ], [ "Querio", "Roberto", "" ], [ "Moggio", "Fabrizio", "" ], [ "Bulakci", "Ömer", "" ], [ "Schotten", "Hans D.", "" ] ]
Network slicing has been widely accepted as an essential feature of future 5th Generation (5G) mobile communication networks. Accounting for the potentially dense demand for network slices as a cloud service and the limited resources of mobile network operators (MNOs), efficient inter-slice management and orchestration plays a key role in 5G networks. This calls for advanced solutions for slice admission and congestion control. This paper proposes a novel approach to inter-slice control that copes well with existing pre-standardized 5G architectures.
1302.1848
Delgado Lopez-Cozar emilio
Emilio Delgado Lopez-Cozar, Manuel Ramirez Sanchez
H Index of History journals published in Spain according to Google Scholar Metrics (2007-2011)
7 pages, 2 tables
null
null
EC3 Working Papers 10
cs.DL
http://creativecommons.org/licenses/by/3.0/
Google Scholar Metrics (GSM), which was recently launched in April 2012, features new bibliometric systems for gauging scientific journals by counting the number of citations obtained in Google Scholar. This way, it opens new possibilities for measuring journal impacts in the field of Humanities. The present article intends to evaluate the scope of this tool through analysing GSM searches, from the 5th through 6th of December 2012, of History journals published in Spain. In sum, 69 journals were identified, accounting for only 24% of the History journals published in Spain. The ranges of H index values for this field are so small that the ranking can no longer be said to show a discriminating potential. In the light of this, we would like to propose a change in the way Google Scholar Metrics is designed so that it could also accommodate production and citation patterns in the particular field of History, and, in a broader scope, in the area of Humanities as well.
[ { "created": "Thu, 7 Feb 2013 20:16:17 GMT", "version": "v1" }, { "created": "Wed, 20 Feb 2013 09:16:17 GMT", "version": "v2" } ]
2013-02-21
[ [ "Lopez-Cozar", "Emilio Delgado", "" ], [ "Sanchez", "Manuel Ramirez", "" ] ]
Google Scholar Metrics (GSM), which was recently launched in April 2012, features new bibliometric systems for gauging scientific journals by counting the number of citations obtained in Google Scholar. This way, it opens new possibilities for measuring journal impacts in the field of Humanities. The present article intends to evaluate the scope of this tool through analysing GSM searches, from the 5th through 6th of December 2012, of History journals published in Spain. In sum, 69 journals were identified, accounting for only 24% of the History journals published in Spain. The ranges of H index values for this field are so small that the ranking can no longer be said to show a discriminating potential. In the light of this, we would like to propose a change in the way Google Scholar Metrics is designed so that it could also accommodate production and citation patterns in the particular field of History, and, in a broader scope, in the area of Humanities as well.
1904.01784
Yuning Chai
Yuning Chai
Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams
ICCV 2019 Camera Ready + Supplementary
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in single-frame object detection and segmentation techniques have motivated a wide range of works to extend these methods to process video streams. In this paper, we explore the idea of hard attention aimed at latency-sensitive applications. Instead of reasoning about every frame separately, our method selects and processes only a small sub-window of the frame. Our technique then makes predictions for the full frame based on the sub-windows from previous frames and the update from the current sub-window. The latency reduction by this hard attention mechanism comes at the cost of degraded accuracy. We made two contributions to address this. First, we propose a specialized memory cell that recovers lost context when processing sub-windows. Secondly, we adopt a Q-learning-based policy training strategy that enables our approach to intelligently select the sub-windows such that the staleness in the memory hurts the performance the least. Our experiments suggest that our approach reduces the latency by approximately four times without significantly sacrificing the accuracy on the ImageNet VID video object detection dataset and the DAVIS video object segmentation dataset. We further demonstrate that we can reinvest the saved computation into other parts of the network, resulting in an accuracy increase at a comparable computational cost to the original system and beating other recently proposed state-of-the-art methods in the low latency range.
[ { "created": "Wed, 3 Apr 2019 05:58:42 GMT", "version": "v1" }, { "created": "Tue, 20 Aug 2019 17:11:31 GMT", "version": "v2" } ]
2019-08-21
[ [ "Chai", "Yuning", "" ] ]
Recent advances in single-frame object detection and segmentation techniques have motivated a wide range of works to extend these methods to process video streams. In this paper, we explore the idea of hard attention aimed at latency-sensitive applications. Instead of reasoning about every frame separately, our method selects and processes only a small sub-window of the frame. Our technique then makes predictions for the full frame based on the sub-windows from previous frames and the update from the current sub-window. The latency reduction by this hard attention mechanism comes at the cost of degraded accuracy. We made two contributions to address this. First, we propose a specialized memory cell that recovers lost context when processing sub-windows. Secondly, we adopt a Q-learning-based policy training strategy that enables our approach to intelligently select the sub-windows such that the staleness in the memory hurts the performance the least. Our experiments suggest that our approach reduces the latency by approximately four times without significantly sacrificing the accuracy on the ImageNet VID video object detection dataset and the DAVIS video object segmentation dataset. We further demonstrate that we can reinvest the saved computation into other parts of the network, resulting in an accuracy increase at a comparable computational cost to the original system and beating other recently proposed state-of-the-art methods in the low latency range.
2003.07311
Johannes C. Paetzold
Suprosanna Shit, Johannes C. Paetzold, Anjany Sekuboyina, Ivan Ezhov, Alexander Unger, Andrey Zhylka, Josien P. W. Pluim, Ulrich Bauer, Bjoern H. Menze
clDice -- A Novel Topology-Preserving Loss Function for Tubular Structure Segmentation
* The authors Suprosanna Shit and Johannes C. Paetzold contributed equally to the work
null
10.1109/CVPR46437.2021.01629
CVPR 2021
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate segmentation of tubular, network-like structures, such as vessels, neurons, or roads, is relevant to many fields of research. For such structures, the topology is their most important characteristic; particularly preserving connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (short clDice), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads and neurons (2D and 3D). Training on soft-clDice leads to segmentation with more accurate connectivity information, higher graph similarity, and better volumetric scores.
[ { "created": "Mon, 16 Mar 2020 16:27:49 GMT", "version": "v1" }, { "created": "Mon, 23 Mar 2020 20:45:16 GMT", "version": "v2" }, { "created": "Sun, 29 Mar 2020 22:46:43 GMT", "version": "v3" }, { "created": "Thu, 3 Dec 2020 19:53:43 GMT", "version": "v4" }, { "created": "Mon, 29 Mar 2021 13:36:28 GMT", "version": "v5" }, { "created": "Tue, 30 Mar 2021 11:51:21 GMT", "version": "v6" }, { "created": "Fri, 15 Jul 2022 10:39:38 GMT", "version": "v7" } ]
2022-07-18
[ [ "Shit", "Suprosanna", "" ], [ "Paetzold", "Johannes C.", "" ], [ "Sekuboyina", "Anjany", "" ], [ "Ezhov", "Ivan", "" ], [ "Unger", "Alexander", "" ], [ "Zhylka", "Andrey", "" ], [ "Pluim", "Josien P. W.", "" ], [ "Bauer", "Ulrich", "" ], [ "Menze", "Bjoern H.", "" ] ]
Accurate segmentation of tubular, network-like structures, such as vessels, neurons, or roads, is relevant to many fields of research. For such structures, the topology is their most important characteristic; particularly preserving connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (short clDice), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads and neurons (2D and 3D). Training on soft-clDice leads to segmentation with more accurate connectivity information, higher graph similarity, and better volumetric scores.
2002.11869
Anurag Sarkar
Anurag Sarkar, Zhihan Yang, Seth Cooper
Controllable Level Blending between Games using Variational Autoencoders
6 pages, 11 figures, Sixth Experimental AI in Games Workshop at AIIDE
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous work explored blending levels from existing games to create levels for a new game that mixes properties of the original games. In this paper, we use Variational Autoencoders (VAEs) for improving upon such techniques. VAEs are artificial neural networks that learn and use latent representations of datasets to generate novel outputs. We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games. We then use this space to generate level segments that combine properties of levels from both games. Moreover, by applying evolutionary search in the latent space, we evolve level segments satisfying specific constraints. We argue that these affordances make the VAE-based approach especially suitable for co-creative level design and compare its performance with similar generative models like the GAN and the VAE-GAN.
[ { "created": "Thu, 27 Feb 2020 01:38:35 GMT", "version": "v1" } ]
2020-02-28
[ [ "Sarkar", "Anurag", "" ], [ "Yang", "Zhihan", "" ], [ "Cooper", "Seth", "" ] ]
Previous work explored blending levels from existing games to create levels for a new game that mixes properties of the original games. In this paper, we use Variational Autoencoders (VAEs) for improving upon such techniques. VAEs are artificial neural networks that learn and use latent representations of datasets to generate novel outputs. We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games. We then use this space to generate level segments that combine properties of levels from both games. Moreover, by applying evolutionary search in the latent space, we evolve level segments satisfying specific constraints. We argue that these affordances make the VAE-based approach especially suitable for co-creative level design and compare its performance with similar generative models like the GAN and the VAE-GAN.
2204.04399
Hieu Hughes Le-Au
Rubab Hussain, Rigo Vargas, Hieu Hughes Le-Au, Will Gass, Melissa Fenn, Briseyda Serna-Marquez, Jongwook Woo
Crime Patterns in Los Angeles County Before and After Covid19 (2018-2021)
Keywords: Pandemic, Crime Rate Los Angeles, Data Analysis, Data Science, Predictive Analysis
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The objective of our research is to present the change in crime rates in Los Angeles post-Covid19. Using data analysis with Geo-Mapping, bubble, Marimekko, and time series charts, we can illustrate which areas have the highest crime rate and how it has changed. Through regression modeling, we can interpret which locations may also have a correlation to crime versus income, race, type of crime, and gender. The story will help to uncover whether the areas associated with crime are due to demographic or income variance. In showing the details of crimes in Los Angeles along with the factors at play, we hope to see a compelling relationship between crime rates and recent events from 2020 to the present, along with changes in crime type trends during these periods. We use Excel to clean the data so that SAP SAC can model it effectively, as well as resources from other studies for comparison.
[ { "created": "Sat, 9 Apr 2022 06:03:05 GMT", "version": "v1" } ]
2022-04-12
[ [ "Hussain", "Rubab", "" ], [ "Vargas", "Rigo", "" ], [ "Le-Au", "Hieu Hughes", "" ], [ "Gass", "Will", "" ], [ "Fenn", "Melissa", "" ], [ "Serna-Marquez", "Briseyda", "" ], [ "Woo", "Jongwook", "" ] ]
The objective of our research is to present the change in crime rates in Los Angeles post-Covid19. Using data analysis with Geo-Mapping, bubble, Marimekko, and time series charts, we can illustrate which areas have the highest crime rate and how it has changed. Through regression modeling, we can interpret which locations may also have a correlation to crime versus income, race, type of crime, and gender. The story will help to uncover whether the areas associated with crime are due to demographic or income variance. In showing the details of crimes in Los Angeles along with the factors at play, we hope to see a compelling relationship between crime rates and recent events from 2020 to the present, along with changes in crime type trends during these periods. We use Excel to clean the data so that SAP SAC can model it effectively, as well as resources from other studies for comparison.
2112.05941
Xinyi Zhang
Xinyi Zhang, Yukiyasu Domae, Weiwei Wan and Kensuke Harada
Learning Efficient Policies for Picking Entangled Wire Harnesses: An Approach to Industrial Bin Picking
8 pages, IEEE RA-L
null
10.1109/LRA.2022.3222995
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wire harnesses are essential connecting components in the manufacturing industry but are challenging to automate in industrial tasks such as bin picking. They are long, flexible and tend to get entangled when randomly placed in a bin. This makes it difficult for the robot to grasp a single one in dense clutter. Besides, training or collecting data in simulation is challenging due to the difficulties in modeling the combination of deformable and rigid components for wire harnesses. In this work, instead of directly lifting wire harnesses, we propose to grasp and extract the target following a circle-like trajectory until it is untangled. We learn a policy from real-world data that can infer grasps and separation actions from visual observation. Our policy enables the robot to efficiently pick and separate entangled wire harnesses by maximizing success rates and reducing execution time. To evaluate our policy, we present a set of real-world experiments on picking wire harnesses. Our policy achieves an overall 84.6% success rate, compared with 49.2% for the baseline. We also evaluate the effectiveness of our policy under different clutter scenarios using unseen types of wire harnesses. Results suggest that our approach is feasible for handling wire harnesses in industrial bin picking.
[ { "created": "Sat, 11 Dec 2021 10:01:39 GMT", "version": "v1" }, { "created": "Sun, 10 Jul 2022 11:38:49 GMT", "version": "v2" }, { "created": "Mon, 21 Nov 2022 07:13:01 GMT", "version": "v3" }, { "created": "Sat, 7 Jan 2023 05:54:15 GMT", "version": "v4" } ]
2023-01-10
[ [ "Zhang", "Xinyi", "" ], [ "Domae", "Yukiyasu", "" ], [ "Wan", "Weiwei", "" ], [ "Harada", "Kensuke", "" ] ]
Wire harnesses are essential connecting components in the manufacturing industry but are challenging to automate in industrial tasks such as bin picking. They are long, flexible and tend to get entangled when randomly placed in a bin. This makes it difficult for the robot to grasp a single one in dense clutter. Besides, training or collecting data in simulation is challenging due to the difficulties in modeling the combination of deformable and rigid components for wire harnesses. In this work, instead of directly lifting wire harnesses, we propose to grasp and extract the target following a circle-like trajectory until it is untangled. We learn a policy from real-world data that can infer grasps and separation actions from visual observation. Our policy enables the robot to efficiently pick and separate entangled wire harnesses by maximizing success rates and reducing execution time. To evaluate our policy, we present a set of real-world experiments on picking wire harnesses. Our policy achieves an overall 84.6% success rate, compared with 49.2% for the baseline. We also evaluate the effectiveness of our policy under different clutter scenarios using unseen types of wire harnesses. Results suggest that our approach is feasible for handling wire harnesses in industrial bin picking.
1812.00769
Aditya Gangrade
Aditya Gangrade, Praveen Venkatesh, Bobak Nazer and Venkatesh Saligrama
Testing Changes in Communities for the Stochastic Block Model
Version 3 includes material on unbalanced but linearly sized communities. This version is to appear in NeurIPS 2019
null
null
null
cs.IT cs.LG cs.SI math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and analyze the problems of \textit{community goodness-of-fit and two-sample testing} for stochastic block models (SBM), where changes arise due to modification in community memberships of nodes. Motivated by practical applications, we consider the challenging sparse regime, where expected node degrees are constant, and the inter-community mean degree ($b$) scales proportionally to intra-community mean degree ($a$). Prior work has sharply characterized partial or full community recovery in terms of a "signal-to-noise ratio" ($\mathrm{SNR}$) based on $a$ and $b$. For both problems, we propose computationally-efficient tests that can succeed far beyond the regime where recovery of community membership is even possible. Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR}= O(1)$ whereas a na\"ive test based on community recovery with $O(s)$ errors requires $\mathrm{SNR}= \Theta(\log n)$. Conversely, in the small change regime, $s \ll \sqrt{n}$, via an information-theoretic lower bound, we show that, surprisingly, no algorithm can do better than the na\"ive algorithm that first estimates the community up to $O(s)$ errors and then detects changes. We validate these phenomena numerically on SBMs and on real-world datasets as well as Markov Random Fields where we only observe node data rather than the existence of links.
[ { "created": "Thu, 29 Nov 2018 20:09:21 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2019 05:12:21 GMT", "version": "v2" }, { "created": "Thu, 31 Oct 2019 03:20:52 GMT", "version": "v3" } ]
2019-11-01
[ [ "Gangrade", "Aditya", "" ], [ "Venkatesh", "Praveen", "" ], [ "Nazer", "Bobak", "" ], [ "Saligrama", "Venkatesh", "" ] ]
We propose and analyze the problems of \textit{community goodness-of-fit and two-sample testing} for stochastic block models (SBM), where changes arise due to modification in community memberships of nodes. Motivated by practical applications, we consider the challenging sparse regime, where expected node degrees are constant, and the inter-community mean degree ($b$) scales proportionally to intra-community mean degree ($a$). Prior work has sharply characterized partial or full community recovery in terms of a "signal-to-noise ratio" ($\mathrm{SNR}$) based on $a$ and $b$. For both problems, we propose computationally-efficient tests that can succeed far beyond the regime where recovery of community membership is even possible. Overall, for large changes, $s \gg \sqrt{n}$, we need only $\mathrm{SNR}= O(1)$ whereas a na\"ive test based on community recovery with $O(s)$ errors requires $\mathrm{SNR}= \Theta(\log n)$. Conversely, in the small change regime, $s \ll \sqrt{n}$, via an information-theoretic lower bound, we show that, surprisingly, no algorithm can do better than the na\"ive algorithm that first estimates the community up to $O(s)$ errors and then detects changes. We validate these phenomena numerically on SBMs and on real-world datasets as well as Markov Random Fields where we only observe node data rather than the existence of links.
2107.03688
Longyu Ma
Longyu Ma, Chiu-Wing Sham, Chun Yan Lo, and Xinchao Zhong
An Embedded Iris Recognition System Optimization using Dynamically ReconfigurableDecoder with LDPC Codes
8 pages, 6 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Extracting and analyzing iris textures for biometric recognition has been extensively studied. As iris recognition transitions from a lab technology to nation-scale applications, most systems face high complexity in either time or space, making them unfit for embedded devices. In this paper, the proposed design includes a minimal set of computer vision modules and a multi-mode QC-LDPC decoder that can alleviate variability and noise caused by iris acquisition and follow-up processing. Several classes of QC-LDPC codes from IEEE 802.16 are tested for the validity of accuracy improvement. Some of the codes mentioned above are used for further QC-LDPC decoder quantization, validation and comparison to each other. We show that we can apply Dynamic Partial Reconfiguration technology to implement the multi-mode QC-LDPC decoder for the iris recognition system. The results show that the implementation is power-efficient and well suited to edge applications.
[ { "created": "Thu, 8 Jul 2021 09:04:11 GMT", "version": "v1" } ]
2021-07-09
[ [ "Ma", "Longyu", "" ], [ "Sham", "Chiu-Wing", "" ], [ "Lo", "Chun Yan", "" ], [ "Zhong", "Xinchao", "" ] ]
Extracting and analyzing iris textures for biometric recognition has been extensively studied. As iris recognition transitions from a lab technology to nation-scale applications, most systems face high complexity in either time or space, making them unfit for embedded devices. In this paper, the proposed design includes a minimal set of computer vision modules and a multi-mode QC-LDPC decoder that can alleviate variability and noise caused by iris acquisition and follow-up processing. Several classes of QC-LDPC codes from IEEE 802.16 are tested for the validity of accuracy improvement. Some of the codes mentioned above are used for further QC-LDPC decoder quantization, validation and comparison to each other. We show that we can apply Dynamic Partial Reconfiguration technology to implement the multi-mode QC-LDPC decoder for the iris recognition system. The results show that the implementation is power-efficient and well suited to edge applications.
1202.5012
Matthew Patitz
Jennifer E. Padilla and Matthew J. Patitz and Raul Pena and Robert T. Schweller and Nadrian C. Seeman and Robert Sheline and Scott M. Summers and Xingsi Zhong
Asynchronous Signal Passing for Tile Self-Assembly: Fuel Efficient Computation and Efficient Assembly of Shapes
This version contains the appendices omitted from the version appearing in the UCNC 2013 proceedings
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we demonstrate the power of a model of tile self-assembly based on active glues which can dynamically change state. We formulate the Signal-passing Tile Assembly Model (STAM), based on the model of Padilla, Liu, and Seeman to be asynchronous, allowing any action of turning a glue on or off, attaching a new tile, or breaking apart an assembly to happen in any order. Within this highly generalized model we provide three new solutions to tile self-assembly problems that have been addressed within the abstract Tile Assembly Model and its variants, showing that signal passing tiles allow for substantial improvement across multiple complexity metrics. Our first result utilizes a recursive assembly process to achieve tile-type efficient assembly of linear structures, using provably fewer tile types than what is possible in standard tile assembly models. Our second system of signal-passing tiles simulates any Turing machine with high fuel efficiency by using only a constant number of tiles per computation step. Our third system assembles the discrete Sierpinski triangle, demonstrating that this pattern can be strictly self-assembled within the STAM. This result is of particular interest in that it is known that this pattern cannot self-assemble within a number of well studied tile self-assembly models. Notably, all of our constructions are at temperature 1, further demonstrating that signal-passing confers the power to bypass many restrictions found in standard tile assembly models.
[ { "created": "Wed, 22 Feb 2012 19:16:38 GMT", "version": "v1" }, { "created": "Wed, 3 Oct 2012 06:18:58 GMT", "version": "v2" }, { "created": "Thu, 14 Nov 2013 01:15:06 GMT", "version": "v3" } ]
2015-03-20
[ [ "Padilla", "Jennifer E.", "" ], [ "Patitz", "Matthew J.", "" ], [ "Pena", "Raul", "" ], [ "Schweller", "Robert T.", "" ], [ "Seeman", "Nadrian C.", "" ], [ "Sheline", "Robert", "" ], [ "Summers", "Scott M.", "" ], [ "Zhong", "Xingsi", "" ] ]
In this paper we demonstrate the power of a model of tile self-assembly based on active glues which can dynamically change state. We formulate the Signal-passing Tile Assembly Model (STAM), based on the model of Padilla, Liu, and Seeman to be asynchronous, allowing any action of turning a glue on or off, attaching a new tile, or breaking apart an assembly to happen in any order. Within this highly generalized model we provide three new solutions to tile self-assembly problems that have been addressed within the abstract Tile Assembly Model and its variants, showing that signal passing tiles allow for substantial improvement across multiple complexity metrics. Our first result utilizes a recursive assembly process to achieve tile-type efficient assembly of linear structures, using provably fewer tile types than what is possible in standard tile assembly models. Our second system of signal-passing tiles simulates any Turing machine with high fuel efficiency by using only a constant number of tiles per computation step. Our third system assembles the discrete Sierpinski triangle, demonstrating that this pattern can be strictly self-assembled within the STAM. This result is of particular interest in that it is known that this pattern cannot self-assemble within a number of well studied tile self-assembly models. Notably, all of our constructions are at temperature 1, further demonstrating that signal-passing confers the power to bypass many restrictions found in standard tile assembly models.
2301.10540
David W. Romero
David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke
Modelling Long Range Dependencies in $N$D: From Task-Specific to a General Purpose CNN
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data. In this work, we tackle the need for problem-specific CNN architectures. We present the Continuous Convolutional Neural Network (CCNN): a single CNN able to process data of arbitrary resolution, dimensionality and length without any structural changes. Its key components are its continuous convolutional kernels, which model long-range dependencies at every layer, and thus remove the need of current CNN architectures for task-dependent downsampling and depths. We showcase the generality of our method by using the same architecture for tasks on sequential ($1{\rm D}$), visual ($2{\rm D}$) and point-cloud ($3{\rm D}$) data. Our CCNN matches and often outperforms the current state-of-the-art across all tasks considered.
[ { "created": "Wed, 25 Jan 2023 12:12:47 GMT", "version": "v1" }, { "created": "Sun, 16 Apr 2023 08:55:36 GMT", "version": "v2" } ]
2023-04-18
[ [ "Knigge", "David M.", "" ], [ "Romero", "David W.", "" ], [ "Gu", "Albert", "" ], [ "Gavves", "Efstratios", "" ], [ "Bekkers", "Erik J.", "" ], [ "Tomczak", "Jakub M.", "" ], [ "Hoogendoorn", "Mark", "" ], [ "Sonke", "Jan-Jakob", "" ] ]
Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data. In this work, we tackle the need for problem-specific CNN architectures. We present the Continuous Convolutional Neural Network (CCNN): a single CNN able to process data of arbitrary resolution, dimensionality and length without any structural changes. Its key components are its continuous convolutional kernels, which model long-range dependencies at every layer, and thus remove the need of current CNN architectures for task-dependent downsampling and depths. We showcase the generality of our method by using the same architecture for tasks on sequential ($1{\rm D}$), visual ($2{\rm D}$) and point-cloud ($3{\rm D}$) data. Our CCNN matches and often outperforms the current state-of-the-art across all tasks considered.
2212.03404
Lola Burgue\~no
Meriem Ben Chaaben and Lola Burgue\~no and Houari Sahraoui
Towards using Few-Shot Prompt Learning for Automating Model Completion
null
null
null
null
cs.SE cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a simple yet novel approach to improve completion in domain modeling activities. Our approach exploits the power of large language models by using few-shot prompt learning without the need to train or fine-tune those models with large datasets that are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways during the modeling activities.
[ { "created": "Wed, 7 Dec 2022 02:11:26 GMT", "version": "v1" } ]
2022-12-08
[ [ "Chaaben", "Meriem Ben", "" ], [ "Burgueño", "Lola", "" ], [ "Sahraoui", "Houari", "" ] ]
We propose a simple yet novel approach to improve completion in domain modeling activities. Our approach exploits the power of large language models by using few-shot prompt learning without the need to train or fine-tune those models with large datasets that are scarce in this field. We implemented our approach and tested it on the completion of static and dynamic domain diagrams. Our initial evaluation shows that such an approach is effective and can be integrated in different ways during the modeling activities.
2207.08391
Hiep Nguyen
Hiep Nguyen, Lam Phan, Harikrishna Warrier and Yogesh Gupta
Federated Learning for Non-IID Data via Client Variance Reduction and Adaptive Server Update
null
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Federated learning (FL) is an emerging technique used to collaboratively train a global machine learning model while keeping the data localized on the user devices. The main obstacle to FL's practical implementation is the Non-Independent and Identically Distributed (Non-IID) data across users, which slows convergence and degrades performance. To tackle this fundamental issue, we propose a method (ComFed) that enhances the whole training process on both the client and server sides. The key idea of ComFed is to simultaneously utilize client-variance reduction techniques to facilitate server aggregation and global adaptive update techniques to accelerate learning. Our experiments on the Cifar-10 classification task show that ComFed can improve state-of-the-art algorithms dedicated to Non-IID data.
[ { "created": "Mon, 18 Jul 2022 05:58:19 GMT", "version": "v1" }, { "created": "Fri, 29 Jul 2022 10:28:52 GMT", "version": "v2" } ]
2022-08-01
[ [ "Nguyen", "Hiep", "" ], [ "Phan", "Lam", "" ], [ "Warrier", "Harikrishna", "" ], [ "Gupta", "Yogesh", "" ] ]
Federated learning (FL) is an emerging technique used to collaboratively train a global machine learning model while keeping the data localized on the user devices. The main obstacle to FL's practical implementation is the Non-Independent and Identically Distributed (Non-IID) data across users, which slows convergence and degrades performance. To tackle this fundamental issue, we propose a method (ComFed) that enhances the whole training process on both the client and server sides. The key idea of ComFed is to simultaneously utilize client-variance reduction techniques to facilitate server aggregation and global adaptive update techniques to accelerate learning. Our experiments on the Cifar-10 classification task show that ComFed can improve state-of-the-art algorithms dedicated to Non-IID data.
1706.02061
Nir Levine
Nir Levine, Haggai Roitman, and Doron Cohen
An Extended Relevance Model for Session Search
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The session search task aims at best serving the user's information need given her previous search behavior during the session. We propose an extended relevance model that captures the user's dynamic information need in the session. Our relevance modelling approach is directly driven by the user's query reformulation (change) decisions and the estimate of how much the user's search behavior affects such decisions. Overall, we demonstrate that the proposed approach significantly boosts session search performance.
[ { "created": "Wed, 7 Jun 2017 06:57:25 GMT", "version": "v1" } ]
2017-06-08
[ [ "Levine", "Nir", "" ], [ "Roitman", "Haggai", "" ], [ "Cohen", "Doron", "" ] ]
The session search task aims at best serving the user's information need given her previous search behavior during the session. We propose an extended relevance model that captures the user's dynamic information need in the session. Our relevance modelling approach is directly driven by the user's query reformulation (change) decisions and the estimate of how much the user's search behavior affects such decisions. Overall, we demonstrate that the proposed approach significantly boosts session search performance.
1310.2665
Emilio Ferrara
Emilio Ferrara, Mohsen JafariAsbagh, Onur Varol, Vahed Qazvinian, Filippo Menczer, Alessandro Flammini
Clustering Memes in Social Media
Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM'13), 2013
Advances in social networks analysis and mining (ASONAM), 2013 IEEE/ACM international conference on (pp. 548-555). IEEE
10.1145/2492517.2492530
null
cs.SI cs.CY physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing pervasiveness of social media creates new opportunities to study human social behavior, while challenging our capability to analyze their massive data streams. One of the emerging tasks is to distinguish between different kinds of activities, for example engineered misinformation campaigns versus spontaneous communication. Such detection problems require a formal definition of meme, or unit of information that can spread from person to person through the social network. Once a meme is identified, supervised learning methods can be applied to classify different types of communication. The appropriate granularity of a meme, however, is hardly captured from existing entities such as tags and keywords. Here we present a framework for the novel task of detecting memes by clustering messages from large streams of social data. We evaluate various similarity measures that leverage content, metadata, network features, and their combinations. We also explore the idea of pre-clustering on the basis of existing entities. A systematic evaluation is carried out using a manually curated dataset as ground truth. Our analysis shows that pre-clustering and a combination of heterogeneous features yield the best trade-off between number of clusters and their quality, demonstrating that a simple combination based on pairwise maximization of similarity is as effective as a non-trivial optimization of parameters. Our approach is fully automatic, unsupervised, and scalable for real-time detection of memes in streaming data.
[ { "created": "Thu, 10 Oct 2013 00:10:46 GMT", "version": "v1" } ]
2017-03-07
[ [ "Ferrara", "Emilio", "" ], [ "JafariAsbagh", "Mohsen", "" ], [ "Varol", "Onur", "" ], [ "Qazvinian", "Vahed", "" ], [ "Menczer", "Filippo", "" ], [ "Flammini", "Alessandro", "" ] ]
The increasing pervasiveness of social media creates new opportunities to study human social behavior, while challenging our capability to analyze their massive data streams. One of the emerging tasks is to distinguish between different kinds of activities, for example engineered misinformation campaigns versus spontaneous communication. Such detection problems require a formal definition of meme, or unit of information that can spread from person to person through the social network. Once a meme is identified, supervised learning methods can be applied to classify different types of communication. The appropriate granularity of a meme, however, is hardly captured from existing entities such as tags and keywords. Here we present a framework for the novel task of detecting memes by clustering messages from large streams of social data. We evaluate various similarity measures that leverage content, metadata, network features, and their combinations. We also explore the idea of pre-clustering on the basis of existing entities. A systematic evaluation is carried out using a manually curated dataset as ground truth. Our analysis shows that pre-clustering and a combination of heterogeneous features yield the best trade-off between number of clusters and their quality, demonstrating that a simple combination based on pairwise maximization of similarity is as effective as a non-trivial optimization of parameters. Our approach is fully automatic, unsupervised, and scalable for real-time detection of memes in streaming data.
2111.06230
Vukosi Marivate
Mack Makgatho, Vukosi Marivate, Tshephisho Sefara, Valencia Wagner
Training Cross-Lingual embeddings for Setswana and Sepedi
Accepted (to appear) for the 2nd Workshop on Resources for African Indigenous Languages
Vol. 3 No. 03 (2021): Proceedings of the 2nd workshop on Resources for African Indigenous Language (RAIL) at DHASA 2021
10.55492/dhasa.v3i03.3822
null
cs.CL stat.AP
http://creativecommons.org/licenses/by/4.0/
African languages still lag behind in advances in Natural Language Processing techniques, one reason being the lack of representative data; a technique that can transfer information between languages can help mitigate the lack-of-data problem. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create cross-lingual embeddings for Setswana-Sepedi in order to do a cross-lingual transfer. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in n-dimensional space. The idea of word embeddings is based on the distributional hypothesis, which states that semantically similar words are distributed in similar contexts (Harris, 1954). Cross-lingual embeddings leverage monolingual embeddings by learning a shared vector space for two separately trained monolingual vector sets such that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual embeddings in VecMap to train the Setswana-Sepedi cross-language word embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representation using a semantic evaluation task. For the semantic similarity task, we translated the WordSim and SimLex tasks into Setswana and Sepedi. We release this dataset as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is an improvement in the semantic representation of the word embeddings.
[ { "created": "Thu, 11 Nov 2021 14:26:15 GMT", "version": "v1" } ]
2022-03-01
[ [ "Makgatho", "Mack", "" ], [ "Marivate", "Vukosi", "" ], [ "Sefara", "Tshephisho", "" ], [ "Wagner", "Valencia", "" ] ]
African languages still lag behind in advances in Natural Language Processing techniques, one reason being the lack of representative data; a technique that can transfer information between languages can help mitigate the lack-of-data problem. This paper trains Setswana and Sepedi monolingual word vectors and uses VecMap to create cross-lingual embeddings for Setswana-Sepedi in order to do a cross-lingual transfer. Word embeddings are word vectors that represent words as continuous floating-point numbers, where semantically similar words are mapped to nearby points in n-dimensional space. The idea of word embeddings is based on the distributional hypothesis, which states that semantically similar words are distributed in similar contexts (Harris, 1954). Cross-lingual embeddings leverage monolingual embeddings by learning a shared vector space for two separately trained monolingual vector sets such that words with similar meanings are represented by similar vectors. In this paper, we investigate cross-lingual embeddings for Setswana-Sepedi monolingual word vectors. We use the unsupervised cross-lingual embeddings in VecMap to train the Setswana-Sepedi cross-language word embeddings. We evaluate the quality of the Setswana-Sepedi cross-lingual word representation using a semantic evaluation task. For the semantic similarity task, we translated the WordSim and SimLex tasks into Setswana and Sepedi. We release this dataset as part of this work for other researchers. We evaluate the intrinsic quality of the embeddings to determine whether there is an improvement in the semantic representation of the word embeddings.
2109.11821
Ming Liu
Ming Liu, Zhi Xue, Xiangjian He, and Jinjun Chen
SCADS: A Scalable Approach Using Spark in Cloud for Host-based Intrusion Detection System with System Calls
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the current big data trend, the scale of real-time system call traces generated by Linux applications in a contemporary data center may increase excessively. Due to their lack of scalability, it is challenging for traditional host-based intrusion detection systems deployed on every single host to collect, maintain, and manipulate those large-scale accumulated system call traces. It is inflexible to build data mining models on one physical host that has static computing capability and limited storage capacity. To address this issue, we propose SCADS, a corresponding solution using Apache Spark in the Google cloud environment. A set of Spark algorithms are developed to achieve computational scalability. The experiment results demonstrate that the efficiency of intrusion detection can be enhanced, which indicates that the proposed method can be applied to the design of next-generation host-based intrusion detection systems with system calls.
[ { "created": "Fri, 24 Sep 2021 09:10:21 GMT", "version": "v1" }, { "created": "Fri, 7 Jan 2022 05:23:14 GMT", "version": "v2" } ]
2022-01-10
[ [ "Liu", "Ming", "" ], [ "Xue", "Zhi", "" ], [ "He", "Xiangjian", "" ], [ "Chen", "Jinjun", "" ] ]
Following the current big data trend, the scale of real-time system call traces generated by Linux applications in a contemporary data center may grow excessively. Due to their lack of scalability, it is challenging for traditional host-based intrusion detection systems, deployed on every single host, to collect, maintain, and manipulate those large-scale accumulated system call traces. It is inflexible to build data mining models on one physical host with static computing capability and limited storage capacity. To address this issue, we propose SCADS, a solution using Apache Spark in the Google cloud environment. A set of Spark algorithms is developed to achieve computational scalability. The experimental results demonstrate that the efficiency of intrusion detection can be enhanced, indicating that the proposed method can be applied to the design of next-generation host-based intrusion detection systems with system calls.
2407.03131
Yanjie Cui
Yanjie Cui, Xiaohong Liu, Jing Liang, Yamin Fu
MVGT: A Multi-view Graph Transformer Based on Spatial Relations for EEG Emotion Recognition
null
null
null
null
cs.NE cs.AI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroencephalography (EEG), a medical imaging technique that captures the scalp electrical activity of brain structures via electrodes, has been widely used in affective computing. The spatial domain of EEG is rich in affective information. However, few existing studies have simultaneously analyzed EEG signals from the multiple perspectives of geometric and anatomical structures in the spatial domain. In this paper, we propose a multi-view Graph Transformer (MVGT) based on spatial relations, which integrates information from the temporal, frequency, and spatial domains, including geometric and anatomical structures, so as to comprehensively enhance the expressive power of the model. We incorporate the spatial information of EEG channels into the model as an encoding, thereby improving its ability to perceive the spatial structure of the channels. Meanwhile, experimental results on publicly available datasets demonstrate that our proposed model outperforms state-of-the-art methods of recent years. In addition, the results also show that MVGT can effectively extract information from multiple domains and capture inter-channel relationships in EEG emotion recognition tasks.
[ { "created": "Wed, 3 Jul 2024 14:13:00 GMT", "version": "v1" }, { "created": "Mon, 8 Jul 2024 13:11:53 GMT", "version": "v2" }, { "created": "Tue, 6 Aug 2024 09:21:47 GMT", "version": "v3" } ]
2024-08-07
[ [ "Cui", "Yanjie", "" ], [ "Liu", "Xiaohong", "" ], [ "Liang", "Jing", "" ], [ "Fu", "Yamin", "" ] ]
Electroencephalography (EEG), a medical imaging technique that captures the scalp electrical activity of brain structures via electrodes, has been widely used in affective computing. The spatial domain of EEG is rich in affective information. However, few existing studies have simultaneously analyzed EEG signals from the multiple perspectives of geometric and anatomical structures in the spatial domain. In this paper, we propose a multi-view Graph Transformer (MVGT) based on spatial relations, which integrates information from the temporal, frequency, and spatial domains, including geometric and anatomical structures, so as to comprehensively enhance the expressive power of the model. We incorporate the spatial information of EEG channels into the model as an encoding, thereby improving its ability to perceive the spatial structure of the channels. Meanwhile, experimental results on publicly available datasets demonstrate that our proposed model outperforms state-of-the-art methods of recent years. In addition, the results also show that MVGT can effectively extract information from multiple domains and capture inter-channel relationships in EEG emotion recognition tasks.
1601.00082
Geraldo A. Barbosa
Geraldo A. Barbosa
A wireless physically secure key distribution system
6 pages,10 figures, 1 table
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A secure key distribution protocol protected by light's noise was introduced in 2003 [Phys. Rev. A 68, 052307 (2003)]. That protocol utilized the shot noise of light present in the optical channel (e.g., an optical fiber) to restrict information leaks to an adversary. Initial shared information between the legitimate users allowed them to extract more information from the channel than the adversary could obtain. That original paper recognized the need for a privacy amplification step, but no specific protocol was presented. More recently, that original idea was improved with a specific privacy amplification protocol [arXiv:1406.1543v2 [cs.CR] 8 Jul 2015] while keeping the use of an optical communication channel. This work merges the main ideas of the protection given by light's noise in a protocol applied to wireless channels. The use of wireless channels together with recorded physical noise was introduced from 2005 to 2007 (see, e.g., arXiv:quant-ph/0510011 v2 16 Nov 2005 and arXiv:0705.2243v2 [quant-ph] 17 May 2007). This work improves those embryonic ideas of wireless channels secured by recorded optical noise. The need for specific optical channels is eliminated with the wireless variant, which opens up the possibility of applying the technique to mobile devices. This work introduces this new scheme and calculates the associated security level.
[ { "created": "Fri, 1 Jan 2016 14:55:47 GMT", "version": "v1" }, { "created": "Mon, 25 Jul 2016 20:06:45 GMT", "version": "v2" } ]
2016-07-27
[ [ "Barbosa", "Geraldo A.", "" ] ]
A secure key distribution protocol protected by light's noise was introduced in 2003 [Phys. Rev. A 68, 052307 (2003)]. That protocol utilized the shot noise of light present in the optical channel (e.g., an optical fiber) to restrict information leaks to an adversary. Initial shared information between the legitimate users allowed them to extract more information from the channel than the adversary could obtain. That original paper recognized the need for a privacy amplification step, but no specific protocol was presented. More recently, that original idea was improved with a specific privacy amplification protocol [arXiv:1406.1543v2 [cs.CR] 8 Jul 2015] while keeping the use of an optical communication channel. This work merges the main ideas of the protection given by light's noise in a protocol applied to wireless channels. The use of wireless channels together with recorded physical noise was introduced from 2005 to 2007 (see, e.g., arXiv:quant-ph/0510011 v2 16 Nov 2005 and arXiv:0705.2243v2 [quant-ph] 17 May 2007). This work improves those embryonic ideas of wireless channels secured by recorded optical noise. The need for specific optical channels is eliminated with the wireless variant, which opens up the possibility of applying the technique to mobile devices. This work introduces this new scheme and calculates the associated security level.
2401.08123
Xinni Jiang
Xinni Jiang, Zengsheng Kuang, Chunle Guo, Ruixun Zhang, Lei Cai, Xiao Fan, Chongyi Li
The Devil is in the Details: Boosting Guided Depth Super-Resolution via Rethinking Cross-Modal Alignment and Aggregation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Guided depth super-resolution (GDSR) involves restoring missing depth details using the high-resolution RGB image of the same scene. Previous approaches have struggled with the heterogeneity and complementarity of the multi-modal inputs, and have neglected the issues of modal misalignment, geometrical misalignment, and feature selection. In this study, we rethink some essential components in GDSR networks and propose a simple yet effective Dynamic Dual Alignment and Aggregation network (D2A2). D2A2 mainly consists of 1) a dynamic dual alignment module that adaptively alleviates modal misalignment via a learnable domain alignment block and geometrically aligns cross-modal features by learning the offset; and 2) a mask-to-pixel feature aggregation module that uses a gated mechanism and pixel attention to filter out irrelevant texture noise from RGB features and combine the useful features with depth features. By combining the strengths of RGB and depth features while minimizing the disturbance introduced by the RGB image, our method, with simple reuse and redesign of basic components, achieves state-of-the-art performance on multiple benchmark datasets. The code is available at https://github.com/JiangXinni/D2A2.
[ { "created": "Tue, 16 Jan 2024 05:37:08 GMT", "version": "v1" } ]
2024-01-17
[ [ "Jiang", "Xinni", "" ], [ "Kuang", "Zengsheng", "" ], [ "Guo", "Chunle", "" ], [ "Zhang", "Ruixun", "" ], [ "Cai", "Lei", "" ], [ "Fan", "Xiao", "" ], [ "Li", "Chongyi", "" ] ]
Guided depth super-resolution (GDSR) involves restoring missing depth details using the high-resolution RGB image of the same scene. Previous approaches have struggled with the heterogeneity and complementarity of the multi-modal inputs, and have neglected the issues of modal misalignment, geometrical misalignment, and feature selection. In this study, we rethink some essential components in GDSR networks and propose a simple yet effective Dynamic Dual Alignment and Aggregation network (D2A2). D2A2 mainly consists of 1) a dynamic dual alignment module that adaptively alleviates modal misalignment via a learnable domain alignment block and geometrically aligns cross-modal features by learning the offset; and 2) a mask-to-pixel feature aggregation module that uses a gated mechanism and pixel attention to filter out irrelevant texture noise from RGB features and combine the useful features with depth features. By combining the strengths of RGB and depth features while minimizing the disturbance introduced by the RGB image, our method, with simple reuse and redesign of basic components, achieves state-of-the-art performance on multiple benchmark datasets. The code is available at https://github.com/JiangXinni/D2A2.
2403.12818
Hugo Y\`eche
Hugo Y\`eche, Manuel Burger, Dinara Veshchezerova, Gunnar R\"atsch
Dynamic Survival Analysis for Early Event Prediction
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study advances Early Event Prediction (EEP) in healthcare through Dynamic Survival Analysis (DSA), offering a novel approach by integrating risk localization into alarm policies to enhance clinical event metrics. By adapting and evaluating DSA models against traditional EEP benchmarks, our research demonstrates their ability to match EEP models on a time-step level and significantly improve event-level metrics through a new alarm prioritization scheme (up to 11% AuPRC difference). This approach represents a significant step forward in predictive healthcare, providing a more nuanced and actionable framework for early event prediction and management.
[ { "created": "Tue, 19 Mar 2024 15:17:23 GMT", "version": "v1" } ]
2024-03-20
[ [ "Yèche", "Hugo", "" ], [ "Burger", "Manuel", "" ], [ "Veshchezerova", "Dinara", "" ], [ "Rätsch", "Gunnar", "" ] ]
This study advances Early Event Prediction (EEP) in healthcare through Dynamic Survival Analysis (DSA), offering a novel approach by integrating risk localization into alarm policies to enhance clinical event metrics. By adapting and evaluating DSA models against traditional EEP benchmarks, our research demonstrates their ability to match EEP models on a time-step level and significantly improve event-level metrics through a new alarm prioritization scheme (up to 11% AuPRC difference). This approach represents a significant step forward in predictive healthcare, providing a more nuanced and actionable framework for early event prediction and management.
2101.02415
Ying Sheng
Yichao Zhou, Ying Sheng, Nguyen Vo, Nick Edmonds, Sandeep Tata
Simplified DOM Trees for Transferable Attribute Extraction from the Web
10 pages, 9 figures
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
There has been a steady need to precisely extract structured knowledge from the web (i.e., HTML documents). Given a web page, extracting a structured object along with various attributes of interest (e.g., price, publisher, author, and genre for a book) can facilitate a variety of downstream applications such as large-scale knowledge base construction, e-commerce product search, and personalized recommendation. Considering that each web page is rendered from an HTML DOM tree, existing approaches formulate the problem as a DOM tree node tagging task. However, they either rely on computationally expensive visual feature engineering or are incapable of modeling the relationship among the tree nodes. In this paper, we propose a novel transferable method, Simplified DOM Trees for Attribute Extraction (SimpDOM), to tackle the problem by efficiently retrieving useful context for each node by leveraging the tree structure. We study two challenging experimental settings: (i) intra-vertical few-shot extraction, and (ii) cross-vertical few-shot extraction with out-of-domain knowledge, to evaluate our approach. Extensive experiments on the SWDE public dataset show that SimpDOM outperforms the state-of-the-art (SOTA) method by 1.44% on the F1 score. We also find that utilizing knowledge from a different vertical (cross-vertical extraction) is surprisingly useful and helps beat the SOTA by a further 1.37%.
[ { "created": "Thu, 7 Jan 2021 07:41:55 GMT", "version": "v1" } ]
2021-01-08
[ [ "Zhou", "Yichao", "" ], [ "Sheng", "Ying", "" ], [ "Vo", "Nguyen", "" ], [ "Edmonds", "Nick", "" ], [ "Tata", "Sandeep", "" ] ]
There has been a steady need to precisely extract structured knowledge from the web (i.e., HTML documents). Given a web page, extracting a structured object along with various attributes of interest (e.g., price, publisher, author, and genre for a book) can facilitate a variety of downstream applications such as large-scale knowledge base construction, e-commerce product search, and personalized recommendation. Considering that each web page is rendered from an HTML DOM tree, existing approaches formulate the problem as a DOM tree node tagging task. However, they either rely on computationally expensive visual feature engineering or are incapable of modeling the relationship among the tree nodes. In this paper, we propose a novel transferable method, Simplified DOM Trees for Attribute Extraction (SimpDOM), to tackle the problem by efficiently retrieving useful context for each node by leveraging the tree structure. We study two challenging experimental settings: (i) intra-vertical few-shot extraction, and (ii) cross-vertical few-shot extraction with out-of-domain knowledge, to evaluate our approach. Extensive experiments on the SWDE public dataset show that SimpDOM outperforms the state-of-the-art (SOTA) method by 1.44% on the F1 score. We also find that utilizing knowledge from a different vertical (cross-vertical extraction) is surprisingly useful and helps beat the SOTA by a further 1.37%.
2402.12144
Shay Sapir
Asaf Petruschka, Shay Sapir and Elad Tzalik
Connectivity Labeling in Faulty Colored Graphs
shortened abstract for arxiv
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Fault-tolerant connectivity labelings are schemes that, given an $n$-vertex graph $G=(V,E)$ and $f\geq 1$, produce succinct yet informative labels for the elements of the graph. Given only the labels of two vertices $u,v$ and of the elements in a faulty-set $F$ with $|F|\leq f$, one can determine if $u,v$ are connected in $G-F$, the surviving graph after removing $F$. For the edge or vertex faults models, i.e., $F\subseteq E$ or $F\subseteq V$, a sequence of recent work established schemes with $poly(f,\log n)$-bit labels. This paper considers the color faults model, recently introduced in the context of spanners [Petruschka, Sapir and Tzalik, ITCS'24], which accounts for known correlations between failures. Here, the edges (or vertices) of the input $G$ are arbitrarily colored, and the faulty elements in $F$ are colors; a failing color causes all edges (vertices) of that color to crash. Our main contribution is settling the label length complexity for connectivity under one color fault ($f=1$). The existing implicit solution, by applying the state-of-the-art scheme for edge faults of [Dory and Parter, PODC'21], might yield labels of $\Omega(n)$ bits. We provide a deterministic scheme with labels of $\tilde{O}(\sqrt{n})$ bits in the worst case, and a matching lower bound. Moreover, our scheme is universally optimal: even schemes tailored to handle only colorings of one specific graph topology cannot produce asymptotically smaller labels. We extend our labeling approach to yield a routing scheme avoiding a single forbidden color. We also consider the centralized setting, and show an $\tilde{O}(n)$-space oracle, answering connectivity queries under one color fault in $\tilde{O}(1)$ time. Turning to $f\geq 2$ color faults, we give a randomized labeling scheme with $\tilde{O}(n^{1-1/2^f})$-bit labels, along with a lower bound of $\Omega(n^{1-1/(f+1)})$ bits.
[ { "created": "Mon, 19 Feb 2024 13:53:13 GMT", "version": "v1" } ]
2024-02-20
[ [ "Petruschka", "Asaf", "" ], [ "Sapir", "Shay", "" ], [ "Tzalik", "Elad", "" ] ]
Fault-tolerant connectivity labelings are schemes that, given an $n$-vertex graph $G=(V,E)$ and $f\geq 1$, produce succinct yet informative labels for the elements of the graph. Given only the labels of two vertices $u,v$ and of the elements in a faulty-set $F$ with $|F|\leq f$, one can determine if $u,v$ are connected in $G-F$, the surviving graph after removing $F$. For the edge or vertex faults models, i.e., $F\subseteq E$ or $F\subseteq V$, a sequence of recent work established schemes with $poly(f,\log n)$-bit labels. This paper considers the color faults model, recently introduced in the context of spanners [Petruschka, Sapir and Tzalik, ITCS'24], which accounts for known correlations between failures. Here, the edges (or vertices) of the input $G$ are arbitrarily colored, and the faulty elements in $F$ are colors; a failing color causes all edges (vertices) of that color to crash. Our main contribution is settling the label length complexity for connectivity under one color fault ($f=1$). The existing implicit solution, by applying the state-of-the-art scheme for edge faults of [Dory and Parter, PODC'21], might yield labels of $\Omega(n)$ bits. We provide a deterministic scheme with labels of $\tilde{O}(\sqrt{n})$ bits in the worst case, and a matching lower bound. Moreover, our scheme is universally optimal: even schemes tailored to handle only colorings of one specific graph topology cannot produce asymptotically smaller labels. We extend our labeling approach to yield a routing scheme avoiding a single forbidden color. We also consider the centralized setting, and show an $\tilde{O}(n)$-space oracle, answering connectivity queries under one color fault in $\tilde{O}(1)$ time. Turning to $f\geq 2$ color faults, we give a randomized labeling scheme with $\tilde{O}(n^{1-1/2^f})$-bit labels, along with a lower bound of $\Omega(n^{1-1/(f+1)})$ bits.