Dataset schema (column name, type, minimum / maximum string or list length):

  id               stringlengths   9 / 10
  submitter        stringlengths   1 / 64
  authors          stringlengths   4 / 20.7k
  title            stringlengths   4 / 246
  comments         stringlengths   1 / 523
  journal-ref      stringlengths   4 / 404
  doi              stringlengths   11 / 153
  report-no        stringlengths   2 / 254
  categories       stringlengths   5 / 98
  license          stringclasses   9 values
  orig_abstract    stringlengths   14 / 3.35k
  versions         listlengths     1 / 60
  update_date      stringlengths   10 / 10
  authors_parsed   listlengths     1 / 1.35k
  abstract         stringlengths   11 / 3.34k
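The block above summarizes per-column length statistics of the kind a dataset viewer reports. As a rough, hypothetical illustration of how such statistics can be reproduced (the column names come from the schema, but the sample rows below are invented):

```python
import pandas as pd

# Invented sample rows mirroring a few of the metadata fields described above.
rows = [
    {"id": "2310.10395", "submitter": "Elizabeth Munch",
     "title": "An Invitation to the Euler Characteristic Transform"},
    {"id": "1601.04621", "submitter": "Benjamin Chamberlain",
     "title": "Probabilistic Inference of Twitter Users' Age"},
]
df = pd.DataFrame(rows)

# For each string column, report the min/max value length,
# analogous to the stringlengths entries in the schema.
stats = {col: (int(df[col].str.len().min()), int(df[col].str.len().max()))
         for col in df.columns}
print(stats["id"])  # -> (10, 10): both sample ids are 10 characters long
```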
2310.10395
Elizabeth Munch
Elizabeth Munch
An Invitation to the Euler Characteristic Transform
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Euler characteristic transform (ECT) is a simple-to-define yet powerful representation of shape. The idea is to encode an embedded shape using sublevel sets of a function defined by a given direction, and then to return the Euler characteristics of these sublevel sets. Because the ECT has been shown to be injective on the space of embedded simplicial complexes, it has been used for applications spanning a range of disciplines, including plant morphology and protein structural analysis. In this survey article, we present a comprehensive overview of the Euler characteristic transform, highlighting the main idea on a simple leaf example, and surveying its key concepts, theoretical foundations, and available applications.
[ { "created": "Mon, 16 Oct 2023 13:38:48 GMT", "version": "v1" } ]
2023-10-17
[ [ "Munch", "Elizabeth", "" ] ]
The Euler characteristic transform (ECT) is a simple-to-define yet powerful representation of shape. The idea is to encode an embedded shape using sublevel sets of a function defined by a given direction, and then to return the Euler characteristics of these sublevel sets. Because the ECT has been shown to be injective on the space of embedded simplicial complexes, it has been used for applications spanning a range of disciplines, including plant morphology and protein structural analysis. In this survey article, we present a comprehensive overview of the Euler characteristic transform, highlighting the main idea on a simple leaf example, and surveying its key concepts, theoretical foundations, and available applications.
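As a toy illustration of the construction this abstract describes (a sketch under assumed conventions, not code from the survey): for a simplicial complex embedded in the plane, the ECT in a direction $v$ records, at each threshold $t$, the Euler characteristic (vertices minus edges plus triangles) of the sublevel set of the height function $x \mapsto \langle x, v \rangle$, where a simplex is included once all of its vertices are.

```python
import numpy as np

# Toy simplicial complex: a single filled triangle in the plane.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (0, 2)]
tris = [(0, 1, 2)]

def ect(direction, t):
    """Euler characteristic of the sublevel set {x : <x, direction> <= t}.

    A simplex belongs to the sublevel set when all of its vertices do."""
    h = verts @ direction                         # vertex heights in this direction
    v = int(np.sum(h <= t))                       # vertices included
    e = sum(all(h[i] <= t for i in s) for s in edges)
    f = sum(all(h[i] <= t for i in s) for s in tris)
    return v - e + f

d = np.array([0.0, 1.0])                          # scan from bottom to top
print([ect(d, t) for t in (-0.5, 0.0, 0.5, 1.0)])  # -> [0, 1, 1, 1]
```

Scanning many directions and thresholds yields the full transform; injectivity means no two distinct embedded complexes produce the same collection of these curves.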
2012.13990
Mitsuo Yoshida
Kenshin Sekimoto, Yoshifumi Seki, Mitsuo Yoshida, Kyoji Umemura
The metrics of keywords to understand the difference between Retweet and Like in each category
The 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '20)
null
10.1109/WIIAT50758.2020.00084
null
cs.IR cs.DL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this study is to clarify what kind of news is easily retweeted and what kind is easily Liked. We believe these two actions, retweeting and Liking, have different meanings for users, and understanding this difference is important for understanding people's interests on Twitter. To analyze the difference between retweets (RTs) and Likes in detail, we focus on word appearances in news titles. First, we calculate basic statistics and confirm that tweets containing news URLs have different RT and Like tendencies than other tweets. Next, we compare RTs and Likes for each category and confirm that the tendencies differ across categories. We therefore propose metrics, based on the $\chi$-square test, for clarifying the differences between the two actions in each category, enabling an analysis focused on topic. The proposed metrics are more useful than simple counts or TF-IDF for extracting meaningful words that distinguish RTs from Likes. We analyze each category using the proposed metrics and quantitatively confirm that the difference between the roles of retweeting and Liking appears in the content, depending on the category. Moreover, by aggregating tweets chronologically, we show the trends of RTs and Likes as lists of words and clarify how each week's characteristic words relate to current events for retweeting and Liking.
[ { "created": "Sun, 27 Dec 2020 18:32:19 GMT", "version": "v1" } ]
2021-12-16
[ [ "Sekimoto", "Kenshin", "" ], [ "Seki", "Yoshifumi", "" ], [ "Yoshida", "Mitsuo", "" ], [ "Umemura", "Kyoji", "" ] ]
The purpose of this study is to clarify what kind of news is easily retweeted and what kind is easily Liked. We believe these two actions, retweeting and Liking, have different meanings for users, and understanding this difference is important for understanding people's interests on Twitter. To analyze the difference between retweets (RTs) and Likes in detail, we focus on word appearances in news titles. First, we calculate basic statistics and confirm that tweets containing news URLs have different RT and Like tendencies than other tweets. Next, we compare RTs and Likes for each category and confirm that the tendencies differ across categories. We therefore propose metrics, based on the $\chi$-square test, for clarifying the differences between the two actions in each category, enabling an analysis focused on topic. The proposed metrics are more useful than simple counts or TF-IDF for extracting meaningful words that distinguish RTs from Likes. We analyze each category using the proposed metrics and quantitatively confirm that the difference between the roles of retweeting and Liking appears in the content, depending on the category. Moreover, by aggregating tweets chronologically, we show the trends of RTs and Likes as lists of words and clarify how each week's characteristic words relate to current events for retweeting and Liking.
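The $\chi$-square machinery this abstract mentions can be illustrated generically: a plain test of independence on a word-by-action contingency table. This is not the paper's proposed metric, and the counts below are invented for the sketch.

```python
from scipy.stats import chi2_contingency

# Invented counts: how often tweets whose news title contains a given word
# were retweeted vs. Liked.
#                 RTs  Likes
table = [[120,  30],    # titles containing word A (e.g., breaking news)
         [ 60,  90]]    # titles containing word B (e.g., lifestyle)

chi2, p, dof, expected = chi2_contingency(table)
print(dof)       # 1 degree of freedom for a 2x2 table
print(p < 0.05)  # True here: word and action are associated in this toy table
```

A metric built on this statistic can then rank words by how strongly they separate retweeting from Liking within each news category.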
2203.08189
Elizabeth Coda
Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, WoongJo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula, Henry Kvinge
Fiber Bundle Morphisms as a Framework for Modeling Many-to-Many Maps
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While it is not generally reflected in the `nice' datasets used for benchmarking machine learning algorithms, the real world is full of processes that are best described as many-to-many. That is, a single input can potentially yield many different outputs (whether due to noise, imperfect measurement, or intrinsic stochasticity in the process), and many different inputs can yield the same output (that is, the map is not injective). For example, imagine a sentiment analysis task where, due to linguistic ambiguity, a single statement can have a range of different sentiment interpretations while at the same time many distinct statements can represent the same sentiment. When modeling such a multivalued function $f: X \rightarrow Y$, it is frequently useful to be able to model the distribution on $f(x)$ for a specific input $x$ as well as the distribution on the fiber $f^{-1}(y)$ for a specific output $y$. Such an analysis helps the user (i) better understand the variance intrinsic to the process they are studying and (ii) understand the range of inputs $x$ that can be used to achieve a given output $y$. Following existing work that used a fiber bundle framework to better model many-to-one processes, we describe how morphisms of fiber bundles provide a template for building models that naturally capture the structure of many-to-many processes.
[ { "created": "Tue, 15 Mar 2022 18:38:56 GMT", "version": "v1" }, { "created": "Fri, 29 Apr 2022 15:40:25 GMT", "version": "v2" } ]
2022-05-02
[ [ "Coda", "Elizabeth", "" ], [ "Courts", "Nico", "" ], [ "Wight", "Colby", "" ], [ "Truong", "Loc", "" ], [ "Choi", "WoongJo", "" ], [ "Godfrey", "Charles", "" ], [ "Emerson", "Tegan", "" ], [ "Kappagantula", "Keerti", "" ], [ "Kvinge", "Henry", "" ] ]
While it is not generally reflected in the `nice' datasets used for benchmarking machine learning algorithms, the real world is full of processes that are best described as many-to-many. That is, a single input can potentially yield many different outputs (whether due to noise, imperfect measurement, or intrinsic stochasticity in the process), and many different inputs can yield the same output (that is, the map is not injective). For example, imagine a sentiment analysis task where, due to linguistic ambiguity, a single statement can have a range of different sentiment interpretations while at the same time many distinct statements can represent the same sentiment. When modeling such a multivalued function $f: X \rightarrow Y$, it is frequently useful to be able to model the distribution on $f(x)$ for a specific input $x$ as well as the distribution on the fiber $f^{-1}(y)$ for a specific output $y$. Such an analysis helps the user (i) better understand the variance intrinsic to the process they are studying and (ii) understand the range of inputs $x$ that can be used to achieve a given output $y$. Following existing work that used a fiber bundle framework to better model many-to-one processes, we describe how morphisms of fiber bundles provide a template for building models that naturally capture the structure of many-to-many processes.
2102.03482
Bo Han
Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli and Masashi Sugiyama
Understanding the Interaction of Adversarial Training with Noisy Labels
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Noisy labels (NL) and adversarial examples both undermine trained models, but interestingly they have hitherto been studied independently. A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps needed to successfully attack a point (i.e., to find an adversarial example in its proximity) is an effective measure of the robustness of this point. Given that natural data are clean, this measure reveals an intrinsic geometric property -- how far a point is from its class boundary. Based on this breakthrough, in this paper we investigate how AT interacts with NL. Firstly, we find that if a point is too close to its noisy-class boundary (e.g., one step is enough to attack it), the point is likely to be mislabeled, which suggests adopting the number of PGD steps as a new criterion for sample selection when correcting NL. Secondly, we confirm that AT with strong smoothing effects suffers less from NL (without NL corrections) than standard training (ST) does, which suggests that AT itself acts as an NL correction. Hence, AT with NL is helpful for improving even the natural accuracy, which again illustrates the superiority of AT as a general-purpose robust learning criterion.
[ { "created": "Sat, 6 Feb 2021 02:45:03 GMT", "version": "v1" }, { "created": "Tue, 9 Feb 2021 06:12:49 GMT", "version": "v2" } ]
2021-02-10
[ [ "Zhu", "Jianing", "" ], [ "Zhang", "Jingfeng", "" ], [ "Han", "Bo", "" ], [ "Liu", "Tongliang", "" ], [ "Niu", "Gang", "" ], [ "Yang", "Hongxia", "" ], [ "Kankanhalli", "Mohan", "" ], [ "Sugiyama", "Masashi", "" ] ]
Noisy labels (NL) and adversarial examples both undermine trained models, but interestingly they have hitherto been studied independently. A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps needed to successfully attack a point (i.e., to find an adversarial example in its proximity) is an effective measure of the robustness of this point. Given that natural data are clean, this measure reveals an intrinsic geometric property -- how far a point is from its class boundary. Based on this breakthrough, in this paper we investigate how AT interacts with NL. Firstly, we find that if a point is too close to its noisy-class boundary (e.g., one step is enough to attack it), the point is likely to be mislabeled, which suggests adopting the number of PGD steps as a new criterion for sample selection when correcting NL. Secondly, we confirm that AT with strong smoothing effects suffers less from NL (without NL corrections) than standard training (ST) does, which suggests that AT itself acts as an NL correction. Hence, AT with NL is helpful for improving even the natural accuracy, which again illustrates the superiority of AT as a general-purpose robust learning criterion.
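A hypothetical sketch of the robustness measure this abstract refers to, counting how many PGD steps are needed to flip a prediction, for a toy logistic classifier. The model, data, and attack budget are all invented for illustration: a point near the decision boundary falls in one step, while a point deep inside a class cannot be attacked within the budget.

```python
import numpy as np

w = np.array([1.0, -1.0])                       # fixed linear decision boundary

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_steps_to_attack(x0, y, eps=1.0, alpha=0.1, max_steps=50):
    """Count PGD steps until the point is misclassified (max_steps if never)."""
    x = x0.copy()
    for step in range(1, max_steps + 1):
        grad = (sigmoid(w @ x) - y) * w         # dLoss/dx for logistic loss
        x = x + alpha * np.sign(grad)           # ascend the loss (signed step)
        x = x0 + np.clip(x - x0, -eps, eps)     # project back to the eps-ball
        if (w @ x > 0) != (y == 1):             # prediction flipped
            return step
    return max_steps

near = np.array([0.1, 0.0])   # close to the boundary w.x = 0
far = np.array([3.0, 0.0])    # deep inside the positive class
print(pgd_steps_to_attack(near, 1), pgd_steps_to_attack(far, 1))  # -> 1 50
```

In the paper's setting, a suspiciously small step count (like the `near` point's) is the signal that a label may be noisy.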
2406.13018
William Liem
William Liem, Andrew Berry, Kathryn Macapagal
Reclaiming Power over AI: Equipping Queer Teens as AI Designers for HIV Prevention
In CHI 2024: Designing (with) AI for Wellbeing
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
In this position paper, we explore the potential of generative AI (GenAI) tools in supporting HIV prevention initiatives among LGBTQ+ adolescents. GenAI offers opportunities to bridge information gaps and enhance healthcare access, yet it also risks exacerbating existing inequities through biased AI outputs reflecting heteronormative and cisnormative values. We advocate for the importance of queer adolescent-centered interventions, contend with the promise of GenAI tools while addressing concerns of bias, and position participatory frameworks for empowering queer youth in the design and development of AI tools. Viewing LGBTQ+ adolescents as designers, we propose a community-engaged approach to enable a group of queer teens with sexual health education expertise to design their own GenAI health tools. Through this collaborative effort, we put forward participatory ways to develop processes minimizing the potential iatrogenic harms of biased AI models, while harnessing AI benefits for LGBTQ+ teens. In this workshop, we offer specialized community-engaged knowledge in designing equitable AI tools to improve LGBTQ+ well-being.
[ { "created": "Tue, 18 Jun 2024 19:25:22 GMT", "version": "v1" } ]
2024-06-21
[ [ "Liem", "William", "" ], [ "Berry", "Andrew", "" ], [ "Macapagal", "Kathryn", "" ] ]
In this position paper, we explore the potential of generative AI (GenAI) tools in supporting HIV prevention initiatives among LGBTQ+ adolescents. GenAI offers opportunities to bridge information gaps and enhance healthcare access, yet it also risks exacerbating existing inequities through biased AI outputs reflecting heteronormative and cisnormative values. We advocate for the importance of queer adolescent-centered interventions, contend with the promise of GenAI tools while addressing concerns of bias, and position participatory frameworks for empowering queer youth in the design and development of AI tools. Viewing LGBTQ+ adolescents as designers, we propose a community-engaged approach to enable a group of queer teens with sexual health education expertise to design their own GenAI health tools. Through this collaborative effort, we put forward participatory ways to develop processes minimizing the potential iatrogenic harms of biased AI models, while harnessing AI benefits for LGBTQ+ teens. In this workshop, we offer specialized community-engaged knowledge in designing equitable AI tools to improve LGBTQ+ well-being.
2307.00729
Qilong Yuan
Sheng Zhao, Qilong Yuan, Yibo Duan and Zhuoyue Chen
An End-to-End Multi-Module Audio Deepfake Generation System for ADD Challenge 2023
null
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of synthetic speech generation is to generate spoken content from a given text and thereby simulate a fake human voice. The key factors that determine the quality of synthetic speech generation include the speed of generation, the accuracy of word segmentation, and the naturalness of the synthesized speech. This paper builds an end-to-end multi-module synthetic speech generation model comprising a speaker encoder, a synthesizer based on Tacotron2, and a vocoder based on WaveRNN. In addition, we perform extensive comparative experiments on different datasets and various model structures. Finally, we won first place in Track 1.1 of the ADD 2023 challenge with a weighted deception success rate (WDSR) of 44.97%.
[ { "created": "Mon, 3 Jul 2023 03:21:23 GMT", "version": "v1" } ]
2023-07-04
[ [ "Zhao", "Sheng", "" ], [ "Yuan", "Qilong", "" ], [ "Duan", "Yibo", "" ], [ "Chen", "Zhuoyue", "" ] ]
The task of synthetic speech generation is to generate spoken content from a given text and thereby simulate a fake human voice. The key factors that determine the quality of synthetic speech generation include the speed of generation, the accuracy of word segmentation, and the naturalness of the synthesized speech. This paper builds an end-to-end multi-module synthetic speech generation model comprising a speaker encoder, a synthesizer based on Tacotron2, and a vocoder based on WaveRNN. In addition, we perform extensive comparative experiments on different datasets and various model structures. Finally, we won first place in Track 1.1 of the ADD 2023 challenge with a weighted deception success rate (WDSR) of 44.97%.
1601.04621
Benjamin Chamberlain
Benjamin Paul Chamberlain, Clive Humby, Marc Peter Deisenroth
Probabilistic Inference of Twitter Users' Age based on What They Follow
9 pages, 9 figures
null
null
null
cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Twitter provides an open and rich source of data for studying human behaviour at scale and is widely used in social and network sciences. However, a major criticism of Twitter data is that demographic information is largely absent. Enhancing Twitter data with user ages would advance our ability to study social network structures, information flows and the spread of contagions. Approaches toward age detection of Twitter users typically focus on specific properties of tweets, e.g., linguistic features, which are language dependent. In this paper, we devise a language-independent methodology for determining the age of Twitter users from data that is native to the Twitter ecosystem. The key idea is to use a Bayesian framework to generalise ground-truth age information from a few Twitter users to the entire network based on what/whom they follow. Our approach scales to inferring the age of 700 million Twitter accounts with high accuracy.
[ { "created": "Mon, 18 Jan 2016 17:40:56 GMT", "version": "v1" }, { "created": "Fri, 24 Feb 2017 15:02:37 GMT", "version": "v2" } ]
2017-02-27
[ [ "Chamberlain", "Benjamin Paul", "" ], [ "Humby", "Clive", "" ], [ "Deisenroth", "Marc Peter", "" ] ]
Twitter provides an open and rich source of data for studying human behaviour at scale and is widely used in social and network sciences. However, a major criticism of Twitter data is that demographic information is largely absent. Enhancing Twitter data with user ages would advance our ability to study social network structures, information flows and the spread of contagions. Approaches toward age detection of Twitter users typically focus on specific properties of tweets, e.g., linguistic features, which are language dependent. In this paper, we devise a language-independent methodology for determining the age of Twitter users from data that is native to the Twitter ecosystem. The key idea is to use a Bayesian framework to generalise ground-truth age information from a few Twitter users to the entire network based on what/whom they follow. Our approach scales to inferring the age of 700 million Twitter accounts with high accuracy.
2207.08860
Mathew Schwartz
Xun Zhang, Mathew Schwartz, Muhammad Usman, Petros Faloutsos, Mubbasir Kapadia
Optimizing Indoor Navigation Policies For Spatial Distancing
9 pages, 8 figures, conference-- simulation in architecture and urban design, in-cooperation with ACM SIGSIM
null
null
null
cs.MA cs.AI cs.GR cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we focus on modifying navigation policies to shape the movement patterns and directional guidance of occupants, who are represented as agents in a 3D simulation engine. We introduce a measure of spatial distancing between agents as a function of agent density (i.e., occupancy), and demonstrate an optimization method that improves this spatial distancing metric by modifying the navigation graph. Our optimization framework uses the metric as the objective function, with a hybrid approach combining a genetic algorithm and simulated annealing. We show that, within our framework, the simulation-optimization process can improve spatial distancing between agents by optimizing the navigation policies for a given indoor environment.
[ { "created": "Sat, 4 Jun 2022 21:57:22 GMT", "version": "v1" } ]
2022-07-20
[ [ "Zhang", "Xun", "" ], [ "Schwartz", "Mathew", "" ], [ "Usman", "Muhammad", "" ], [ "Faloutsos", "Petros", "" ], [ "Kapadia", "Mubbasir", "" ] ]
In this paper, we focus on modifying navigation policies to shape the movement patterns and directional guidance of occupants, who are represented as agents in a 3D simulation engine. We introduce a measure of spatial distancing between agents as a function of agent density (i.e., occupancy), and demonstrate an optimization method that improves this spatial distancing metric by modifying the navigation graph. Our optimization framework uses the metric as the objective function, with a hybrid approach combining a genetic algorithm and simulated annealing. We show that, within our framework, the simulation-optimization process can improve spatial distancing between agents by optimizing the navigation policies for a given indoor environment.
1610.09610
Ashish Sureka
Vidushi Chaudhary, Vishnu Agrawal and Ashish Sureka
An Experimental Study on the Learning Outcome of Teaching Elementary Level Children using Lego Mindstorms EV3 Robotics Education Kit
Extended version of the accepted and to be published paper in T4E 2016 The 8th IEEE International Conference on Technology for Education
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Skills like computational thinking, problem solving, handling complexity, teamwork and project management are essential for future careers and need to be taught to students at the elementary level. Computer programming knowledge and skills, experience with technology and conducting science and engineering experiments are also important for students at the elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach to and experiences in teaching such skills to several elementary-level children using the Lego Mindstorms EV3 robotics education kit. We describe our learning environment, consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge of basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrate that our teaching methodology, consisting of both the course content and the pedagogy, was effective in imparting the desired skills and knowledge to elementary-level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round, which is evidence of the effectiveness of the approach.
[ { "created": "Sun, 30 Oct 2016 07:03:02 GMT", "version": "v1" } ]
2016-11-01
[ [ "Chaudhary", "Vidushi", "" ], [ "Agrawal", "Vishnu", "" ], [ "Sureka", "Ashish", "" ] ]
Skills like computational thinking, problem solving, handling complexity, teamwork and project management are essential for future careers and need to be taught to students at the elementary level. Computer programming knowledge and skills, experience with technology and conducting science and engineering experiments are also important for students at the elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach to and experiences in teaching such skills to several elementary-level children using the Lego Mindstorms EV3 robotics education kit. We describe our learning environment, consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge of basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrate that our teaching methodology, consisting of both the course content and the pedagogy, was effective in imparting the desired skills and knowledge to elementary-level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round, which is evidence of the effectiveness of the approach.
1611.08951
Rodrigo de Lamare
C. T. Healy and R. C. de Lamare
Distributed Estimation for Adaptive Networks Based on Serial-Inspired Diffusion
8 figures
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed estimation and processing in networks modeled by graphs have received a great deal of interest recently, due to the benefits of decentralised processing in terms of performance and robustness to communications link failure between nodes of the network. Diffusion-based algorithms have been demonstrated to be among the most effective for distributed signal processing problems, through the combination of local node estimate updates and sharing of information with neighbour nodes through diffusion. In this work, we develop a serial-inspired approach based on message-passing strategies that provides a significant improvement in performance over prior art. The concept of serial processing in the graph has been successfully applied in sum-product based algorithms and here provides inspiration for an algorithm which makes use of the most up-to-date information in the graph in combination with the diffusion approach to offer improved performance.
[ { "created": "Mon, 28 Nov 2016 01:10:54 GMT", "version": "v1" } ]
2016-11-29
[ [ "Healy", "C. T.", "" ], [ "de Lamare", "R. C.", "" ] ]
Distributed estimation and processing in networks modeled by graphs have received a great deal of interest recently, due to the benefits of decentralised processing in terms of performance and robustness to communications link failure between nodes of the network. Diffusion-based algorithms have been demonstrated to be among the most effective for distributed signal processing problems, through the combination of local node estimate updates and sharing of information with neighbour nodes through diffusion. In this work, we develop a serial-inspired approach based on message-passing strategies that provides a significant improvement in performance over prior art. The concept of serial processing in the graph has been successfully applied in sum-product based algorithms and here provides inspiration for an algorithm which makes use of the most up-to-date information in the graph in combination with the diffusion approach to offer improved performance.
2305.00316
Liangzu Peng
Liangzu Peng, Paris V. Giampouras, Ren\'e Vidal
The Ideal Continual Learner: An Agent That Never Forgets
Accepted to ICML 2023
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may forget how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based, and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called Ideal Continual Learner (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify how rehearsal affects generalization. Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.
[ { "created": "Sat, 29 Apr 2023 18:06:14 GMT", "version": "v1" }, { "created": "Thu, 8 Jun 2023 03:39:48 GMT", "version": "v2" } ]
2023-06-09
[ [ "Peng", "Liangzu", "" ], [ "Giampouras", "Paris V.", "" ], [ "Vidal", "René", "" ] ]
The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may forget how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based, and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called Ideal Continual Learner (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify how rehearsal affects generalization. Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.
2101.09536
James Smith
James Smith, Jonathan Balloch, Yen-Chang Hsu, Zsolt Kira
Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer
Accepted by the 2021 International Joint Conference on Neural Networks (IJCNN 2021)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rehearsal is a critical component for class-incremental continual learning, yet it requires a substantial memory budget. Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm. Specifically, we explore and formalize a novel semi-supervised continual learning (SSCL) setting, where labeled data is scarce yet non-i.i.d. unlabeled data from the agent's environment is plentiful. Importantly, data distributions in the SSCL setting are realistic and therefore reflect object class correlations between, and among, the labeled and unlabeled data distributions. We show that a strategy built on pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation reduces forgetting in this setting. Our approach, DistillMatch, increases performance over the state-of-the-art by no less than 8.7% average task accuracy and up to 54.5% average task accuracy in SSCL CIFAR-100 experiments. Moreover, we demonstrate that DistillMatch can save up to 0.23 stored images per processed unlabeled image compared to the next best method, which saves only 0.08. Our results suggest that focusing on realistic correlated distributions is a significant new perspective, which accentuates the importance of leveraging the world's structure as a continual learning strategy.
[ { "created": "Sat, 23 Jan 2021 17:23:08 GMT", "version": "v1" }, { "created": "Thu, 6 May 2021 17:55:20 GMT", "version": "v2" } ]
2021-05-07
[ [ "Smith", "James", "" ], [ "Balloch", "Jonathan", "" ], [ "Hsu", "Yen-Chang", "" ], [ "Kira", "Zsolt", "" ] ]
Rehearsal is a critical component for class-incremental continual learning, yet it requires a substantial memory budget. Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm. Specifically, we explore and formalize a novel semi-supervised continual learning (SSCL) setting, where labeled data is scarce yet non-i.i.d. unlabeled data from the agent's environment is plentiful. Importantly, data distributions in the SSCL setting are realistic and therefore reflect object class correlations between, and among, the labeled and unlabeled data distributions. We show that a strategy built on pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation reduces forgetting in this setting. Our approach, DistillMatch, increases performance over the state-of-the-art by no less than 8.7% average task accuracy and up to 54.5% average task accuracy in SSCL CIFAR-100 experiments. Moreover, we demonstrate that DistillMatch can save up to 0.23 stored images per processed unlabeled image compared to the next best method which only saves 0.08. Our results suggest that focusing on realistic correlated distributions is a significant new perspective, which accentuates the importance of leveraging the world's structure as a continual learning strategy.
2206.01934
Phan Hoang
Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung
Stochastic Multiple Target Sampling Gradient Descent
Accepted to Advances in Neural Information Processing Systems (NeurIPS) 2022. 27 pages, 10 figures, 5 tables
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem. A natural question then arises: "Can we derive a probabilistic version of the multi-objective optimization?". To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), enabling us to sample from multiple unnormalized target distributions. Specifically, our MT-SGD conducts a flow of intermediate distributions gradually orienting to multiple target distributions, which allows the sampled particles to move to the joint high-likelihood region of the target distributions. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning.
[ { "created": "Sat, 4 Jun 2022 07:54:35 GMT", "version": "v1" }, { "created": "Sun, 12 Jun 2022 17:19:29 GMT", "version": "v2" }, { "created": "Fri, 23 Sep 2022 03:00:25 GMT", "version": "v3" }, { "created": "Fri, 10 Feb 2023 16:43:01 GMT", "version": "v4" } ]
2023-02-13
[ [ "Phan", "Hoang", "" ], [ "Tran", "Ngoc", "" ], [ "Le", "Trung", "" ], [ "Tran", "Toan", "" ], [ "Ho", "Nhat", "" ], [ "Phung", "Dinh", "" ] ]
Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem. A natural question then arises: "Can we derive a probabilistic version of the multi-objective optimization?". To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), enabling us to sample from multiple unnormalized target distributions. Specifically, our MT-SGD conducts a flow of intermediate distributions gradually orienting to multiple target distributions, which allows the sampled particles to move to the joint high-likelihood region of the target distributions. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning.
1703.10187
Mohamed El Massad
Mohamed El Massad, Jun Zhang, Siddharth Garg and Mahesh V. Tripunitara
Logic Locking for Secure Outsourced Chip Fabrication: A New Attack and Provably Secure Defense Mechanism
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chip designers outsource chip fabrication to external foundries, but at the risk of IP theft. Logic locking, a promising solution to mitigate this threat, adds extra logic gates (key gates) and inputs (key bits) to the chip so that it functions correctly only when the correct key, known only to the designer but not the foundry, is applied. In this paper, we identify a new vulnerability in all existing logic locking schemes. Prior attacks on logic locking have assumed that, in addition to the design of the locked chip, the attacker has access to a working copy of the chip. Our attack does not require a working copy and yet we successfully recover a significant fraction of key bits from the design of the locked chip only. Empirically, we demonstrate the success of our attack on eight large benchmark circuits from a benchmark suite that has been tailored specifically for logic synthesis research, for two different logic locking schemes. Then, to address this vulnerability, we initiate the study of provably secure logic locking mechanisms. We formalize, for the first time to our knowledge, a precise notion of security for logic locking. We establish that any locking procedure that is secure under our definition is guaranteed to counter our desynthesis attack, and all other such known attacks. We then devise a new logic locking procedure, Meerkat, that guarantees that the locked chip reveals no information about the key or the designer's intended functionality. A main insight behind Meerkat is that canonical representations of Boolean functionality via Reduced Ordered Binary Decision Diagrams (ROBDDs) can be leveraged effectively to provide security. We analyze Meerkat with regard to its security properties and the overhead it incurs. As such, our work is a contribution to both the foundations and practice of securing digital ICs.
[ { "created": "Wed, 29 Mar 2017 18:17:55 GMT", "version": "v1" } ]
2017-03-31
[ [ "Massad", "Mohamed El", "" ], [ "Zhang", "Jun", "" ], [ "Garg", "Siddharth", "" ], [ "Tripunitara", "Mahesh V.", "" ] ]
Chip designers outsource chip fabrication to external foundries, but at the risk of IP theft. Logic locking, a promising solution to mitigate this threat, adds extra logic gates (key gates) and inputs (key bits) to the chip so that it functions correctly only when the correct key, known only to the designer but not the foundry, is applied. In this paper, we identify a new vulnerability in all existing logic locking schemes. Prior attacks on logic locking have assumed that, in addition to the design of the locked chip, the attacker has access to a working copy of the chip. Our attack does not require a working copy and yet we successfully recover a significant fraction of key bits from the design of the locked chip only. Empirically, we demonstrate the success of our attack on eight large benchmark circuits from a benchmark suite that has been tailored specifically for logic synthesis research, for two different logic locking schemes. Then, to address this vulnerability, we initiate the study of provably secure logic locking mechanisms. We formalize, for the first time to our knowledge, a precise notion of security for logic locking. We establish that any locking procedure that is secure under our definition is guaranteed to counter our desynthesis attack, and all other such known attacks. We then devise a new logic locking procedure, Meerkat, that guarantees that the locked chip reveals no information about the key or the designer's intended functionality. A main insight behind Meerkat is that canonical representations of Boolean functionality via Reduced Ordered Binary Decision Diagrams (ROBDDs) can be leveraged effectively to provide security. We analyze Meerkat with regard to its security properties and the overhead it incurs. As such, our work is a contribution to both the foundations and practice of securing digital ICs.
1301.1394
Michael Fink
Vladimir Lifschitz and Fangkai Yang
Lloyd-Topor Completion and General Stable Models
Proceedings of Answer Set Programming and Other Computing Paradigms (ASPOCP 2012), 5th International Workshop, September 4, 2012, Budapest, Hungary
null
null
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the relationship between the generalization of program completion defined in 1984 by Lloyd and Topor and the generalization of the stable model semantics introduced recently by Ferraris et al. The main theorem can be used to characterize, in some cases, the general stable models of a logic program by a first-order formula. The proof uses Truszczynski's stable model semantics of infinitary propositional formulas.
[ { "created": "Tue, 8 Jan 2013 02:29:55 GMT", "version": "v1" } ]
2013-01-09
[ [ "Lifschitz", "Vladimir", "" ], [ "Yang", "Fangkai", "" ] ]
We investigate the relationship between the generalization of program completion defined in 1984 by Lloyd and Topor and the generalization of the stable model semantics introduced recently by Ferraris et al. The main theorem can be used to characterize, in some cases, the general stable models of a logic program by a first-order formula. The proof uses Truszczynski's stable model semantics of infinitary propositional formulas.
2311.01115
Lara Ost
Sebastiano Cultrera di Montesano, Herbert Edelsbrunner, Monika Henzinger, Lara Ost
Dynamically Maintaining the Persistent Homology of Time Series
Corrected the statement and proof of Theorem 5.2; added a missing edge-case to the anti-cancellation algorithm
null
null
null
cs.DS cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a dynamic data structure for maintaining the persistent homology of a time series of real numbers. The data structure supports local operations, including the insertion and deletion of an item and the cutting and concatenating of lists, each in time $O(\log n + k)$, in which $n$ counts the critical items and $k$ the changes in the augmented persistence diagram. To achieve this, we design a tailor-made tree structure with an unconventional representation, referred to as banana tree, which may be useful in its own right.
[ { "created": "Thu, 2 Nov 2023 09:41:49 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2024 12:26:47 GMT", "version": "v2" } ]
2024-07-03
[ [ "di Montesano", "Sebastiano Cultrera", "" ], [ "Edelsbrunner", "Herbert", "" ], [ "Henzinger", "Monika", "" ], [ "Ost", "Lara", "" ] ]
We present a dynamic data structure for maintaining the persistent homology of a time series of real numbers. The data structure supports local operations, including the insertion and deletion of an item and the cutting and concatenating of lists, each in time $O(\log n + k)$, in which $n$ counts the critical items and $k$ the changes in the augmented persistence diagram. To achieve this, we design a tailor-made tree structure with an unconventional representation, referred to as banana tree, which may be useful in its own right.
2401.12596
Hengjia Li
Hengjia Li, Yang Liu, Yuqi Lin, Zhanwei Zhang, Yibo Zhao, weihang Pan, Tu Zheng, Zheng Yang, Yuchun Jiang, Boxi Wu, Deng Cai
UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, generative domain adaptation has achieved remarkable progress, enabling us to adapt a pre-trained generator to a new target domain. However, existing methods simply adapt the generator to a single target domain and are limited to a single modality, either text-driven or image-driven. Moreover, they cannot maintain consistency with the source domain well, which impedes the inheritance of its diversity. In this paper, we propose UniHDA, a \textbf{unified} and \textbf{versatile} framework for generative hybrid domain adaptation with multi-modal references from multiple domains. We use a CLIP encoder to project multi-modal references into a unified embedding space and then linearly interpolate the direction vectors from multiple target domains to achieve hybrid domain adaptation. To ensure \textbf{consistency} with the source domain, we propose a novel cross-domain spatial structure (CSS) loss that maintains detailed spatial structure information between the source and target generators. Experiments show that the adapted generator can synthesise realistic images with various attribute compositions. Additionally, our framework is generator-agnostic and versatile to multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
[ { "created": "Tue, 23 Jan 2024 09:49:24 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 07:44:00 GMT", "version": "v2" } ]
2024-03-18
[ [ "Li", "Hengjia", "" ], [ "Liu", "Yang", "" ], [ "Lin", "Yuqi", "" ], [ "Zhang", "Zhanwei", "" ], [ "Zhao", "Yibo", "" ], [ "Pan", "weihang", "" ], [ "Zheng", "Tu", "" ], [ "Yang", "Zheng", "" ], [ "Jiang", "Yuchun", "" ], [ "Wu", "Boxi", "" ], [ "Cai", "Deng", "" ] ]
Recently, generative domain adaptation has achieved remarkable progress, enabling us to adapt a pre-trained generator to a new target domain. However, existing methods simply adapt the generator to a single target domain and are limited to a single modality, either text-driven or image-driven. Moreover, they cannot maintain consistency with the source domain well, which impedes the inheritance of its diversity. In this paper, we propose UniHDA, a \textbf{unified} and \textbf{versatile} framework for generative hybrid domain adaptation with multi-modal references from multiple domains. We use a CLIP encoder to project multi-modal references into a unified embedding space and then linearly interpolate the direction vectors from multiple target domains to achieve hybrid domain adaptation. To ensure \textbf{consistency} with the source domain, we propose a novel cross-domain spatial structure (CSS) loss that maintains detailed spatial structure information between the source and target generators. Experiments show that the adapted generator can synthesise realistic images with various attribute compositions. Additionally, our framework is generator-agnostic and versatile to multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
1702.03389
Bing Zeng
Bing Zeng, Liang Gao, Xinyu Li
Whale swarm algorithm for function optimization
8 pages, 5 figures
LNCS. volume 10361. ICIC 2017: pp 624-639
10.1007/978-3-319-63309-1_55
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of nature-inspired metaheuristic algorithms are being applied to real-world optimization problems, as they have some advantages over classical methods of numerical optimization. This paper proposes a new nature-inspired metaheuristic called the Whale Swarm Algorithm for function optimization, which is inspired by whales' behavior of communicating with each other via ultrasound for hunting. The proposed Whale Swarm Algorithm has been compared with several popular metaheuristic algorithms on comprehensive performance metrics. According to the experimental results, the Whale Swarm Algorithm has quite competitive performance when compared with other algorithms.
[ { "created": "Sat, 11 Feb 2017 06:39:38 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2017 12:53:54 GMT", "version": "v2" } ]
2017-08-10
[ [ "Zeng", "Bing", "" ], [ "Gao", "Liang", "" ], [ "Li", "Xinyu", "" ] ]
An increasing number of nature-inspired metaheuristic algorithms are being applied to real-world optimization problems, as they have some advantages over classical methods of numerical optimization. This paper proposes a new nature-inspired metaheuristic called the Whale Swarm Algorithm for function optimization, which is inspired by whales' behavior of communicating with each other via ultrasound for hunting. The proposed Whale Swarm Algorithm has been compared with several popular metaheuristic algorithms on comprehensive performance metrics. According to the experimental results, the Whale Swarm Algorithm has quite competitive performance when compared with other algorithms.
1705.00744
Ragav Venkatesan
Ragav Venkatesan, Hemanth Venkateswara, Sethuraman Panchanathan, Baoxin Li
A Strategy for an Uncompromising Incremental Learner
Under review at IEEE Transactions of Neural Networks and Learning Systems
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-class supervised learning systems require the knowledge of the entire range of labels they predict. Often when learnt incrementally, they suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either forces a part of the machine to not learn, or to retrain the machine with a selection of the historic data. While these hacks work to various degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data along with appropriate targets from past distributions. We call this technique phantom sampling. We show that phantom sampling helps avoid catastrophic forgetting during incremental learning. Using an implementation based on deep neural networks, we demonstrate that phantom sampling dramatically avoids catastrophic forgetting. We apply these strategies to competitive multi-class incremental learning of deep neural networks. Using various benchmark datasets and through our strategy, we demonstrate that strict incremental learning could be achieved. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.
[ { "created": "Tue, 2 May 2017 00:17:54 GMT", "version": "v1" }, { "created": "Mon, 17 Jul 2017 07:30:18 GMT", "version": "v2" } ]
2017-07-18
[ [ "Venkatesan", "Ragav", "" ], [ "Venkateswara", "Hemanth", "" ], [ "Panchanathan", "Sethuraman", "" ], [ "Li", "Baoxin", "" ] ]
Multi-class supervised learning systems require the knowledge of the entire range of labels they predict. Often when learnt incrementally, they suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either forces a part of the machine to not learn, or to retrain the machine with a selection of the historic data. While these hacks work to various degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data along with appropriate targets from past distributions. We call this technique phantom sampling. We show that phantom sampling helps avoid catastrophic forgetting during incremental learning. Using an implementation based on deep neural networks, we demonstrate that phantom sampling dramatically avoids catastrophic forgetting. We apply these strategies to competitive multi-class incremental learning of deep neural networks. Using various benchmark datasets and through our strategy, we demonstrate that strict incremental learning could be achieved. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.
2003.09554
Evangelia Gergatsouli
Evangelia Gergatsouli, Brendan Lucier, Christos Tzamos
Black-box Methods for Restoring Monotonicity
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many practical applications, heuristic or approximation algorithms are used to efficiently solve the task at hand. However, their solutions frequently do not satisfy natural monotonicity properties of optimal solutions. In this work we develop algorithms that are able to restore monotonicity in the parameters of interest. Specifically, given oracle access to a (possibly non-monotone) multi-dimensional real-valued function $f$, we provide an algorithm that restores monotonicity while degrading the expected value of the function by at most $\varepsilon$. The number of queries required is at most logarithmic in $1/\varepsilon$ and exponential in the number of parameters. We also give a lower bound showing that this exponential dependence is necessary. Finally, we obtain improved query complexity bounds for restoring the weaker property of $k$-marginal monotonicity. Under this property, every $k$-dimensional projection of the function $f$ is required to be monotone. The query complexity we obtain only scales exponentially with $k$.
[ { "created": "Sat, 21 Mar 2020 02:19:56 GMT", "version": "v1" } ]
2020-03-24
[ [ "Gergatsouli", "Evangelia", "" ], [ "Lucier", "Brendan", "" ], [ "Tzamos", "Christos", "" ] ]
In many practical applications, heuristic or approximation algorithms are used to efficiently solve the task at hand. However, their solutions frequently do not satisfy natural monotonicity properties of optimal solutions. In this work we develop algorithms that are able to restore monotonicity in the parameters of interest. Specifically, given oracle access to a (possibly non-monotone) multi-dimensional real-valued function $f$, we provide an algorithm that restores monotonicity while degrading the expected value of the function by at most $\varepsilon$. The number of queries required is at most logarithmic in $1/\varepsilon$ and exponential in the number of parameters. We also give a lower bound showing that this exponential dependence is necessary. Finally, we obtain improved query complexity bounds for restoring the weaker property of $k$-marginal monotonicity. Under this property, every $k$-dimensional projection of the function $f$ is required to be monotone. The query complexity we obtain only scales exponentially with $k$.
2112.13927
Yayun Du
Yayun Du, Andrew Miller, M. Khalid Jawed
Mechanics-based Analysis on Flagellated Robots
16 pages, 7 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the locomotion of soft robots in granular medium (GM) resulting from the elastic deformation of slender rods. A low-cost, rapidly fabricable robot inspired by the physiological structure of bacteria is presented. It consists of a rigid head, with a motor and batteries embedded, and multiple elastic rods (our model for flagella) to investigate locomotion in GM. The elastic flagella are rotated at one end by the motor, and they deform due to the drag from GM, propelling the robot. The external drag is determined by the flagellar shape, while the latter changes due to the competition between external loading and elastic forces. In this coupled fluid-structure interaction problem, we observe that increasing the number of flagella can decrease or increase the propulsive speed of the robot, depending on the physical parameters of the system. This nonlinearity in the functional relation between propulsion and the parameters of this simple robot motivates us to fundamentally analyze its mechanics using theory, numerical simulation, and experiments. We present a simple Euler-Bernoulli beam theory-based analytical framework that is capable of qualitatively capturing both cases. Theoretical prediction quantitatively matches experiments when the flagellar deformation is small. To account for the geometrically nonlinear deformation often encountered in soft robots and microbes, we implement a simulation framework that incorporates discrete differential geometry-based simulations of elastic rods, a resistive force theory-based model for drag, and a modified Stokes law for the hydrodynamics of the robot head. Comparison with experimental data indicates that the simulations can quantitatively predict robotic motion. Overall, the theoretical and numerical tools presented in this paper can shed light on the design and control of this class of articulated robots in granular or fluid media.
[ { "created": "Mon, 27 Dec 2021 22:40:51 GMT", "version": "v1" } ]
2021-12-30
[ [ "Du", "Yayun", "" ], [ "Miller", "Andrew", "" ], [ "Jawed", "M. Khalid", "" ] ]
We explore the locomotion of soft robots in granular medium (GM) resulting from the elastic deformation of slender rods. A low-cost, rapidly fabricable robot inspired by the physiological structure of bacteria is presented. It consists of a rigid head, with a motor and batteries embedded, and multiple elastic rods (our model for flagella) to investigate locomotion in GM. The elastic flagella are rotated at one end by the motor, and they deform due to the drag from GM, propelling the robot. The external drag is determined by the flagellar shape, while the latter changes due to the competition between external loading and elastic forces. In this coupled fluid-structure interaction problem, we observe that increasing the number of flagella can decrease or increase the propulsive speed of the robot, depending on the physical parameters of the system. This nonlinearity in the functional relation between propulsion and the parameters of this simple robot motivates us to fundamentally analyze its mechanics using theory, numerical simulation, and experiments. We present a simple Euler-Bernoulli beam theory-based analytical framework that is capable of qualitatively capturing both cases. Theoretical prediction quantitatively matches experiments when the flagellar deformation is small. To account for the geometrically nonlinear deformation often encountered in soft robots and microbes, we implement a simulation framework that incorporates discrete differential geometry-based simulations of elastic rods, a resistive force theory-based model for drag, and a modified Stokes law for the hydrodynamics of the robot head. Comparison with experimental data indicates that the simulations can quantitatively predict robotic motion. Overall, the theoretical and numerical tools presented in this paper can shed light on the design and control of this class of articulated robots in granular or fluid media.
2012.11150
Sungwon Han
Sungwon Park, Sungwon Han, Sundong Kim, Danu Kim, Sungkyu Park, Seunghoon Hong and Meeyoung Cha
Improving Unsupervised Image Clustering With Robust Learning
Accepted at CVPR2021
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised image clustering methods often introduce alternative objectives to indirectly train the model and are subject to faulty predictions and overconfident results. To overcome these challenges, the current research proposes an innovative model, RUC, that is inspired by robust learning. RUC's novelty lies in utilizing pseudo-labels of existing image clustering models as a noisy dataset that may include misclassified samples. Its retraining process can revise misaligned knowledge and alleviate the overconfidence problem in predictions. The model's flexible structure makes it possible to be used as an add-on module to other clustering methods and helps them achieve better performance on multiple datasets. Extensive experiments show that the proposed model can adjust the model confidence with better calibration and gain additional robustness against adversarial noise.
[ { "created": "Mon, 21 Dec 2020 07:02:11 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2021 15:36:14 GMT", "version": "v2" } ]
2021-03-30
[ [ "Park", "Sungwon", "" ], [ "Han", "Sungwon", "" ], [ "Kim", "Sundong", "" ], [ "Kim", "Danu", "" ], [ "Park", "Sungkyu", "" ], [ "Hong", "Seunghoon", "" ], [ "Cha", "Meeyoung", "" ] ]
Unsupervised image clustering methods often introduce alternative objectives to indirectly train the model and are subject to faulty predictions and overconfident results. To overcome these challenges, the current research proposes an innovative model, RUC, that is inspired by robust learning. RUC's novelty lies in utilizing pseudo-labels of existing image clustering models as a noisy dataset that may include misclassified samples. Its retraining process can revise misaligned knowledge and alleviate the overconfidence problem in predictions. The model's flexible structure makes it possible to be used as an add-on module to other clustering methods and helps them achieve better performance on multiple datasets. Extensive experiments show that the proposed model can adjust the model confidence with better calibration and gain additional robustness against adversarial noise.
2110.11525
Jeremy Speth
Jeremy Speth, Nathan Vance, Patrick Flynn, Kevin W. Bowyer, Adam Czajka
Digital and Physical-World Attacks on Remote Pulse Detection
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remote photoplethysmography (rPPG) is a technique for estimating blood volume changes from reflected light without the need for a contact sensor. We present the first examples of presentation attacks in the digital and physical domains on rPPG from face video. Digital attacks are easily performed by adding imperceptible periodic noise to the input videos. Physical attacks are performed with illumination from visible spectrum LEDs placed in close proximity to the face, while still being difficult to perceive with the human eye. We also show that our attacks extend beyond medical applications, since the method can effectively generate a strong periodic pulse on 3D-printed face masks, which presents difficulties for pulse-based face presentation attack detection (PAD). The paper concludes with ideas for using this work to improve robustness of rPPG methods and pulse-based face PAD.
[ { "created": "Thu, 21 Oct 2021 23:41:27 GMT", "version": "v1" } ]
2021-10-25
[ [ "Speth", "Jeremy", "" ], [ "Vance", "Nathan", "" ], [ "Flynn", "Patrick", "" ], [ "Bowyer", "Kevin W.", "" ], [ "Czajka", "Adam", "" ] ]
Remote photoplethysmography (rPPG) is a technique for estimating blood volume changes from reflected light without the need for a contact sensor. We present the first examples of presentation attacks in the digital and physical domains on rPPG from face video. Digital attacks are easily performed by adding imperceptible periodic noise to the input videos. Physical attacks are performed with illumination from visible spectrum LEDs placed in close proximity to the face, while still being difficult to perceive with the human eye. We also show that our attacks extend beyond medical applications, since the method can effectively generate a strong periodic pulse on 3D-printed face masks, which presents difficulties for pulse-based face presentation attack detection (PAD). The paper concludes with ideas for using this work to improve robustness of rPPG methods and pulse-based face PAD.
2207.10257
Jeong-Gi Kwak
Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han, Hanseok Ko
Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis
ECCV 2022, project page: https://jgkwak95.github.io/surfgan/
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Over the years, 2D GANs have achieved great successes in photorealistic portrait generation. However, they lack 3D understanding in the generation process, thus they suffer from the multi-view inconsistency problem. To alleviate the issue, many 3D-aware GANs have been proposed and shown notable results, but 3D GANs struggle with editing semantic attributes. The controllability and interpretability of 3D GANs have not been much explored. In this work, we propose two solutions to overcome these weaknesses of 2D GANs and 3D-aware GANs. We first introduce a novel 3D-aware GAN, SURF-GAN, which is capable of discovering semantic attributes during training and controlling them in an unsupervised manner. After that, we inject the prior of SURF-GAN into StyleGAN to obtain a high-fidelity 3D-controllable generator. Unlike existing latent-based methods allowing implicit pose control, the proposed 3D-controllable StyleGAN enables explicit pose control over portrait generation. This distillation allows direct compatibility between 3D control and many StyleGAN-based techniques (e.g., inversion and stylization), and also brings an advantage in terms of computational resources. Our codes are available at https://github.com/jgkwak95/SURF-GAN.
[ { "created": "Thu, 21 Jul 2022 01:41:54 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2022 07:27:35 GMT", "version": "v2" } ]
2022-07-27
[ [ "Kwak", "Jeong-gi", "" ], [ "Li", "Yuanming", "" ], [ "Yoon", "Dongsik", "" ], [ "Kim", "Donghyeon", "" ], [ "Han", "David", "" ], [ "Ko", "Hanseok", "" ] ]
Over the years, 2D GANs have achieved great successes in photorealistic portrait generation. However, they lack 3D understanding in the generation process, thus they suffer from the multi-view inconsistency problem. To alleviate the issue, many 3D-aware GANs have been proposed and shown notable results, but 3D GANs struggle with editing semantic attributes. The controllability and interpretability of 3D GANs have not been much explored. In this work, we propose two solutions to overcome these weaknesses of 2D GANs and 3D-aware GANs. We first introduce a novel 3D-aware GAN, SURF-GAN, which is capable of discovering semantic attributes during training and controlling them in an unsupervised manner. After that, we inject the prior of SURF-GAN into StyleGAN to obtain a high-fidelity 3D-controllable generator. Unlike existing latent-based methods allowing implicit pose control, the proposed 3D-controllable StyleGAN enables explicit pose control over portrait generation. This distillation allows direct compatibility between 3D control and many StyleGAN-based techniques (e.g., inversion and stylization), and also brings an advantage in terms of computational resources. Our codes are available at https://github.com/jgkwak95/SURF-GAN.
1712.07206
Edoardo Di Napoli
Davor Davidovi\'c, Diego Fabregat-Traver, Markus H\"ohnerbach, and Edoardo di Napoli
Accelerating the computation of FLAPW methods on heterogeneous architectures
22 pages, submitted to special issue of CCPE
null
10.1002/cpe.4905
null
cs.DC cs.CE cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legacy codes in computational science and engineering have been very successful in providing essential functionality to researchers. However, they are not capable of exploiting the massive parallelism provided by emerging heterogeneous architectures. The lack of portable performance and scalability puts them at high risk: either they evolve or they are doomed to disappear. One example of legacy code which would heavily benefit from a modern design is FLEUR, a software for electronic structure calculations. In previous work, the computational bottleneck of FLEUR was partially re-engineered to have a modular design that relies on standard building blocks, namely BLAS and LAPACK. In this paper, we demonstrate how the initial redesign enables the portability to heterogeneous architectures. More specifically, we study different approaches to port the code to architectures consisting of multi-core CPUs equipped with one or more coprocessors such as Nvidia GPUs and Intel Xeon Phis. Our final code attains over 70\% of the architectures' peak performance, and outperforms Nvidia's and Intel's libraries. Finally, on JURECA, the supercomputer where FLEUR is often executed, the code takes advantage of the full power of the computing nodes, attaining $5\times$ speedup over the sole use of the CPUs.
[ { "created": "Tue, 19 Dec 2017 20:58:08 GMT", "version": "v1" } ]
2022-03-18
[ [ "Davidović", "Davor", "" ], [ "Fabregat-Traver", "Diego", "" ], [ "Höhnerbach", "Markus", "" ], [ "di Napoli", "Edoardo", "" ] ]
Legacy codes in computational science and engineering have been very successful in providing essential functionality to researchers. However, they are not capable of exploiting the massive parallelism provided by emerging heterogeneous architectures. The lack of portable performance and scalability puts them at high risk: either they evolve or they are doomed to disappear. One example of legacy code which would heavily benefit from a modern design is FLEUR, a software for electronic structure calculations. In previous work, the computational bottleneck of FLEUR was partially re-engineered to have a modular design that relies on standard building blocks, namely BLAS and LAPACK. In this paper, we demonstrate how the initial redesign enables the portability to heterogeneous architectures. More specifically, we study different approaches to port the code to architectures consisting of multi-core CPUs equipped with one or more coprocessors such as Nvidia GPUs and Intel Xeon Phis. Our final code attains over 70\% of the architectures' peak performance, and outperforms Nvidia's and Intel's libraries. Finally, on JURECA, the supercomputer where FLEUR is often executed, the code takes advantage of the full power of the computing nodes, attaining $5\times$ speedup over the sole use of the CPUs.
2207.14124
Peter Xenopoulos
Peter Xenopoulos, Claudio Silva
Graph Neural Networks to Predict Sports Outcomes
Accepted as a short paper (6 pages) to 2021 IEEE International Conference on Big Data
null
10.1109/BigData52589.2021.9671833
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting outcomes in sports is important for teams, leagues, bettors, media, and fans. Given the growing amount of player tracking data, sports analytics models are increasingly utilizing spatially-derived features built upon player tracking data. However, player-specific information, such as location, cannot readily be included as features themselves, since common modeling techniques rely on vector input. Accordingly, spatially-derived features are commonly constructed in relation to anchor objects, such as the distance to a ball or goal, through global feature aggregations, or via role-assignment schemes, where players are designated a distinct role in the game. In doing so, we sacrifice inter-player and local relationships in favor of global ones. To address this issue, we introduce a sport-agnostic graph-based representation of game states. We then use our proposed graph representation as input to graph neural networks to predict sports outcomes. Our approach preserves permutation invariance and allows for flexible player interaction weights. We demonstrate how our method provides statistically significant improvements over the state of the art for prediction tasks in both American football and esports, reducing test set loss by 9% and 20%, respectively. Additionally, we show how our model can be used to answer "what if" questions in sports and to visualize relationships between players.
[ { "created": "Thu, 28 Jul 2022 14:45:02 GMT", "version": "v1" } ]
2022-07-29
[ [ "Xenopoulos", "Peter", "" ], [ "Silva", "Claudio", "" ] ]
Predicting outcomes in sports is important for teams, leagues, bettors, media, and fans. Given the growing amount of player tracking data, sports analytics models are increasingly utilizing spatially-derived features built upon player tracking data. However, player-specific information, such as location, cannot readily be included as features themselves, since common modeling techniques rely on vector input. Accordingly, spatially-derived features are commonly constructed in relation to anchor objects, such as the distance to a ball or goal, through global feature aggregations, or via role-assignment schemes, where players are designated a distinct role in the game. In doing so, we sacrifice inter-player and local relationships in favor of global ones. To address this issue, we introduce a sport-agnostic graph-based representation of game states. We then use our proposed graph representation as input to graph neural networks to predict sports outcomes. Our approach preserves permutation invariance and allows for flexible player interaction weights. We demonstrate how our method provides statistically significant improvements over the state of the art for prediction tasks in both American football and esports, reducing test set loss by 9% and 20%, respectively. Additionally, we show how our model can be used to answer "what if" questions in sports and to visualize relationships between players.
1612.07928
Martianus Frederic Ezerman
Zuling Chang, Martianus Frederic Ezerman, San Ling and Huaxiong Wang
The Cycle Structure of LFSR with Arbitrary Characteristic Polynomial over Finite Fields
An extended abstract containing preliminary results was presented at SETA 2016
Cryptogr. Commun. vol 10 no. 6 pp. 1183-1202, 2018
10.1007/s12095-017-0273-2
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We determine the cycle structure of a linear feedback shift register with an arbitrary monic characteristic polynomial over any finite field. For each cycle, a method to find a state and a new way to represent the state are proposed.
[ { "created": "Fri, 23 Dec 2016 10:43:52 GMT", "version": "v1" } ]
2019-06-13
[ [ "Chang", "Zuling", "" ], [ "Ezerman", "Martianus Frederic", "" ], [ "Ling", "San", "" ], [ "Wang", "Huaxiong", "" ] ]
We determine the cycle structure of a linear feedback shift register with an arbitrary monic characteristic polynomial over any finite field. For each cycle, a method to find a state and a new way to represent the state are proposed.
2204.02973
Yanyong Huang
Yanyong Huang, Kejun Guo, Xiuwen Yi, Zhong Li, Tianrui Li
Incremental Unsupervised Feature Selection for Dynamic Incomplete Multi-view Data
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-view unsupervised feature selection has been proven to be efficient in reducing the dimensionality of multi-view unlabeled data with high dimensions. The previous methods assume all of the views are complete. However, in real applications, the multi-view data are often incomplete, i.e., some views of instances are missing, which will result in the failure of these methods. Besides, when the data arrive in the form of streams, these existing methods will suffer from the issues of high storage cost and expensive computation time. To address these issues, we propose an Incremental Incomplete Multi-view Unsupervised Feature Selection method (I$^2$MUFS) on incomplete multi-view streaming data. By jointly considering the consistent and complementary information across different views, I$^2$MUFS embeds the unsupervised feature selection into an extended weighted non-negative matrix factorization model, which can learn a consensus clustering indicator matrix and fuse different latent feature matrices with adaptive view weights. Furthermore, we introduce incremental learning mechanisms to develop an alternative iterative algorithm, where the feature selection matrix is incrementally updated, rather than recomputed on the entire updated data from scratch. A series of experiments are conducted to verify the effectiveness of the proposed method by comparing with several state-of-the-art methods. The experimental results demonstrate the effectiveness and efficiency of the proposed method in terms of the clustering metrics and the computational cost.
[ { "created": "Tue, 5 Apr 2022 16:29:39 GMT", "version": "v1" }, { "created": "Fri, 30 Dec 2022 09:59:37 GMT", "version": "v2" } ]
2023-01-02
[ [ "Huang", "Yanyong", "" ], [ "Guo", "Kejun", "" ], [ "Yi", "Xiuwen", "" ], [ "Li", "Zhong", "" ], [ "Li", "Tianrui", "" ] ]
Multi-view unsupervised feature selection has been proven to be efficient in reducing the dimensionality of multi-view unlabeled data with high dimensions. The previous methods assume all of the views are complete. However, in real applications, the multi-view data are often incomplete, i.e., some views of instances are missing, which will result in the failure of these methods. Besides, when the data arrive in the form of streams, these existing methods will suffer from the issues of high storage cost and expensive computation time. To address these issues, we propose an Incremental Incomplete Multi-view Unsupervised Feature Selection method (I$^2$MUFS) on incomplete multi-view streaming data. By jointly considering the consistent and complementary information across different views, I$^2$MUFS embeds the unsupervised feature selection into an extended weighted non-negative matrix factorization model, which can learn a consensus clustering indicator matrix and fuse different latent feature matrices with adaptive view weights. Furthermore, we introduce incremental learning mechanisms to develop an alternative iterative algorithm, where the feature selection matrix is incrementally updated, rather than recomputed on the entire updated data from scratch. A series of experiments are conducted to verify the effectiveness of the proposed method by comparing with several state-of-the-art methods. The experimental results demonstrate the effectiveness and efficiency of the proposed method in terms of the clustering metrics and the computational cost.
1204.3799
David Laniado
Pablo Arag\'on, Andreas Kaltenbrunner, David Laniado and Yana Volkovich
Biographical Social Networks on Wikipedia - A cross-cultural study of links that made history
4 pages, 3 figures
Proceedings of WikiSym, 2012
null
null
cs.SI cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is arguable whether history is made by great men and women or vice versa, but undoubtedly social connections shape history. Analysing Wikipedia, a global collective memory place, we aim to understand how social links are recorded across cultures. Starting with the set of biographies in the English Wikipedia we focus on the networks of links between these biographical articles on the 15 largest language Wikipedias. We detect the most central characters in these networks and point out culture-related peculiarities. Furthermore, we reveal remarkable similarities between distinct groups of language Wikipedias and highlight the shared knowledge about connections between persons across cultures.
[ { "created": "Tue, 17 Apr 2012 14:14:08 GMT", "version": "v1" }, { "created": "Wed, 4 Jul 2012 14:11:12 GMT", "version": "v2" } ]
2012-07-05
[ [ "Aragón", "Pablo", "" ], [ "Kaltenbrunner", "Andreas", "" ], [ "Laniado", "David", "" ], [ "Volkovich", "Yana", "" ] ]
It is arguable whether history is made by great men and women or vice versa, but undoubtedly social connections shape history. Analysing Wikipedia, a global collective memory place, we aim to understand how social links are recorded across cultures. Starting with the set of biographies in the English Wikipedia we focus on the networks of links between these biographical articles on the 15 largest language Wikipedias. We detect the most central characters in these networks and point out culture-related peculiarities. Furthermore, we reveal remarkable similarities between distinct groups of language Wikipedias and highlight the shared knowledge about connections between persons across cultures.
2104.10247
Ian Porada
Ian Porada, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung
Modeling Event Plausibility with Consistent Conceptual Abstraction
NAACL-HLT 2021
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events. While distributional models -- most recently pre-trained, Transformer language models -- have demonstrated improvements in modeling event plausibility, their performance still falls short of humans'. In this work, we show that Transformer-based plausibility models are markedly inconsistent across the conceptual classes of a lexical hierarchy, inferring that "a person breathing" is plausible while "a dentist breathing" is not, for example. We find this inconsistency persists even when models are softly injected with lexical knowledge, and we present a simple post-hoc method of forcing model consistency that improves correlation with human plausibility judgements.
[ { "created": "Tue, 20 Apr 2021 21:08:32 GMT", "version": "v1" } ]
2021-04-22
[ [ "Porada", "Ian", "" ], [ "Suleman", "Kaheer", "" ], [ "Trischler", "Adam", "" ], [ "Cheung", "Jackie Chi Kit", "" ] ]
Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events. While distributional models -- most recently pre-trained, Transformer language models -- have demonstrated improvements in modeling event plausibility, their performance still falls short of humans'. In this work, we show that Transformer-based plausibility models are markedly inconsistent across the conceptual classes of a lexical hierarchy, inferring that "a person breathing" is plausible while "a dentist breathing" is not, for example. We find this inconsistency persists even when models are softly injected with lexical knowledge, and we present a simple post-hoc method of forcing model consistency that improves correlation with human plausibility judgements.
1904.09936
Meera Hahn
Meera Hahn, Asim Kadav, James M. Rehg and Hans Peter Graf
Tripping through time: Efficient Localization of Activities in Videos
Presented at BMVC, 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In real-world applications of this approach, such as video surveillance, efficiency is a key system requirement. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos, by learning how to intelligently skip around the video. It extracts visual features for only a few frames to perform activity classification. In our evaluation over Charades-STA, ActivityNet Captions and the TACoS dataset, we find that TripNet achieves high accuracy and saves processing time by only looking at 32-41% of the entire video.
[ { "created": "Mon, 22 Apr 2019 15:53:13 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2019 18:41:21 GMT", "version": "v2" }, { "created": "Thu, 25 Apr 2019 17:06:49 GMT", "version": "v3" }, { "created": "Thu, 12 Sep 2019 17:49:05 GMT", "version": "v4" }, { "created": "Tue, 18 Aug 2020 16:56:23 GMT", "version": "v5" } ]
2020-08-19
[ [ "Hahn", "Meera", "" ], [ "Kadav", "Asim", "" ], [ "Rehg", "James M.", "" ], [ "Graf", "Hans Peter", "" ] ]
Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In real-world applications of this approach, such as video surveillance, efficiency is a key system requirement. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos, by learning how to intelligently skip around the video. It extracts visual features for only a few frames to perform activity classification. In our evaluation over Charades-STA, ActivityNet Captions and the TACoS dataset, we find that TripNet achieves high accuracy and saves processing time by only looking at 32-41% of the entire video.
2102.04361
Daniel Stan
Daniel Stan and Anthony Widjaja Lin
Regular Model Checking Approach to Knowledge Reasoning over Parameterized Systems (technical report)
Extended version, version of record accepted at the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-21)
null
null
null
cs.FL cs.LO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a general framework for modelling and verifying epistemic properties over parameterized multi-agent systems that communicate by truthful public announcements. In our framework, the number of agents or the amount of certain resources are parameterized (i.e. not known a priori), and the corresponding verification problem asks whether a given epistemic property is true regardless of the instantiation of the parameters. For example, in a muddy children puzzle, one could ask whether each child will eventually find out whether (s)he is muddy, regardless of the number of children. Our framework is regular model checking (RMC)-based, wherein synchronous finite-state automata (equivalently, monadic second-order logic over words) are used to specify the systems. We propose an extension of public announcement logic as specification language. Of special interest is the addition of the so-called iterated public announcement operators, which are crucial for reasoning about knowledge in parameterized systems. Although the operators make the model checking problem undecidable, we show that this becomes decidable when an appropriate "disappearance relation" is given. Further, we show how Angluin's L*-algorithm for learning finite automata can be applied to find a disappearance relation, which is guaranteed to terminate if it is regular. We have implemented the algorithm and apply this to such examples as the Muddy Children Puzzle, the Russian Card Problem, and Large Number Challenge.
[ { "created": "Mon, 8 Feb 2021 17:10:24 GMT", "version": "v1" }, { "created": "Wed, 17 Feb 2021 21:50:29 GMT", "version": "v2" }, { "created": "Mon, 8 Mar 2021 19:20:12 GMT", "version": "v3" } ]
2021-03-10
[ [ "Stan", "Daniel", "" ], [ "Lin", "Anthony Widjaja", "" ] ]
We present a general framework for modelling and verifying epistemic properties over parameterized multi-agent systems that communicate by truthful public announcements. In our framework, the number of agents or the amount of certain resources are parameterized (i.e. not known a priori), and the corresponding verification problem asks whether a given epistemic property is true regardless of the instantiation of the parameters. For example, in a muddy children puzzle, one could ask whether each child will eventually find out whether (s)he is muddy, regardless of the number of children. Our framework is regular model checking (RMC)-based, wherein synchronous finite-state automata (equivalently, monadic second-order logic over words) are used to specify the systems. We propose an extension of public announcement logic as specification language. Of special interest is the addition of the so-called iterated public announcement operators, which are crucial for reasoning about knowledge in parameterized systems. Although the operators make the model checking problem undecidable, we show that this becomes decidable when an appropriate "disappearance relation" is given. Further, we show how Angluin's L*-algorithm for learning finite automata can be applied to find a disappearance relation, which is guaranteed to terminate if it is regular. We have implemented the algorithm and apply this to such examples as the Muddy Children Puzzle, the Russian Card Problem, and Large Number Challenge.
2306.06264
Pouya Pezeshkpour
Pouya Pezeshkpour
Measuring and Modifying Factual Knowledge in Large Language Models
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) store an extensive amount of factual knowledge obtained from vast collections of text. To effectively utilize these models for downstream tasks, it is crucial to have reliable methods for measuring their knowledge. However, existing approaches for knowledge measurement have certain limitations, and despite recent efforts, they fail to provide accurate measurements and the necessary insights for modifying the knowledge within LLMs. In this work, we employ information theory-based measurements to provide a framework for estimating the factual knowledge contained within large language models. More specifically, we measure knowledge by analyzing the LLM's prediction probability distribution before and after instilling the target knowledge, employing metrics such as entropy and KL-divergence. Introducing our metrics, we first assess their accuracy in comparison to previous ranking-based methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we explore two prominent methods of knowledge instillation, discovering that LLMs exhibit limitations in capturing new knowledge under specific circumstances for one of these methods. Lastly, we demonstrate the applicability of our methods in extracting unlearned and mislearned facts in LLMs through their application to in-context learning. We make code and data for all methods and experiments in this paper publicly available.
[ { "created": "Fri, 9 Jun 2023 21:25:48 GMT", "version": "v1" } ]
2023-06-13
[ [ "Pezeshkpour", "Pouya", "" ] ]
Large Language Models (LLMs) store an extensive amount of factual knowledge obtained from vast collections of text. To effectively utilize these models for downstream tasks, it is crucial to have reliable methods for measuring their knowledge. However, existing approaches for knowledge measurement have certain limitations, and despite recent efforts, they fail to provide accurate measurements and the necessary insights for modifying the knowledge within LLMs. In this work, we employ information theory-based measurements to provide a framework for estimating the factual knowledge contained within large language models. More specifically, we measure knowledge by analyzing the LLM's prediction probability distribution before and after instilling the target knowledge, employing metrics such as entropy and KL-divergence. Introducing our metrics, we first assess their accuracy in comparison to previous ranking-based methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we explore two prominent methods of knowledge instillation, discovering that LLMs exhibit limitations in capturing new knowledge under specific circumstances for one of these methods. Lastly, we demonstrate the applicability of our methods in extracting unlearned and mislearned facts in LLMs through their application to in-context learning. We make code and data for all methods and experiments in this paper publicly available.
2304.00862
Pablo Dorta-Gonzalez
Pablo Dorta-Gonz\'alez and Mar\'ia Isabel Dorta-Gonz\'alez
The funding effect on citation and social attention: the UN Sustainable Development Goals (SDGs) as a case study
This article has been approved for publication in Online Information Review on March 13th, 2023 (25 pages, 4 figures, 5 tables)
null
10.1108/OIR-05-2022-0300
null
cs.DL stat.AP
http://creativecommons.org/licenses/by/4.0/
Purpose: Academic citation and social attention measure different dimensions in the impact of research results. We quantify the contribution of funding to both indicators considering the differences attributable to the research field and access type. Design/methodology/approach: Citation and social attention accumulated until the year 2021 of more than 367 thousand research articles published in the year 2018, are studied. We consider funding acknowledgements in the research articles. The data source is Dimensions and the units of study are research articles in the UN Sustainable Development Goals. Findings: Most cited goals by researchers do not coincide with those that arouse greater social attention. A small proportion of articles accumulates a large part of the citations and most of the social attention. Both citation and social attention grow with funding. Thus, funded research has a greater probability of being cited in academic articles and mentioned in social media. Funded research receives on average two to three times more citations and 2.5 to 4.5 times more social attention than unfunded research. Moreover, the open access modalities gold and hybrid have the greatest advantages in citation and social attention due to funding. Originality: The joint evaluation of the effect of both funding and open access on social attention. Research limitations: Specific topics were studied in a specific period. Studying other topics and/or different time periods might result in different findings. Practical implications: When funding to publish in open or hybrid access journals is not available, it is advisable to self-archive the pre-print or post-print version in a freely accessible repository. Social implications: Although cautiously, it is also advisable to consider the social impact of the research to complement the scientific impact in the evaluation of the research.
[ { "created": "Mon, 3 Apr 2023 10:22:29 GMT", "version": "v1" } ]
2023-04-04
[ [ "Dorta-González", "Pablo", "" ], [ "Dorta-González", "María Isabel", "" ] ]
Purpose: Academic citation and social attention measure different dimensions in the impact of research results. We quantify the contribution of funding to both indicators considering the differences attributable to the research field and access type. Design/methodology/approach: Citation and social attention accumulated until the year 2021 of more than 367 thousand research articles published in the year 2018, are studied. We consider funding acknowledgements in the research articles. The data source is Dimensions and the units of study are research articles in the UN Sustainable Development Goals. Findings: Most cited goals by researchers do not coincide with those that arouse greater social attention. A small proportion of articles accumulates a large part of the citations and most of the social attention. Both citation and social attention grow with funding. Thus, funded research has a greater probability of being cited in academic articles and mentioned in social media. Funded research receives on average two to three times more citations and 2.5 to 4.5 times more social attention than unfunded research. Moreover, the open access modalities gold and hybrid have the greatest advantages in citation and social attention due to funding. Originality: The joint evaluation of the effect of both funding and open access on social attention. Research limitations: Specific topics were studied in a specific period. Studying other topics and/or different time periods might result in different findings. Practical implications: When funding to publish in open or hybrid access journals is not available, it is advisable to self-archive the pre-print or post-print version in a freely accessible repository. Social implications: Although cautiously, it is also advisable to consider the social impact of the research to complement the scientific impact in the evaluation of the research.
2106.14016
Li Liu
Jianrong Wang, Nan Gu, Mei Yu, Xuewei Li, Qiang Fang, Li Liu
An Attention Self-supervised Contrastive Learning based Three-stage Model for Hand Shape Feature Representation in Cued Speech
null
null
null
null
cs.MM
http://creativecommons.org/publicdomain/zero/1.0/
Cued Speech (CS) is a communication system for deaf or hearing-impaired people, in which a speaker aids a lipreader at the phonetic level by clarifying potentially ambiguous mouth movements with hand shapes and positions. Feature extraction of multi-modal CS is a key step in CS recognition. Recent supervised deep learning based methods suffer from noisy CS data annotations, especially for the hand shape modality. In this work, we first propose a self-supervised contrastive learning method to learn the feature representation of images without using labels. Secondly, a small amount of manually annotated CS data is used to fine-tune the first module. Thirdly, we present a module which combines Bi-LSTM and self-attention networks to further learn sequential features with temporal and contextual information. Besides, to enlarge the volume and the diversity of the current limited CS datasets, we build a new British English dataset containing 5 native CS speakers. Evaluation results on both French and British English datasets show that our model achieves over 90% accuracy in hand shape recognition. Significant improvements of 8.75% (for French) and 10.09% (for British English) are achieved in CS phoneme recognition correctness compared with the state-of-the-art.
[ { "created": "Sat, 26 Jun 2021 13:20:33 GMT", "version": "v1" } ]
2021-06-29
[ [ "Wang", "Jianrong", "" ], [ "Gu", "Nan", "" ], [ "Yu", "Mei", "" ], [ "Li", "Xuewei", "" ], [ "Fang", "Qiang", "" ], [ "Liu", "Li", "" ] ]
Cued Speech (CS) is a communication system for deaf or hearing-impaired people, in which a speaker aids a lipreader at the phonetic level by clarifying potentially ambiguous mouth movements with hand shapes and positions. Feature extraction of multi-modal CS is a key step in CS recognition. Recent supervised deep learning based methods suffer from noisy CS data annotations, especially for the hand shape modality. In this work, we first propose a self-supervised contrastive learning method to learn the feature representation of images without using labels. Secondly, a small amount of manually annotated CS data is used to fine-tune the first module. Thirdly, we present a module which combines Bi-LSTM and self-attention networks to further learn sequential features with temporal and contextual information. Besides, to enlarge the volume and the diversity of the current limited CS datasets, we build a new British English dataset containing 5 native CS speakers. Evaluation results on both French and British English datasets show that our model achieves over 90% accuracy in hand shape recognition. Significant improvements of 8.75% (for French) and 10.09% (for British English) are achieved in CS phoneme recognition correctness compared with the state-of-the-art.
2408.08133
Victor Verreet
Victor Verreet, Lennert De Smet, Luc De Raedt, Emanuele Sansone
EXPLAIN, AGREE, LEARN: Scaling Learning for Neural Probabilistic Logic
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Neural probabilistic logic systems follow the neuro-symbolic (NeSy) paradigm by combining the perceptive and learning capabilities of neural networks with the robustness of probabilistic logic. Learning corresponds to likelihood optimization of the neural networks. However, to obtain the likelihood exactly, expensive probabilistic logic inference is required. To scale learning to more complex systems, we therefore propose to instead optimize a sampling based objective. We prove that the objective has a bounded error with respect to the likelihood, which vanishes when increasing the sample count. Furthermore, the error vanishes faster by exploiting a new concept of sample diversity. We then develop the EXPLAIN, AGREE, LEARN (EXAL) method that uses this objective. EXPLAIN samples explanations for the data. AGREE reweighs each explanation in concordance with the neural component. LEARN uses the reweighed explanations as a signal for learning. In contrast to previous NeSy methods, EXAL can scale to larger problem sizes while retaining theoretical guarantees on the error. Experimentally, our theoretical claims are verified and EXAL outperforms recent NeSy methods when scaling up the MNIST addition and Warcraft pathfinding problems.
[ { "created": "Thu, 15 Aug 2024 13:07:51 GMT", "version": "v1" } ]
2024-08-16
[ [ "Verreet", "Victor", "" ], [ "De Smet", "Lennert", "" ], [ "De Raedt", "Luc", "" ], [ "Sansone", "Emanuele", "" ] ]
Neural probabilistic logic systems follow the neuro-symbolic (NeSy) paradigm by combining the perceptive and learning capabilities of neural networks with the robustness of probabilistic logic. Learning corresponds to likelihood optimization of the neural networks. However, to obtain the likelihood exactly, expensive probabilistic logic inference is required. To scale learning to more complex systems, we therefore propose to instead optimize a sampling based objective. We prove that the objective has a bounded error with respect to the likelihood, which vanishes when increasing the sample count. Furthermore, the error vanishes faster by exploiting a new concept of sample diversity. We then develop the EXPLAIN, AGREE, LEARN (EXAL) method that uses this objective. EXPLAIN samples explanations for the data. AGREE reweighs each explanation in concordance with the neural component. LEARN uses the reweighed explanations as a signal for learning. In contrast to previous NeSy methods, EXAL can scale to larger problem sizes while retaining theoretical guarantees on the error. Experimentally, our theoretical claims are verified and EXAL outperforms recent NeSy methods when scaling up the MNIST addition and Warcraft pathfinding problems.
2305.13401
Haotian Ye
Haotian Ye, Yihong Liu, Hinrich Sch\"utze
A study of conceptual language similarity: comparison and evaluation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An interesting line of research in natural language processing (NLP) aims to incorporate linguistic typology to bridge linguistic diversity and assist the research of low-resource languages. While most works construct linguistic similarity measures based on lexical or typological features, such as word order and verbal inflection, recent work has introduced a novel approach to defining language similarity based on how they represent basic concepts, which is complementary to existing similarity measures. In this work, we study the conceptual similarity in detail and evaluate it extensively on a binary classification task.
[ { "created": "Mon, 22 May 2023 18:28:02 GMT", "version": "v1" } ]
2023-05-24
[ [ "Ye", "Haotian", "" ], [ "Liu", "Yihong", "" ], [ "Schütze", "Hinrich", "" ] ]
An interesting line of research in natural language processing (NLP) aims to incorporate linguistic typology to bridge linguistic diversity and assist the research of low-resource languages. While most works construct linguistic similarity measures based on lexical or typological features, such as word order and verbal inflection, recent work has introduced a novel approach to defining language similarity based on how they represent basic concepts, which is complementary to existing similarity measures. In this work, we study the conceptual similarity in detail and evaluate it extensively on a binary classification task.
1812.03237
Md Mehedi Hassan Onik
Md Mehedi Hassan Onik, Mahdi H. Miraz, Chul-Soo Kim
A Recruitment and Human Resource Management Technique Using Blockchain Technology for Industry 4.0
Onik, M. M. H., Miraz, M. H., & Kim, C. S. (2018, April). A recruitment and human resource management technique using Blockchain technology for Industry 4.0. In Proceedings of the Smart Cities Symposium (SCS-2018), Manama, Bahrain (pp. 11-16). IET
null
10.1049/cp.2018.1371
null
cs.CR cs.CY cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Application of Information Technology (IT) in the domain of Human Resource Management (HRM) systems is a sine qua non for any organization seeking to successfully adopt and implement the Fourth Industrial Revolution (Industry 4.0). However, these systems are required to ensure a non-biased, efficient, transparent and secure environment. Blockchain, a technology based on distributed digital ledgers, can help facilitate the process of successfully effectuating these specifications. A detailed literature review has been conducted to identify the current status of the usage of Information Technology in the domain of Human Resource Management and how Blockchain can help achieve a smart, cost-effective, efficient, transparent and secure factory management system. A Blockchain based Recruitment Management System (BcRMS) as well as a Blockchain based Human Resource Management System (BcHRMS) algorithm have been proposed. From the analysis of the results obtained through the case study, it is evident that the proposed system holds definite advantages compared to the existing recruitment systems. Future research directions have also been identified and advocated.
[ { "created": "Fri, 7 Dec 2018 23:09:06 GMT", "version": "v1" } ]
2019-02-13
[ [ "Onik", "Md Mehedi Hassan", "" ], [ "Miraz", "Mahdi H.", "" ], [ "Kim", "Chul-Soo", "" ] ]
Application of Information Technology (IT) in the domain of Human Resource Management (HRM) systems is a sine qua non for any organization seeking to successfully adopt and implement the Fourth Industrial Revolution (Industry 4.0). However, these systems are required to ensure a non-biased, efficient, transparent and secure environment. Blockchain, a technology based on distributed digital ledgers, can help facilitate the process of successfully effectuating these specifications. A detailed literature review has been conducted to identify the current status of the usage of Information Technology in the domain of Human Resource Management and how Blockchain can help achieve a smart, cost-effective, efficient, transparent and secure factory management system. A Blockchain based Recruitment Management System (BcRMS) as well as a Blockchain based Human Resource Management System (BcHRMS) algorithm have been proposed. From the analysis of the results obtained through the case study, it is evident that the proposed system holds definite advantages compared to the existing recruitment systems. Future research directions have also been identified and advocated.
2305.09696
Tianping Zhang
Tianping Zhang, Shaowen Wang, Shuicheng Yan, Jian Li, Qian Liu
Generative Table Pre-training Empowers Models for Tabular Prediction
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the topic of table pre-training has attracted considerable research interest. However, how to employ table pre-training to boost the performance of tabular prediction remains an open challenge. In this paper, we propose TapTap, the first attempt that leverages table pre-training to empower models for tabular prediction. After pre-training on a large corpus of real-world tabular data, TapTap can generate high-quality synthetic tables to support various applications on tabular data, including privacy protection, low resource regime, missing value imputation, and imbalanced classification. Extensive experiments on 12 datasets demonstrate that TapTap outperforms a total of 16 baselines in different scenarios. Meanwhile, it can be easily combined with various backbone models, including LightGBM, Multilayer Perceptron (MLP) and Transformer. Moreover, with the aid of table pre-training, models trained using synthetic data generated by TapTap can even compete with models using the original dataset on half of the experimental datasets, marking a milestone in the development of synthetic tabular data generation. The codes are available at https://github.com/ZhangTP1996/TapTap.
[ { "created": "Tue, 16 May 2023 06:37:38 GMT", "version": "v1" } ]
2023-05-18
[ [ "Zhang", "Tianping", "" ], [ "Wang", "Shaowen", "" ], [ "Yan", "Shuicheng", "" ], [ "Li", "Jian", "" ], [ "Liu", "Qian", "" ] ]
Recently, the topic of table pre-training has attracted considerable research interest. However, how to employ table pre-training to boost the performance of tabular prediction remains an open challenge. In this paper, we propose TapTap, the first attempt that leverages table pre-training to empower models for tabular prediction. After pre-training on a large corpus of real-world tabular data, TapTap can generate high-quality synthetic tables to support various applications on tabular data, including privacy protection, low resource regime, missing value imputation, and imbalanced classification. Extensive experiments on 12 datasets demonstrate that TapTap outperforms a total of 16 baselines in different scenarios. Meanwhile, it can be easily combined with various backbone models, including LightGBM, Multilayer Perceptron (MLP) and Transformer. Moreover, with the aid of table pre-training, models trained using synthetic data generated by TapTap can even compete with models using the original dataset on half of the experimental datasets, marking a milestone in the development of synthetic tabular data generation. The codes are available at https://github.com/ZhangTP1996/TapTap.
2011.05602
Zheng Zhu
Jintao Ke, Siyuan Feng, Zheng Zhu, Hai Yang, Jieping Ye
Joint predictions of multi-modal ride-hailing demands: a deep multi-task multigraph learning-based approach
null
null
10.1016/j.trc.2021.103063
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Ride-hailing platforms generally provide various service options to customers, such as solo ride services, shared ride services, etc. It is generally expected that demands for different service modes are correlated, and the prediction of demand for one service mode can benefit from historical observations of demands for other service modes. Moreover, an accurate joint prediction of demands for multiple service modes can help the platforms better allocate and dispatch vehicle resources. Although there is a large stream of literature on ride-hailing demand predictions for one specific service mode, little effort has been devoted to joint predictions of ride-hailing demands for multiple service modes. To address this issue, we propose a deep multi-task multi-graph learning approach, which combines two components: (1) multiple multi-graph convolutional (MGC) networks for predicting demands for different service modes, and (2) multi-task learning modules that enable knowledge sharing across multiple MGC networks. More specifically, two multi-task learning structures are established. The first one is the regularized cross-task learning, which builds cross-task connections among the inputs and outputs of multiple MGC networks. The second one is the multi-linear relationship learning, which imposes a prior tensor normal distribution on the weights of various MGC networks. Although there are no concrete bridges between different MGC networks, the weights of these networks are constrained by each other and subject to a common prior distribution. Evaluated with the for-hire-vehicle datasets in Manhattan, we show that our proposed approach outperforms the benchmark algorithms in prediction accuracy for different ride-hailing modes.
[ { "created": "Wed, 11 Nov 2020 07:10:50 GMT", "version": "v1" } ]
2022-04-27
[ [ "Ke", "Jintao", "" ], [ "Feng", "Siyuan", "" ], [ "Zhu", "Zheng", "" ], [ "Yang", "Hai", "" ], [ "Ye", "Jieping", "" ] ]
Ride-hailing platforms generally provide various service options to customers, such as solo ride services, shared ride services, etc. It is generally expected that demands for different service modes are correlated, and the prediction of demand for one service mode can benefit from historical observations of demands for other service modes. Moreover, an accurate joint prediction of demands for multiple service modes can help the platforms better allocate and dispatch vehicle resources. Although there is a large stream of literature on ride-hailing demand predictions for one specific service mode, little effort has been devoted to joint predictions of ride-hailing demands for multiple service modes. To address this issue, we propose a deep multi-task multi-graph learning approach, which combines two components: (1) multiple multi-graph convolutional (MGC) networks for predicting demands for different service modes, and (2) multi-task learning modules that enable knowledge sharing across multiple MGC networks. More specifically, two multi-task learning structures are established. The first one is the regularized cross-task learning, which builds cross-task connections among the inputs and outputs of multiple MGC networks. The second one is the multi-linear relationship learning, which imposes a prior tensor normal distribution on the weights of various MGC networks. Although there are no concrete bridges between different MGC networks, the weights of these networks are constrained by each other and subject to a common prior distribution. Evaluated with the for-hire-vehicle datasets in Manhattan, we show that our proposed approach outperforms the benchmark algorithms in prediction accuracy for different ride-hailing modes.
2104.13369
Michal Yarom
Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri
Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Accepted to ICCV 2021. Project page: https://explaining-in-style.github.io/, Code: https://github.com/google/explaining-in-style
null
null
null
cs.CV cs.LG cs.NE eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image classification models can depend on multiple different semantic attributes of the image. An explanation of the decision of the classifier needs to both discover and visualize these properties. Here we present StylEx, a method for doing this, by training a generative model to specifically explain multiple attributes that underlie classifier decisions. A natural source for such attributes is the StyleSpace of StyleGAN, which is known to generate semantically meaningful dimensions in the image. However, because standard GAN training is not dependent on the classifier, it may not represent these attributes which are important for the classifier decision, and the dimensions of StyleSpace may represent irrelevant attributes. To overcome this, we propose a training procedure for a StyleGAN, which incorporates the classifier model, in order to learn a classifier-specific StyleSpace. Explanatory attributes are then selected from this space. These can be used to visualize the effect of changing multiple attributes per image, thus providing image-specific explanations. We apply StylEx to multiple domains, including animals, leaves, faces and retinal images. For these, we show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantic ones, generate meaningful image-specific explanations, and are human-interpretable as measured in user-studies.
[ { "created": "Tue, 27 Apr 2021 17:57:19 GMT", "version": "v1" }, { "created": "Wed, 1 Sep 2021 08:04:54 GMT", "version": "v2" } ]
2021-09-02
[ [ "Lang", "Oran", "" ], [ "Gandelsman", "Yossi", "" ], [ "Yarom", "Michal", "" ], [ "Wald", "Yoav", "" ], [ "Elidan", "Gal", "" ], [ "Hassidim", "Avinatan", "" ], [ "Freeman", "William T.", "" ], [ "Isola", "Phillip", "" ], [ "Globerson", "Amir", "" ], [ "Irani", "Michal", "" ], [ "Mosseri", "Inbar", "" ] ]
Image classification models can depend on multiple different semantic attributes of the image. An explanation of the decision of the classifier needs to both discover and visualize these properties. Here we present StylEx, a method for doing this, by training a generative model to specifically explain multiple attributes that underlie classifier decisions. A natural source for such attributes is the StyleSpace of StyleGAN, which is known to generate semantically meaningful dimensions in the image. However, because standard GAN training is not dependent on the classifier, it may not represent these attributes which are important for the classifier decision, and the dimensions of StyleSpace may represent irrelevant attributes. To overcome this, we propose a training procedure for a StyleGAN, which incorporates the classifier model, in order to learn a classifier-specific StyleSpace. Explanatory attributes are then selected from this space. These can be used to visualize the effect of changing multiple attributes per image, thus providing image-specific explanations. We apply StylEx to multiple domains, including animals, leaves, faces and retinal images. For these, we show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantic ones, generate meaningful image-specific explanations, and are human-interpretable as measured in user-studies.
2003.02495
Mustafa Emara
Mustafa Emara, Miltiades C. Filippou, Dario Sabella
MEC-enhanced Information Freshness for Safety-critical C-V2X Communications
Accepted at ICC 2020, CLEEN workshop
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information freshness is a status update timeliness indicator of utmost importance to several real-time applications, such as connected and autonomous driving. The Age-of-Information (AoI) metric is widely considered useful to quantify the information freshness of delivered messages to the involved entities. Recently, the advent of Multi-access Edge Computing (MEC) promises several performance benefits for Vehicular-to-Everything (V2X) communications, emphasizing the experienced End-to-End (E2E) message delay. In this paper, we argue that, when it comes to safety-critical use cases, such as that of the Vulnerable Road User (VRU), additional metrics can be more insightful to evaluate and address scalability issues in dense urban environments. In particular, the impact of the packet inter-arrival time on the timeliness of VRU messages arriving at nearby vehicles can be directly assessed by exploiting the AoI metric. For that purpose, assuming a MEC-enabled multi-VRU system setting, we model the AoI and, by means of a performance comparison to the state-of-the-art network architecture based on numerical evaluations, we provide evidence of the information freshness and system scalability enhancements offered by MEC infrastructure deployment for different system parameter settings involving a large number of connected entities.
[ { "created": "Thu, 5 Mar 2020 09:20:10 GMT", "version": "v1" } ]
2020-03-06
[ [ "Emara", "Mustafa", "" ], [ "Filippou", "Miltiades C.", "" ], [ "Sabella", "Dario", "" ] ]
Information freshness is a status update timeliness indicator of utmost importance to several real-time applications, such as connected and autonomous driving. The Age-of-Information (AoI) metric is widely considered useful to quantify the information freshness of delivered messages to the involved entities. Recently, the advent of Multi-access Edge Computing (MEC) promises several performance benefits for Vehicular-to-Everything (V2X) communications, emphasizing the experienced End-to-End (E2E) message delay. In this paper, we argue that, when it comes to safety-critical use cases, such as that of the Vulnerable Road User (VRU), additional metrics can be more insightful to evaluate and address scalability issues in dense urban environments. In particular, the impact of the packet inter-arrival time on the timeliness of VRU messages arriving at nearby vehicles can be directly assessed by exploiting the AoI metric. For that purpose, assuming a MEC-enabled multi-VRU system setting, we model the AoI and, by means of a performance comparison to the state-of-the-art network architecture based on numerical evaluations, we provide evidence of the information freshness and system scalability enhancements offered by MEC infrastructure deployment for different system parameter settings involving a large number of connected entities.
2306.01470
Ryo Karakida
Tomohiro Hayase, Ryo Karakida
Understanding MLP-Mixer as a Wide and Sparse MLP
Accepted in ICML 2024
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-layer perceptron (MLP) is a fundamental component of deep learning, and recent MLP-based architectures, especially the MLP-Mixer, have achieved significant empirical success. Nevertheless, our understanding of why and how the MLP-Mixer outperforms conventional MLPs remains largely unexplored. In this work, we reveal that sparseness is a key mechanism underlying the MLP-Mixers. First, the Mixers have an effective expression as a wider MLP with Kronecker-product weights, clarifying that the Mixers efficiently embody several sparseness properties explored in deep learning. In the case of linear layers, the effective expression elucidates an implicit sparse regularization caused by the model architecture and a hidden relation to Monarch matrices, which is also known as another form of sparse parameterization. Next, for general cases, we empirically demonstrate quantitative similarities between the Mixer and the unstructured sparse-weight MLPs. Following a guiding principle proposed by Golubeva, Neyshabur and Gur-Ari (2021), which fixes the number of connections and increases the width and sparsity, the Mixers can demonstrate improved performance.
[ { "created": "Fri, 2 Jun 2023 11:51:24 GMT", "version": "v1" }, { "created": "Mon, 6 May 2024 20:03:17 GMT", "version": "v2" } ]
2024-05-08
[ [ "Hayase", "Tomohiro", "" ], [ "Karakida", "Ryo", "" ] ]
Multi-layer perceptron (MLP) is a fundamental component of deep learning, and recent MLP-based architectures, especially the MLP-Mixer, have achieved significant empirical success. Nevertheless, our understanding of why and how the MLP-Mixer outperforms conventional MLPs remains largely unexplored. In this work, we reveal that sparseness is a key mechanism underlying the MLP-Mixers. First, the Mixers have an effective expression as a wider MLP with Kronecker-product weights, clarifying that the Mixers efficiently embody several sparseness properties explored in deep learning. In the case of linear layers, the effective expression elucidates an implicit sparse regularization caused by the model architecture and a hidden relation to Monarch matrices, which is also known as another form of sparse parameterization. Next, for general cases, we empirically demonstrate quantitative similarities between the Mixer and the unstructured sparse-weight MLPs. Following a guiding principle proposed by Golubeva, Neyshabur and Gur-Ari (2021), which fixes the number of connections and increases the width and sparsity, the Mixers can demonstrate improved performance.
2211.15557
Andy Applebaum
Melody Wolk, Andy Applebaum, Camron Dennler, Patrick Dwyer, Marina Moskowitz, Harold Nguyen, Nicole Nichols, Nicole Park, Paul Rachwalski, Frank Rau, Adrian Webster
Beyond CAGE: Investigating Generalization of Learned Autonomous Network Defense Policies
NeurIPS 2022 Workshop: Reinforcement Learning for Real Life
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in reinforcement learning (RL) have inspired new directions in intelligent automation of network defense. However, many of these advancements have either outpaced their application to network security or have not considered the challenges associated with implementing them in the real world. To understand these problems, this work evaluates several RL approaches implemented in the second edition of the CAGE Challenge, a public competition to build an autonomous network defender agent in a high-fidelity network simulator. Our approaches all build on the Proximal Policy Optimization (PPO) family of algorithms, and include hierarchical RL, action masking, custom training, and ensemble RL. We find that the ensemble RL technique performs strongest, outperforming our other models and taking second place in the competition. To understand applicability to real environments, we evaluate each method's ability to generalize to unseen networks and against an unknown attack strategy. In unseen environments, all of our approaches perform worse, with degradation varied based on the type of environmental change. Against an unknown attacker strategy, we found that our models had reduced overall performance even though the new strategy was less efficient than the ones our models trained on. Together, these results highlight promising research directions for autonomous network defense in the real world.
[ { "created": "Mon, 28 Nov 2022 17:01:24 GMT", "version": "v1" }, { "created": "Wed, 30 Nov 2022 14:35:42 GMT", "version": "v2" } ]
2022-12-01
[ [ "Wolk", "Melody", "" ], [ "Applebaum", "Andy", "" ], [ "Dennler", "Camron", "" ], [ "Dwyer", "Patrick", "" ], [ "Moskowitz", "Marina", "" ], [ "Nguyen", "Harold", "" ], [ "Nichols", "Nicole", "" ], [ "Park", "Nicole", "" ], [ "Rachwalski", "Paul", "" ], [ "Rau", "Frank", "" ], [ "Webster", "Adrian", "" ] ]
Advancements in reinforcement learning (RL) have inspired new directions in intelligent automation of network defense. However, many of these advancements have either outpaced their application to network security or have not considered the challenges associated with implementing them in the real world. To understand these problems, this work evaluates several RL approaches implemented in the second edition of the CAGE Challenge, a public competition to build an autonomous network defender agent in a high-fidelity network simulator. Our approaches all build on the Proximal Policy Optimization (PPO) family of algorithms, and include hierarchical RL, action masking, custom training, and ensemble RL. We find that the ensemble RL technique performs strongest, outperforming our other models and taking second place in the competition. To understand applicability to real environments, we evaluate each method's ability to generalize to unseen networks and against an unknown attack strategy. In unseen environments, all of our approaches perform worse, with degradation varied based on the type of environmental change. Against an unknown attacker strategy, we found that our models had reduced overall performance even though the new strategy was less efficient than the ones our models trained on. Together, these results highlight promising research directions for autonomous network defense in the real world.
2103.00383
Gautam Krishna
Gautam Krishna, Mason Carnahan, Shilpa Shamapant, Yashitha Surendranath, Saumya Jain, Arundhati Ghosh, Co Tran, Jose del R Millan and Ahmed H Tewfik
Brain Signals to Rescue Aphasia, Apraxia and Dysarthria Speech Recognition
Accepted to IEEE EMBC 2021
null
null
null
cs.SD cs.LG eess.AS q-bio.QM
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with aphasia, apraxia, and dysarthria speech. We demonstrate a significant decoding performance improvement by more than 50\% during test time for isolated speech recognition task and we also provide preliminary results indicating performance improvement for the more challenging continuous speech recognition task by utilizing EEG features. The results presented in this paper show the first step towards demonstrating the possibility of utilizing non-invasive neural signals to design a real-time robust speech prosthetic for stroke survivors recovering from aphasia, apraxia, and dysarthria. Our aphasia, apraxia, and dysarthria speech-EEG data set will be released to the public to help further advance this interesting and crucial research.
[ { "created": "Sun, 28 Feb 2021 03:27:02 GMT", "version": "v1" }, { "created": "Sun, 18 Jul 2021 00:02:25 GMT", "version": "v2" } ]
2021-07-20
[ [ "Krishna", "Gautam", "" ], [ "Carnahan", "Mason", "" ], [ "Shamapant", "Shilpa", "" ], [ "Surendranath", "Yashitha", "" ], [ "Jain", "Saumya", "" ], [ "Ghosh", "Arundhati", "" ], [ "Tran", "Co", "" ], [ "Millan", "Jose del R", "" ], [ "Tewfik", "Ahmed H", "" ] ]
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with aphasia, apraxia, and dysarthria speech. We demonstrate a significant decoding performance improvement of more than 50\% during test time for the isolated speech recognition task, and we also provide preliminary results indicating performance improvement for the more challenging continuous speech recognition task by utilizing EEG features. The results presented in this paper show the first step towards demonstrating the possibility of utilizing non-invasive neural signals to design a real-time robust speech prosthetic for stroke survivors recovering from aphasia, apraxia, and dysarthria. Our aphasia, apraxia, and dysarthria speech-EEG data set will be released to the public to help further advance this interesting and crucial research.
2008.13710
Eden Belouadah
Eden Belouadah, Adrian Popescu, Ioannis Kanellos
Initial Classifier Weights Replay for Memoryless Class Incremental Learning
Accepted in BMVC2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times. The most challenging setting requires a constant complexity of the deep model and an incremental model update without access to a bounded memory of past data. Then, the representations of past classes are strongly affected by catastrophic forgetting. To mitigate its negative effect, an adapted fine tuning which includes knowledge distillation is usually deployed. We propose a different approach based on a vanilla fine tuning backbone. It leverages initial classifier weights which provide a strong representation of past classes because they are trained with all class data. However, the magnitude of classifiers learned in different states varies and normalization is needed for a fair handling of all classes. Normalization is performed by standardizing the initial classifier weights, which are assumed to be normally distributed. In addition, a calibration of prediction scores is done by using state level statistics to further improve classification fairness. We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting. Results show that our method outperforms existing techniques by a large margin for large-scale datasets.
[ { "created": "Mon, 31 Aug 2020 16:18:12 GMT", "version": "v1" } ]
2020-09-01
[ [ "Belouadah", "Eden", "" ], [ "Popescu", "Adrian", "" ], [ "Kanellos", "Ioannis", "" ] ]
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times. The most challenging setting requires a constant complexity of the deep model and an incremental model update without access to a bounded memory of past data. In this setting, the representations of past classes are strongly affected by catastrophic forgetting. To mitigate its negative effect, an adapted fine-tuning which includes knowledge distillation is usually deployed. We propose a different approach based on a vanilla fine-tuning backbone. It leverages initial classifier weights which provide a strong representation of past classes because they are trained with all class data. However, the magnitude of classifiers learned in different states varies and normalization is needed for a fair handling of all classes. Normalization is performed by standardizing the initial classifier weights, which are assumed to be normally distributed. In addition, a calibration of prediction scores is done by using state-level statistics to further improve classification fairness. We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting. Results show that our method outperforms existing techniques by a large margin for large-scale datasets.
1709.08696
David Mestel
David Mestel
Widths of regular and context-free languages
22 pages
39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019)
10.4230/LIPIcs.FSTTCS.2019.49
null
cs.FL cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a partially-ordered finite alphabet $\Sigma$ and a language $L\subseteq \Sigma^*$, how large can an antichain in $L$ be (where $L$ is given the lexicographic ordering)? More precisely, since $L$ will in general be infinite, we should ask about the rate of growth of maximum antichains consisting of words of length $n$. This fundamental property of partial orders is known as the width, and in a companion work we show that the problem of computing the information leakage permitted by a deterministic interactive system modeled as a finite-state transducer can be reduced to the problem of computing the width of a certain regular language. In this paper, we show that if $L$ is regular then there is a dichotomy between polynomial and exponential antichain growth. We give a polynomial-time algorithm to distinguish the two cases, and to compute the order of polynomial growth, with the language specified as an NFA. For context-free languages we show that there is a similar dichotomy, but now the problem of distinguishing the two cases is undecidable. Finally, we generalise the lexicographic order to tree languages, and show that for regular tree languages there is a trichotomy between polynomial, exponential and doubly exponential antichain growth.
[ { "created": "Mon, 25 Sep 2017 19:45:03 GMT", "version": "v1" }, { "created": "Fri, 24 Nov 2017 14:31:01 GMT", "version": "v2" }, { "created": "Sat, 17 Nov 2018 05:19:14 GMT", "version": "v3" }, { "created": "Fri, 15 Feb 2019 10:06:16 GMT", "version": "v4" }, { "created": "Sat, 7 Dec 2019 16:01:18 GMT", "version": "v5" } ]
2019-12-10
[ [ "Mestel", "David", "" ] ]
Given a partially-ordered finite alphabet $\Sigma$ and a language $L\subseteq \Sigma^*$, how large can an antichain in $L$ be (where $L$ is given the lexicographic ordering)? More precisely, since $L$ will in general be infinite, we should ask about the rate of growth of maximum antichains consisting of words of length $n$. This fundamental property of partial orders is known as the width, and in a companion work we show that the problem of computing the information leakage permitted by a deterministic interactive system modeled as a finite-state transducer can be reduced to the problem of computing the width of a certain regular language. In this paper, we show that if $L$ is regular then there is a dichotomy between polynomial and exponential antichain growth. We give a polynomial-time algorithm to distinguish the two cases, and to compute the order of polynomial growth, with the language specified as an NFA. For context-free languages we show that there is a similar dichotomy, but now the problem of distinguishing the two cases is undecidable. Finally, we generalise the lexicographic order to tree languages, and show that for regular tree languages there is a trichotomy between polynomial, exponential and doubly exponential antichain growth.
2009.11142
Patrick Rodler
Patrick Rodler and Erich Teppan
The Scheduling Job-Set Optimization Problem: A Model-Based Diagnosis Approach
See also the online proceedings of the International Workshop on Principles of Diagnosis (DX-2020): http://www.dx-2020.org/papers/DX-2020_paper_18.pdf
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
A common issue for companies is that the volume of product orders may at times exceed the production capacity. We formally introduce two novel problems dealing with the question which orders to discard or postpone in order to meet certain (timeliness) goals, and try to approach them by means of model-based diagnosis. In thorough analyses, we identify many similarities of the introduced problems to diagnosis problems, but also reveal crucial idiosyncracies and outline ways to handle or leverage them. Finally, a proof-of-concept evaluation on industrial-scale problem instances from a well-known scheduling benchmark suite demonstrates that one of the two formalized problems can be well attacked by out-of-the-box model-based diagnosis tools.
[ { "created": "Wed, 23 Sep 2020 13:38:36 GMT", "version": "v1" }, { "created": "Thu, 4 Aug 2022 12:36:06 GMT", "version": "v2" } ]
2022-08-05
[ [ "Rodler", "Patrick", "" ], [ "Teppan", "Erich", "" ] ]
A common issue for companies is that the volume of product orders may at times exceed the production capacity. We formally introduce two novel problems dealing with the question of which orders to discard or postpone in order to meet certain (timeliness) goals, and try to approach them by means of model-based diagnosis. In thorough analyses, we identify many similarities of the introduced problems to diagnosis problems, but also reveal crucial idiosyncrasies and outline ways to handle or leverage them. Finally, a proof-of-concept evaluation on industrial-scale problem instances from a well-known scheduling benchmark suite demonstrates that one of the two formalized problems can be effectively addressed by out-of-the-box model-based diagnosis tools.
1704.05232
Ragesh Jaiswal
Anup Bhattacharya and Yoav Freund and Ragesh Jaiswal
On the k-Means/Median Cost Function
This update includes minor improvements and a new section on Dimension Estimation
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study the $k$-means cost function. Given a dataset $X \subseteq \mathbb{R}^d$ and an integer $k$, the goal of the Euclidean $k$-means problem is to find a set of $k$ centers $C \subseteq \mathbb{R}^d$ such that $\Phi(C, X) \equiv \sum_{x \in X} \min_{c \in C} ||x - c||^2$ is minimized. Let $\Delta(X,k) \equiv \min_{C \subseteq \mathbb{R}^d} \Phi(C, X)$ denote the cost of the optimal $k$-means solution. For any dataset $X$, $\Delta(X,k)$ decreases as $k$ increases. In this work, we try to understand this behaviour more precisely. For any dataset $X \subseteq \mathbb{R}^d$, integer $k \geq 1$, and a precision parameter $\varepsilon > 0$, let $L(X, k, \varepsilon)$ denote the smallest integer such that $\Delta(X, L(X, k, \varepsilon)) \leq \varepsilon \cdot \Delta(X,k)$. We show upper and lower bounds on this quantity. Our techniques generalize for the metric $k$-median problem in arbitrary metric spaces and we give bounds in terms of the doubling dimension of the metric. Finally, we observe that for any dataset $X$, we can compute a set $S$ of size $O \left(L(X, k, \varepsilon/c) \right)$ using $D^2$-sampling such that $\Phi(S,X) \leq \varepsilon \cdot \Delta(X,k)$ for some fixed constant $c$. We also discuss some applications of our bounds.
[ { "created": "Tue, 18 Apr 2017 08:34:34 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 06:36:13 GMT", "version": "v2" } ]
2021-09-10
[ [ "Bhattacharya", "Anup", "" ], [ "Freund", "Yoav", "" ], [ "Jaiswal", "Ragesh", "" ] ]
In this work, we study the $k$-means cost function. Given a dataset $X \subseteq \mathbb{R}^d$ and an integer $k$, the goal of the Euclidean $k$-means problem is to find a set of $k$ centers $C \subseteq \mathbb{R}^d$ such that $\Phi(C, X) \equiv \sum_{x \in X} \min_{c \in C} ||x - c||^2$ is minimized. Let $\Delta(X,k) \equiv \min_{C \subseteq \mathbb{R}^d} \Phi(C, X)$ denote the cost of the optimal $k$-means solution. For any dataset $X$, $\Delta(X,k)$ decreases as $k$ increases. In this work, we try to understand this behaviour more precisely. For any dataset $X \subseteq \mathbb{R}^d$, integer $k \geq 1$, and a precision parameter $\varepsilon > 0$, let $L(X, k, \varepsilon)$ denote the smallest integer such that $\Delta(X, L(X, k, \varepsilon)) \leq \varepsilon \cdot \Delta(X,k)$. We show upper and lower bounds on this quantity. Our techniques generalize for the metric $k$-median problem in arbitrary metric spaces and we give bounds in terms of the doubling dimension of the metric. Finally, we observe that for any dataset $X$, we can compute a set $S$ of size $O \left(L(X, k, \varepsilon/c) \right)$ using $D^2$-sampling such that $\Phi(S,X) \leq \varepsilon \cdot \Delta(X,k)$ for some fixed constant $c$. We also discuss some applications of our bounds.
2311.10089
Yaniv Taigman
Shelly Sheynin, Adam Polyak, Uriel Singer, Yuval Kirstain, Amit Zohar, Oron Ashual, Devi Parikh, Yaniv Taigman
Emu Edit: Precise Image Editing via Recognition and Generation Tasks
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instruction-based image editing holds immense potential for a variety of applications, as it enables users to perform any editing operation using a natural language instruction. However, current models in this domain often struggle with accurately executing user instructions. We present Emu Edit, a multi-task image editing model which sets state-of-the-art results in instruction-based image editing. To develop Emu Edit we train it to multi-task across an unprecedented range of tasks, such as region-based editing, free-form editing, and Computer Vision tasks, all of which are formulated as generative tasks. Additionally, to enhance Emu Edit's multi-task learning abilities, we provide it with learned task embeddings which guide the generation process towards the correct edit type. Both these elements are essential for Emu Edit's outstanding performance. Furthermore, we show that Emu Edit can generalize to new tasks, such as image inpainting, super-resolution, and compositions of editing tasks, with just a few labeled examples. This capability offers a significant advantage in scenarios where high-quality samples are scarce. Lastly, to facilitate a more rigorous and informed assessment of instructable image editing models, we release a new challenging and versatile benchmark that includes seven different image editing tasks.
[ { "created": "Thu, 16 Nov 2023 18:55:58 GMT", "version": "v1" } ]
2023-11-17
[ [ "Sheynin", "Shelly", "" ], [ "Polyak", "Adam", "" ], [ "Singer", "Uriel", "" ], [ "Kirstain", "Yuval", "" ], [ "Zohar", "Amit", "" ], [ "Ashual", "Oron", "" ], [ "Parikh", "Devi", "" ], [ "Taigman", "Yaniv", "" ] ]
Instruction-based image editing holds immense potential for a variety of applications, as it enables users to perform any editing operation using a natural language instruction. However, current models in this domain often struggle with accurately executing user instructions. We present Emu Edit, a multi-task image editing model which sets state-of-the-art results in instruction-based image editing. To develop Emu Edit we train it to multi-task across an unprecedented range of tasks, such as region-based editing, free-form editing, and Computer Vision tasks, all of which are formulated as generative tasks. Additionally, to enhance Emu Edit's multi-task learning abilities, we provide it with learned task embeddings which guide the generation process towards the correct edit type. Both these elements are essential for Emu Edit's outstanding performance. Furthermore, we show that Emu Edit can generalize to new tasks, such as image inpainting, super-resolution, and compositions of editing tasks, with just a few labeled examples. This capability offers a significant advantage in scenarios where high-quality samples are scarce. Lastly, to facilitate a more rigorous and informed assessment of instructable image editing models, we release a new challenging and versatile benchmark that includes seven different image editing tasks.
2201.08296
Ian T Foster
Ian Foster and Carl Kesselman
CUF-Links: Continuous and Ubiquitous FAIRness Linkages for reproducible research
null
Computer, vol. 55, no. 8, pp. 20-30, Aug. 2022
10.1109/MC.2022.3160876
null
cs.SE cs.SI
http://creativecommons.org/licenses/by/4.0/
Despite much creative work on methods and tools, reproducibility -- the ability to repeat the computational steps used to obtain a research result -- remains elusive. One reason for these difficulties is that extant tools for capturing research processes do not align well with the rich working practices of scientists. We advocate here for simple mechanisms that can be integrated easily with current work practices to capture basic information about every data product consumed or produced in a project. We argue that by thus extending the scope of findable, accessible, interoperable, and reusable (FAIR) data in both time and space to enable the creation of a continuous chain of continuous and ubiquitous FAIRness linkages (CUF-Links) from inputs to outputs, such mechanisms can provide a strong foundation for documenting the provenance linkages that are essential to reproducible research. We give examples of mechanisms that can achieve these goals, and review how they have been applied in practice.
[ { "created": "Thu, 20 Jan 2022 17:03:37 GMT", "version": "v1" } ]
2022-08-30
[ [ "Foster", "Ian", "" ], [ "Kesselman", "Carl", "" ] ]
Despite much creative work on methods and tools, reproducibility -- the ability to repeat the computational steps used to obtain a research result -- remains elusive. One reason for these difficulties is that extant tools for capturing research processes do not align well with the rich working practices of scientists. We advocate here for simple mechanisms that can be integrated easily with current work practices to capture basic information about every data product consumed or produced in a project. We argue that by thus extending the scope of findable, accessible, interoperable, and reusable (FAIR) data in both time and space to enable the creation of a continuous chain of continuous and ubiquitous FAIRness linkages (CUF-Links) from inputs to outputs, such mechanisms can provide a strong foundation for documenting the provenance linkages that are essential to reproducible research. We give examples of mechanisms that can achieve these goals, and review how they have been applied in practice.
2403.19867
Hoa Vu
Huy Pham, Hoang Ta, Hoa T. Vu
Finding Decision Tree Splits in Streaming and Massively Parallel Models
null
null
null
null
cs.DS cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
In this work, we provide data stream algorithms that compute optimal splits in decision tree learning. In particular, given a data stream of observations $x_i$ and their labels $y_i$, the goal is to find the optimal split point $j$ that divides the data into two sets such that the mean squared error (for regression) or misclassification rate (for classification) is minimized. We provide various fast streaming algorithms that use sublinear space and a small number of passes for these problems. These algorithms can also be extended to the massively parallel computation model. Our work, while not directly comparable, complements the seminal work of Domingos and Hulten (KDD 2000).
[ { "created": "Thu, 28 Mar 2024 22:26:38 GMT", "version": "v1" }, { "created": "Wed, 17 Apr 2024 07:57:44 GMT", "version": "v2" } ]
2024-04-18
[ [ "Pham", "Huy", "" ], [ "Ta", "Hoang", "" ], [ "Vu", "Hoa T.", "" ] ]
In this work, we provide data stream algorithms that compute optimal splits in decision tree learning. In particular, given a data stream of observations $x_i$ and their labels $y_i$, the goal is to find the optimal split point $j$ that divides the data into two sets such that the mean squared error (for regression) or misclassification rate (for classification) is minimized. We provide various fast streaming algorithms that use sublinear space and a small number of passes for these problems. These algorithms can also be extended to the massively parallel computation model. Our work, while not directly comparable, complements the seminal work of Domingos and Hulten (KDD 2000).
1903.11960
Luca Franceschi
Luca Franceschi, Mathias Niepert, Massimiliano Pontil, Xiao He
Learning Discrete Structures for Graph Neural Networks
ICML 2019, code at https://github.com/lucfra/LDS - Revision of Sec. 3
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph-structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
[ { "created": "Thu, 28 Mar 2019 13:30:24 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2019 09:53:04 GMT", "version": "v2" }, { "created": "Fri, 17 May 2019 09:43:48 GMT", "version": "v3" }, { "created": "Fri, 19 Jun 2020 09:44:16 GMT", "version": "v4" } ]
2020-06-22
[ [ "Franceschi", "Luca", "" ], [ "Niepert", "Mathias", "" ], [ "Pontil", "Massimiliano", "" ], [ "He", "Xiao", "" ] ]
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
1409.1461
David Flatow
David Flatow, Mor Naaman, Ke Eddie Xie, Yana Volkovich, Yaron Kanza
On the Accuracy of Hyper-local Geotagging of Social Media Content
10 pages
null
null
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data- driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of hyper-local n-grams that appear in the text. We explore the trade-off between accuracy, precision and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is best to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to geotag short social media texts, and offer implications for all applications that use data-driven approaches to locate content.
[ { "created": "Thu, 4 Sep 2014 15:10:32 GMT", "version": "v1" }, { "created": "Sun, 1 Feb 2015 05:52:55 GMT", "version": "v2" } ]
2015-02-03
[ [ "Flatow", "David", "" ], [ "Naaman", "Mor", "" ], [ "Xie", "Ke Eddie", "" ], [ "Volkovich", "Yana", "" ], [ "Kanza", "Yaron", "" ] ]
Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of hyper-local n-grams that appear in the text. We explore the trade-off between accuracy, precision and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is best to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to geotag short social media texts, and offer implications for all applications that use data-driven approaches to locate content.
2103.16516
Aj Piergiovanni
AJ Piergiovanni and Michael S. Ryoo
Recognizing Actions in Videos from Unseen Viewpoints
null
CVPR 2021
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard methods for video recognition use large CNNs designed to capture spatio-temporal data. However, training these models requires a large amount of labeled training data, containing a wide variety of actions, scenes, settings and camera viewpoints. In this paper, we show that current convolutional neural network models are unable to recognize actions from camera viewpoints not present in their training data (i.e., unseen view action recognition). To address this, we develop approaches based on 3D representations and introduce a new geometric convolutional layer that can learn viewpoint invariant representations. Further, we introduce a new, challenging dataset for unseen view recognition and show the approaches ability to learn viewpoint invariant representations.
[ { "created": "Tue, 30 Mar 2021 17:17:54 GMT", "version": "v1" } ]
2021-03-31
[ [ "Piergiovanni", "AJ", "" ], [ "Ryoo", "Michael S.", "" ] ]
Standard methods for video recognition use large CNNs designed to capture spatio-temporal data. However, training these models requires a large amount of labeled training data, containing a wide variety of actions, scenes, settings and camera viewpoints. In this paper, we show that current convolutional neural network models are unable to recognize actions from camera viewpoints not present in their training data (i.e., unseen view action recognition). To address this, we develop approaches based on 3D representations and introduce a new geometric convolutional layer that can learn viewpoint invariant representations. Further, we introduce a new, challenging dataset for unseen view recognition and show the approach's ability to learn viewpoint invariant representations.
1203.2900
Dominique Duval
Jean-Guillaume Dumas (LJK), Dominique Duval (LJK), Laurent Fousse (LJK), Jean-Claude Reynaud (RC)
Decorated proofs for computational effects: Exceptions
11 pages
null
null
null
cs.LO math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a proof system for exceptions which is close to the syntax for exceptions, in the sense that the exceptions do not appear explicitly in the type of any expression. This proof system is sound with respect to the intended denotational semantics of exceptions. With this inference system we prove several properties of exceptions.
[ { "created": "Tue, 13 Mar 2012 19:21:55 GMT", "version": "v1" } ]
2012-03-15
[ [ "Dumas", "Jean-Guillaume", "", "LJK" ], [ "Duval", "Dominique", "", "LJK" ], [ "Fousse", "Laurent", "", "LJK" ], [ "Reynaud", "Jean-Claude", "", "RC" ] ]
We define a proof system for exceptions which is close to the syntax for exceptions, in the sense that the exceptions do not appear explicitly in the type of any expression. This proof system is sound with respect to the intended denotational semantics of exceptions. With this inference system we prove several properties of exceptions.
1312.5345
Wei-Cheng Liao
Wei-Cheng Liao, Mingyi Hong, Hamid Farmanbar, Xu Li, Zhi-Quan Luo, and Hang Zhang
Min Flow Rate Maximization for Software Defined Radio Access Networks
Submitted to JSAC special issue on 5G Wireless Communication Systems
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a heterogeneous network (HetNet) of base stations (BSs) connected via a backhaul network of routers and wired/wireless links with limited capacity. The optimal provision of such networks requires proper resource allocation across the radio access links in conjunction with appropriate traffic engineering within the backhaul network. In this paper we propose an efficient algorithm for joint resource allocation across the wireless links and the flow control within the backhaul network. The proposed algorithm, which maximizes the minimum rate among all the users and/or flows, is based on a decomposition approach that leverages both the Alternating Direction Method of Multipliers (ADMM) and the weighted-MMSE (WMMSE) algorithm. We show that this algorithm is easily parallelizable and converges globally to a stationary solution of the joint optimization problem. The proposed algorithm can also be extended to deal with per-flow quality of service constraint, or to networks with multi-antenna nodes.
[ { "created": "Wed, 18 Dec 2013 21:30:51 GMT", "version": "v1" } ]
2013-12-20
[ [ "Liao", "Wei-Cheng", "" ], [ "Hong", "Mingyi", "" ], [ "Farmanbar", "Hamid", "" ], [ "Li", "Xu", "" ], [ "Luo", "Zhi-Quan", "" ], [ "Zhang", "Hang", "" ] ]
We consider a heterogeneous network (HetNet) of base stations (BSs) connected via a backhaul network of routers and wired/wireless links with limited capacity. The optimal provision of such networks requires proper resource allocation across the radio access links in conjunction with appropriate traffic engineering within the backhaul network. In this paper we propose an efficient algorithm for joint resource allocation across the wireless links and the flow control within the backhaul network. The proposed algorithm, which maximizes the minimum rate among all the users and/or flows, is based on a decomposition approach that leverages both the Alternating Direction Method of Multipliers (ADMM) and the weighted-MMSE (WMMSE) algorithm. We show that this algorithm is easily parallelizable and converges globally to a stationary solution of the joint optimization problem. The proposed algorithm can also be extended to deal with per-flow quality-of-service constraints, or to networks with multi-antenna nodes.
2006.11719
Liu Yang
Zizhen Wang, Yixing Fan, Jiafeng Guo, Liu Yang, Ruqing Zhang, Yanyan Lan, Xueqi Cheng, Hui Jiang, Xiaozhao Wang
Match$^2$: A Matching over Matching Model for Similar Question Identification
Accepted by SIGIR 2020. 10 pages
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community Question Answering (CQA) has become a primary means for people to acquire knowledge, where people are free to ask questions or submit answers. To enhance the efficiency of the service, similar question identification becomes a core task in CQA which aims to find a similar question from the archived repository whenever a new question is asked. However, it has long been a challenge to properly measure the similarity between two questions due to the inherent variation of natural language, i.e., there could be different ways to ask a same question or different questions sharing similar expressions. To alleviate this problem, it is natural to involve the existing answers for the enrichment of the archived questions. Traditional methods typically take a one-side usage, which leverages the answer as some expanded representation of the corresponding question. Unfortunately, this may introduce unexpected noises into the similarity computation since answers are often long and diverse, leading to inferior performance. In this work, we propose a two-side usage, which leverages the answer as a bridge of the two questions. The key idea is based on our observation that similar questions could be addressed by similar parts of the answer while different questions may not. In other words, we can compare the matching patterns of the two questions over the same answer to measure their similarity. In this way, we propose a novel matching over matching model, namely Match$^2$, which compares the matching patterns between two question-answer pairs for similar question identification. Empirical experiments on two benchmark datasets demonstrate that our model can significantly outperform previous state-of-the-art methods on the similar question identification task.
[ { "created": "Sun, 21 Jun 2020 05:59:34 GMT", "version": "v1" } ]
2020-06-23
[ [ "Wang", "Zizhen", "" ], [ "Fan", "Yixing", "" ], [ "Guo", "Jiafeng", "" ], [ "Yang", "Liu", "" ], [ "Zhang", "Ruqing", "" ], [ "Lan", "Yanyan", "" ], [ "Cheng", "Xueqi", "" ], [ "Jiang", "Hui", "" ], [ "Wang", "Xiaozhao", "" ] ]
Community Question Answering (CQA) has become a primary means for people to acquire knowledge, where people are free to ask questions or submit answers. To enhance the efficiency of the service, similar question identification becomes a core task in CQA which aims to find a similar question from the archived repository whenever a new question is asked. However, it has long been a challenge to properly measure the similarity between two questions due to the inherent variation of natural language, i.e., there could be different ways to ask the same question or different questions sharing similar expressions. To alleviate this problem, it is natural to involve the existing answers for the enrichment of the archived questions. Traditional methods typically take a one-side usage, which leverages the answer as some expanded representation of the corresponding question. Unfortunately, this may introduce unexpected noise into the similarity computation since answers are often long and diverse, leading to inferior performance. In this work, we propose a two-side usage, which leverages the answer as a bridge between the two questions. The key idea is based on our observation that similar questions could be addressed by similar parts of the answer while different questions may not. In other words, we can compare the matching patterns of the two questions over the same answer to measure their similarity. In this way, we propose a novel matching over matching model, namely Match$^2$, which compares the matching patterns between two question-answer pairs for similar question identification. Empirical experiments on two benchmark datasets demonstrate that our model can significantly outperform previous state-of-the-art methods on the similar question identification task.
2407.14504
Bahram Jalali
Yiming Zhou, Callen MacPhee, Tingyi Zhou, Bahram Jalali
Nonlinear Schr\"odinger Network
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Deep neural networks (DNNs) have achieved exceptional performance across various fields by learning complex nonlinear mappings from large-scale datasets. However, they encounter challenges such as high computational costs and limited interpretability. To address these issues, hybrid approaches that integrate physics with AI are gaining interest. This paper introduces a novel physics-based AI model called the "Nonlinear Schr\"odinger Network", which treats the Nonlinear Schr\"odinger Equation (NLSE) as a general-purpose trainable model for learning complex patterns including nonlinear mappings and memory effects from data. Existing physics-informed machine learning methods use neural networks to approximate the solutions of partial differential equations (PDEs). In contrast, our approach directly treats the PDE as a trainable model to obtain general nonlinear mappings that would otherwise require neural networks. As a type of physics-AI symbiosis, it offers a more interpretable and parameter-efficient alternative to traditional black-box neural networks, achieving comparable or better accuracy in some time series classification tasks while significantly reducing the number of required parameters. Notably, the trained Nonlinear Schr\"odinger Network is interpretable, with all parameters having physical meanings as properties of a virtual physical system that transforms the data to a more separable space. This interpretability allows for insight into the underlying dynamics of the data transformation process. Applications to time series forecasting have also been explored. While our current implementation utilizes the NLSE, the proposed method of using physics equations as trainable models to learn nonlinear mappings from data is not limited to the NLSE and may be extended to other master equations of physics.
[ { "created": "Fri, 19 Jul 2024 17:58:00 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 04:33:55 GMT", "version": "v2" } ]
2024-07-25
[ [ "Zhou", "Yiming", "" ], [ "MacPhee", "Callen", "" ], [ "Zhou", "Tingyi", "" ], [ "Jalali", "Bahram", "" ] ]
Deep neural networks (DNNs) have achieved exceptional performance across various fields by learning complex nonlinear mappings from large-scale datasets. However, they encounter challenges such as high computational costs and limited interpretability. To address these issues, hybrid approaches that integrate physics with AI are gaining interest. This paper introduces a novel physics-based AI model called the "Nonlinear Schr\"odinger Network", which treats the Nonlinear Schr\"odinger Equation (NLSE) as a general-purpose trainable model for learning complex patterns including nonlinear mappings and memory effects from data. Existing physics-informed machine learning methods use neural networks to approximate the solutions of partial differential equations (PDEs). In contrast, our approach directly treats the PDE as a trainable model to obtain general nonlinear mappings that would otherwise require neural networks. As a type of physics-AI symbiosis, it offers a more interpretable and parameter-efficient alternative to traditional black-box neural networks, achieving comparable or better accuracy in some time series classification tasks while significantly reducing the number of required parameters. Notably, the trained Nonlinear Schr\"odinger Network is interpretable, with all parameters having physical meanings as properties of a virtual physical system that transforms the data to a more separable space. This interpretability allows for insight into the underlying dynamics of the data transformation process. Applications to time series forecasting have also been explored. While our current implementation utilizes the NLSE, the proposed method of using physics equations as trainable models to learn nonlinear mappings from data is not limited to the NLSE and may be extended to other master equations of physics.
2309.00184
Md Abu Sayed
Md Abu Sayed, Moqsadur Rahman, Mohammad Ariful Islam Khan, Deepak Tosh
A Survey of Network Requirements for Enabling Effective Cyber Deception
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the evolving landscape of cybersecurity, the utilization of cyber deception has gained prominence as a proactive defense strategy against sophisticated attacks. This paper presents a comprehensive survey that investigates the crucial network requirements essential for the successful implementation of effective cyber deception techniques. With a focus on diverse network architectures and topologies, we delve into the intricate relationship between network characteristics and the deployment of deception mechanisms. This survey provides an in-depth analysis of prevailing cyber deception frameworks, highlighting their strengths and limitations in meeting the requirements for optimal efficacy. By synthesizing insights from both theoretical and practical perspectives, we contribute to a comprehensive understanding of the network prerequisites crucial for enabling robust and adaptable cyber deception strategies.
[ { "created": "Fri, 1 Sep 2023 00:38:57 GMT", "version": "v1" }, { "created": "Fri, 27 Oct 2023 01:10:00 GMT", "version": "v2" }, { "created": "Mon, 8 Jan 2024 05:09:31 GMT", "version": "v3" } ]
2024-01-09
[ [ "Sayed", "Md Abu", "" ], [ "Rahman", "Moqsadur", "" ], [ "Khan", "Mohammad Ariful Islam", "" ], [ "Tosh", "Deepak", "" ] ]
In the evolving landscape of cybersecurity, the utilization of cyber deception has gained prominence as a proactive defense strategy against sophisticated attacks. This paper presents a comprehensive survey that investigates the crucial network requirements essential for the successful implementation of effective cyber deception techniques. With a focus on diverse network architectures and topologies, we delve into the intricate relationship between network characteristics and the deployment of deception mechanisms. This survey provides an in-depth analysis of prevailing cyber deception frameworks, highlighting their strengths and limitations in meeting the requirements for optimal efficacy. By synthesizing insights from both theoretical and practical perspectives, we contribute to a comprehensive understanding of the network prerequisites crucial for enabling robust and adaptable cyber deception strategies.
2209.05580
Joshua Ott
Joshua Ott, Sung-Kyun Kim, Amanda Bouman, Oriana Peltzer, Mamoru Sobue, Harrison Delecki, Mykel J. Kochenderfer, Joel Burdick, Ali-akbar Agha-mohammadi
Risk-aware Meta-level Decision Making for Exploration Under Uncertainty
IEEE International Conference on Control, Decision and Information Technologies
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Robotic exploration of unknown environments is fundamentally a problem of decision making under uncertainty where the robot must account for uncertainty in sensor measurements, localization, action execution, as well as many other factors. For large-scale exploration applications, autonomous systems must overcome the challenges of sequentially deciding which areas of the environment are valuable to explore while safely evaluating the risks associated with obstacles and hazardous terrain. In this work, we propose a risk-aware meta-level decision making framework to balance the tradeoffs associated with local and global exploration. Meta-level decision making builds upon classical hierarchical coverage planners by switching between local and global policies with the overall objective of selecting the policy that is most likely to maximize reward in a stochastic environment. We use information about the environment history, traversability risk, and kinodynamic constraints to reason about the probability of successful policy execution to switch between local and global policies. We have validated our solution in both simulation and on a variety of large-scale real world hardware tests. Our results show that by balancing local and global exploration we are able to significantly explore large-scale environments more efficiently.
[ { "created": "Mon, 12 Sep 2022 20:05:14 GMT", "version": "v1" }, { "created": "Sun, 10 Dec 2023 19:12:46 GMT", "version": "v2" }, { "created": "Tue, 30 Apr 2024 15:38:46 GMT", "version": "v3" } ]
2024-05-01
[ [ "Ott", "Joshua", "" ], [ "Kim", "Sung-Kyun", "" ], [ "Bouman", "Amanda", "" ], [ "Peltzer", "Oriana", "" ], [ "Sobue", "Mamoru", "" ], [ "Delecki", "Harrison", "" ], [ "Kochenderfer", "Mykel J.", "" ], [ "Burdick", "Joel", "" ], [ "Agha-mohammadi", "Ali-akbar", "" ] ]
Robotic exploration of unknown environments is fundamentally a problem of decision making under uncertainty where the robot must account for uncertainty in sensor measurements, localization, action execution, as well as many other factors. For large-scale exploration applications, autonomous systems must overcome the challenges of sequentially deciding which areas of the environment are valuable to explore while safely evaluating the risks associated with obstacles and hazardous terrain. In this work, we propose a risk-aware meta-level decision making framework to balance the tradeoffs associated with local and global exploration. Meta-level decision making builds upon classical hierarchical coverage planners by switching between local and global policies with the overall objective of selecting the policy that is most likely to maximize reward in a stochastic environment. We use information about the environment history, traversability risk, and kinodynamic constraints to reason about the probability of successful policy execution to switch between local and global policies. We have validated our solution in both simulation and on a variety of large-scale real world hardware tests. Our results show that by balancing local and global exploration we are able to significantly explore large-scale environments more efficiently.
2009.14759
Yuxuan Wu
Yuxuan Wu and Hideki Nakayama
Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network
Proceedings of the Asian Conference on Computer Vision, 2020
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Module Network (NMN) is a machine learning model for solving the visual question answering tasks. NMN uses programs to encode modules' structures, and its modularized architecture enables it to solve logical problems more reasonably. However, because of the non-differentiable procedure of module selection, NMN is hard to be trained end-to-end. To overcome this problem, existing work either included ground-truth program into training data or applied reinforcement learning to explore the program. However, both of these methods still have weaknesses. In consideration of this, we proposed a new learning framework for NMN. Graph-based Heuristic Search is the algorithm we proposed to discover the optimal program through a heuristic search on the data structure named Program Graph. Our experiments on FigureQA and CLEVR dataset show that our methods can realize the training of NMN without ground-truth programs and achieve superior efficiency over existing reinforcement learning methods in program exploration.
[ { "created": "Wed, 30 Sep 2020 15:55:44 GMT", "version": "v1" } ]
2020-11-30
[ [ "Wu", "Yuxuan", "" ], [ "Nakayama", "Hideki", "" ] ]
Neural Module Network (NMN) is a machine learning model for solving visual question answering tasks. NMN uses programs to encode modules' structures, and its modularized architecture enables it to solve logical problems more reasonably. However, because of the non-differentiable procedure of module selection, NMN is hard to train end-to-end. To overcome this problem, existing work either includes ground-truth programs in the training data or applies reinforcement learning to explore the program. However, both of these methods still have weaknesses. In consideration of this, we propose a new learning framework for NMN. Graph-based Heuristic Search is the algorithm we propose to discover the optimal program through a heuristic search on a data structure named the Program Graph. Our experiments on the FigureQA and CLEVR datasets show that our method can realize the training of NMN without ground-truth programs and achieves superior efficiency over existing reinforcement learning methods in program exploration.
1308.3693
Louis Francois Pau
L.-F. Pau
Business and social evaluation of denial of service attacks of communications networks in view of scaling economic counter-measures
null
The virtual battlefield : perspectives on cyber warfare , Cryptology and information security Series, Vol 3, IOS Press, Amsterdam, 2009, pp. 282-293
null
null
cs.CY cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper gives an analytical method to determine the economic and indirect implications of denial of service and distributed denial of service attacks. It is based on time preference dynamics applied to the monetary mass for the restoration of capabilities, on long term investments to rebuild capabilities, and of the usability level of the capabilities after an attack. A simple illustrative example is provided for a denial of service on a corporate data centre. The needed data collection methodologies are categorized by classes of targets. The use of the method is explained in the context of legal or policy driven dissuasive, retaliation or compensation/ restoration actions. A concrete set of deployment cases in mobile communications services is discussed. The conclusion includes policy recommendations as well as information exchange requirements.
[ { "created": "Tue, 13 Aug 2013 18:38:25 GMT", "version": "v1" } ]
2013-08-19
[ [ "Pau", "L. -F.", "" ] ]
This paper gives an analytical method to determine the economic and indirect implications of denial of service and distributed denial of service attacks. It is based on time preference dynamics applied to the monetary mass for the restoration of capabilities, on long term investments to rebuild capabilities, and on the usability level of the capabilities after an attack. A simple illustrative example is provided for a denial of service on a corporate data centre. The needed data collection methodologies are categorized by classes of targets. The use of the method is explained in the context of legal or policy driven dissuasive, retaliation or compensation/restoration actions. A concrete set of deployment cases in mobile communications services is discussed. The conclusion includes policy recommendations as well as information exchange requirements.
1705.01784
Changjun Wang
Zhigang Cao, Bo Chen, Xujin Chen, Changjun Wang
A Network Game of Dynamic Traffic
Extended Abstract in Proceedings of the 18th ACM Conference on Economics and Computation
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a network congestion game of discrete-time dynamic traffic of atomic agents with a single origin-destination pair. Any agent freely makes a dynamic decision at each vertex (e.g., road crossing) and traffic is regulated with given priorities on edges (e.g., road segments). We first constructively prove that there always exists a subgame perfect equilibrium (SPE) in this game. We then study the relationship between this model and a simplified model, in which agents select and fix an origin-destination path simultaneously. We show that the set of Nash equilibrium (NE) flows of the simplified model is a proper subset of the set of SPE flows of our main model. We prove that each NE is also a strong NE and hence weakly Pareto optimal. We establish several other nice properties of NE flows, including global First-In-First-Out. Then for two classes of networks, including series-parallel ones, we show that the queue lengths at equilibrium are bounded at any given instance, which means the price of anarchy of any given game instance is bounded, provided that the inflow size never exceeds the network capacity.
[ { "created": "Thu, 4 May 2017 10:33:03 GMT", "version": "v1" } ]
2017-05-05
[ [ "Cao", "Zhigang", "" ], [ "Chen", "Bo", "" ], [ "Chen", "Xujin", "" ], [ "Wang", "Changjun", "" ] ]
We study a network congestion game of discrete-time dynamic traffic of atomic agents with a single origin-destination pair. Any agent freely makes a dynamic decision at each vertex (e.g., road crossing) and traffic is regulated with given priorities on edges (e.g., road segments). We first constructively prove that there always exists a subgame perfect equilibrium (SPE) in this game. We then study the relationship between this model and a simplified model, in which agents select and fix an origin-destination path simultaneously. We show that the set of Nash equilibrium (NE) flows of the simplified model is a proper subset of the set of SPE flows of our main model. We prove that each NE is also a strong NE and hence weakly Pareto optimal. We establish several other nice properties of NE flows, including global First-In-First-Out. Then for two classes of networks, including series-parallel ones, we show that the queue lengths at equilibrium are bounded at any given instance, which means the price of anarchy of any given game instance is bounded, provided that the inflow size never exceeds the network capacity.
2302.09189
Teruaki Hayashi
Koike Hiroaki and Teruaki Hayashi
Extraction of Constituent Factors of Digestion Efficiency in Information Transfer by Media Composed of Texts and Images
This paper is the revised version of a paper presented at the 29th annual conference of the Natural Language Processing Society in Japan, originally in Japanese
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The development and spread of information and communication technologies have increased and diversified information. However, the increase in the volume and the selection of information does not necessarily promote understanding. In addition, conventional evaluations of information transfer have focused only on the arrival of information to the receivers. They need to sufficiently take into account the receivers' understanding of the information after it has been acquired, which is the original purpose of the evaluation. In this study, we propose the concept of "information digestion," which refers to the receivers' correct understanding of the acquired information, its contents, and its purpose. In the experiment, we proposed an evaluation model of information digestibility using hierarchical factor analysis and extracted factors that constitute digestibility by four types of media.
[ { "created": "Fri, 17 Feb 2023 23:45:02 GMT", "version": "v1" } ]
2023-02-21
[ [ "Hiroaki", "Koike", "" ], [ "Hayashi", "Teruaki", "" ] ]
The development and spread of information and communication technologies have increased and diversified information. However, the increase in the volume and the selection of information does not necessarily promote understanding. In addition, conventional evaluations of information transfer have focused only on the arrival of information at the receivers; they do not sufficiently take into account the receivers' understanding of the information after it has been acquired, which is the original purpose of the evaluation. In this study, we propose the concept of "information digestion," which refers to the receivers' correct understanding of the acquired information, its contents, and its purpose. In the experiment, we proposed an evaluation model of information digestibility using hierarchical factor analysis and extracted the factors that constitute digestibility for four types of media.
1507.07648
Rehan Abdul Aziz
Rehan Abdul Aziz and Geoffrey Chu and Christian Muise and Peter Stuckey
Projected Model Counting
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model counting is the task of computing the number of assignments to variables V that satisfy a given propositional theory F. Model counting is an essential tool in probabilistic reasoning. In this paper, we introduce the problem of model counting projected on a subset P of original variables that we call 'priority' variables. The task is to compute the number of assignments to P such that there exists an extension to 'non-priority' variables V\P that satisfies F. Projected model counting arises when some parts of the model are irrelevant to the counts, in particular when we require additional variables to model the problem we are counting in SAT. We discuss three different approaches to projected model counting (two of which are novel), and compare their performance on different benchmark problems. To appear in 18th International Conference on Theory and Applications of Satisfiability Testing, September 24-27, 2015, Austin, Texas, USA
[ { "created": "Tue, 28 Jul 2015 05:45:05 GMT", "version": "v1" } ]
2015-07-29
[ [ "Aziz", "Rehan Abdul", "" ], [ "Chu", "Geoffrey", "" ], [ "Muise", "Christian", "" ], [ "Stuckey", "Peter", "" ] ]
Model counting is the task of computing the number of assignments to variables V that satisfy a given propositional theory F. Model counting is an essential tool in probabilistic reasoning. In this paper, we introduce the problem of model counting projected on a subset P of original variables that we call 'priority' variables. The task is to compute the number of assignments to P such that there exists an extension to 'non-priority' variables V\P that satisfies F. Projected model counting arises when some parts of the model are irrelevant to the counts, in particular when we require additional variables to model the problem we are counting in SAT. We discuss three different approaches to projected model counting (two of which are novel), and compare their performance on different benchmark problems. To appear in the 18th International Conference on Theory and Applications of Satisfiability Testing, September 24-27, 2015, Austin, Texas, USA
1811.07958
Brent Griffin
Brent A. Griffin and Jason J. Corso
Tukey-Inspired Video Object Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the problem of strictly unsupervised video object segmentation, i.e., the separation of a primary object from background in video without a user-provided object mask or any training on an annotated dataset. We find foreground objects in low-level vision data using a John Tukey-inspired measure of "outlierness". This Tukey-inspired measure also estimates the reliability of each data source as video characteristics change (e.g., a camera starts moving). The proposed method achieves state-of-the-art results for strictly unsupervised video object segmentation on the challenging DAVIS dataset. Finally, we use a variant of the Tukey-inspired measure to combine the output of multiple segmentation methods, including those using supervision during training, runtime, or both. This collectively more robust method of segmentation improves the Jaccard measure of its constituent methods by as much as 28%.
[ { "created": "Mon, 19 Nov 2018 20:15:27 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2018 02:37:11 GMT", "version": "v2" } ]
2018-12-03
[ [ "Griffin", "Brent A.", "" ], [ "Corso", "Jason J.", "" ] ]
We investigate the problem of strictly unsupervised video object segmentation, i.e., the separation of a primary object from background in video without a user-provided object mask or any training on an annotated dataset. We find foreground objects in low-level vision data using a John Tukey-inspired measure of "outlierness". This Tukey-inspired measure also estimates the reliability of each data source as video characteristics change (e.g., a camera starts moving). The proposed method achieves state-of-the-art results for strictly unsupervised video object segmentation on the challenging DAVIS dataset. Finally, we use a variant of the Tukey-inspired measure to combine the output of multiple segmentation methods, including those using supervision during training, runtime, or both. This collectively more robust method of segmentation improves the Jaccard measure of its constituent methods by as much as 28%.
1705.11175
Nathanael Lemessa Baisa
Nathanael L. Baisa, Deepayan Bhowmik and Andrew Wallace
Long-term Correlation Tracking using Multi-layer Hybrid Features in Sparse and Dense Environments
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking a target of interest in both sparse and crowded environments is a challenging problem, not yet successfully addressed in the literature. In this paper, we propose a new long-term visual tracking algorithm, learning discriminative correlation filters and using an online classifier, to track a target of interest in both sparse and crowded video sequences. First, we learn a translation correlation filter using a multi-layer hybrid of convolutional neural networks (CNN) and traditional hand-crafted features. We combine advantages of both the lower convolutional layer which retains more spatial details for precise localization and the higher convolutional layer which encodes semantic information for handling appearance variations, and then integrate these with histogram of oriented gradients (HOG) and color-naming traditional features. Second, we include a re-detection module for overcoming tracking failures due to long-term occlusions by training an incremental (online) SVM on the most confident frames using hand-engineered features. This re-detection module is activated only when the correlation response of the object is below some pre-defined threshold. This generates high score detection proposals which are temporally filtered using a Gaussian mixture probability hypothesis density (GM-PHD) filter to find the detection proposal with the maximum weight as the target state estimate by removing the other detection proposals as clutter. Finally, we learn a scale correlation filter for estimating the scale of a target by constructing a target pyramid around the estimated or re-detected position using the HOG features. We carry out extensive experiments on both sparse and dense data sets which show that our method significantly outperforms state-of-the-art methods.
[ { "created": "Wed, 31 May 2017 16:44:45 GMT", "version": "v1" }, { "created": "Fri, 2 Jun 2017 00:37:05 GMT", "version": "v2" }, { "created": "Mon, 27 Nov 2017 14:54:39 GMT", "version": "v3" }, { "created": "Mon, 16 Apr 2018 09:43:49 GMT", "version": "v4" }, { "created": "Thu, 6 Sep 2018 12:48:02 GMT", "version": "v5" }, { "created": "Sun, 3 Feb 2019 21:19:22 GMT", "version": "v6" } ]
2019-02-05
[ [ "Baisa", "Nathanael L.", "" ], [ "Bhowmik", "Deepayan", "" ], [ "Wallace", "Andrew", "" ] ]
Tracking a target of interest in both sparse and crowded environments is a challenging problem, not yet successfully addressed in the literature. In this paper, we propose a new long-term visual tracking algorithm, learning discriminative correlation filters and using an online classifier, to track a target of interest in both sparse and crowded video sequences. First, we learn a translation correlation filter using a multi-layer hybrid of convolutional neural networks (CNN) and traditional hand-crafted features. We combine advantages of both the lower convolutional layer which retains more spatial details for precise localization and the higher convolutional layer which encodes semantic information for handling appearance variations, and then integrate these with histogram of oriented gradients (HOG) and color-naming traditional features. Second, we include a re-detection module for overcoming tracking failures due to long-term occlusions by training an incremental (online) SVM on the most confident frames using hand-engineered features. This re-detection module is activated only when the correlation response of the object is below some pre-defined threshold. This generates high score detection proposals which are temporally filtered using a Gaussian mixture probability hypothesis density (GM-PHD) filter to find the detection proposal with the maximum weight as the target state estimate by removing the other detection proposals as clutter. Finally, we learn a scale correlation filter for estimating the scale of a target by constructing a target pyramid around the estimated or re-detected position using the HOG features. We carry out extensive experiments on both sparse and dense data sets which show that our method significantly outperforms state-of-the-art methods.
2111.00684
Lu Lin
Lu Lin, Ethan Blaser and Hongning Wang
Graph Structural Attack by Perturbing Spectral Distance
Proceedings of the 28th ACM SIGKDD international conference on knowledge discovery & data mining (KDD'22)
null
null
null
cs.LG cs.AI cs.CR cs.SI
http://creativecommons.org/licenses/by/4.0/
Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they are also shown vulnerability to adversarial attacks. In this paper, an effective graph structural attack is investigated to disrupt graph spectral filters in the Fourier domain, which are the theoretical foundation of GCNs. We define the notion of spectral distance based on the eigenvalues of graph Laplacian to measure the disruption of spectral filters. We realize the attack by maximizing the spectral distance and propose an efficient approximation to reduce the time complexity brought by eigen-decomposition. The experiments demonstrate the remarkable effectiveness of the proposed attack in both black-box and white-box settings for both test-time evasion attacks and training-time poisoning attacks. Our qualitative analysis suggests the connection between the imposed spectral changes in the Fourier domain and the attack behavior in the spatial domain, which provides empirical evidence that maximizing spectral distance is an effective way to change the graph structural property and thus disturb the frequency components for graph filters to affect the learning of GCNs.
[ { "created": "Mon, 1 Nov 2021 04:02:34 GMT", "version": "v1" }, { "created": "Wed, 3 Nov 2021 14:54:33 GMT", "version": "v2" }, { "created": "Sun, 2 Oct 2022 21:39:21 GMT", "version": "v3" } ]
2022-10-04
[ [ "Lin", "Lu", "" ], [ "Blaser", "Ethan", "" ], [ "Wang", "Hongning", "" ] ]
Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they have also been shown to be vulnerable to adversarial attacks. In this paper, an effective graph structural attack is investigated to disrupt graph spectral filters in the Fourier domain, which are the theoretical foundation of GCNs. We define the notion of spectral distance based on the eigenvalues of graph Laplacian to measure the disruption of spectral filters. We realize the attack by maximizing the spectral distance and propose an efficient approximation to reduce the time complexity brought by eigen-decomposition. The experiments demonstrate the remarkable effectiveness of the proposed attack in both black-box and white-box settings for both test-time evasion attacks and training-time poisoning attacks. Our qualitative analysis suggests the connection between the imposed spectral changes in the Fourier domain and the attack behavior in the spatial domain, which provides empirical evidence that maximizing spectral distance is an effective way to change the graph structural property and thus disturb the frequency components for graph filters to affect the learning of GCNs.
2210.13578
Reza Rawassizadeh
Xiang Ji and Yesim Sungu-Eryilmaz and Elaheh Momeni and Reza Rawassizadeh
Speeding Up Question Answering Task of Language Models via Inverted Index
null
null
null
null
cs.CL cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Natural language processing applications, such as conversational agents and their question-answering capabilities, are widely used in the real world. Despite the wide popularity of large language models (LLMs), few real-world conversational agents take advantage of LLMs. Extensive resources consumed by LLMs disable developers from integrating them into end-user applications. In this study, we leverage an inverted indexing mechanism combined with LLMs to improve the efficiency of question-answering models for closed-domain questions. Our experiments show that using the index improves the average response time by 97.44%. In addition, due to the reduced search scope, the average BLEU score improved by 0.23 while using the inverted index.
[ { "created": "Mon, 24 Oct 2022 19:59:17 GMT", "version": "v1" } ]
2022-10-26
[ [ "Ji", "Xiang", "" ], [ "Sungu-Eryilmaz", "Yesim", "" ], [ "Momeni", "Elaheh", "" ], [ "Rawassizadeh", "Reza", "" ] ]
Natural language processing applications, such as conversational agents and their question-answering capabilities, are widely used in the real world. Despite the wide popularity of large language models (LLMs), few real-world conversational agents take advantage of LLMs. Extensive resources consumed by LLMs disable developers from integrating them into end-user applications. In this study, we leverage an inverted indexing mechanism combined with LLMs to improve the efficiency of question-answering models for closed-domain questions. Our experiments show that using the index improves the average response time by 97.44%. In addition, due to the reduced search scope, the average BLEU score improved by 0.23 while using the inverted index.
1708.09630
Rohollah Moghadam
Rohollah Moghadam and Hamidreza Modares
Resilient Autonomous Control of Distributed Multi-agent Systems in Contested Environments
null
null
null
null
cs.MA cs.LG cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous with a nonzero control input, which allows changing the team behavior or mission in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent propagating the effects of attacks on sensors and actuators throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agent's dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack the entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcast it to its neighbors to put weights on the data they receive from it during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and remove the data it receives from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
[ { "created": "Thu, 31 Aug 2017 09:21:08 GMT", "version": "v1" }, { "created": "Sat, 30 Sep 2017 13:52:23 GMT", "version": "v2" }, { "created": "Sun, 31 Dec 2017 05:51:31 GMT", "version": "v3" }, { "created": "Mon, 9 Apr 2018 02:25:13 GMT", "version": "v4" } ]
2018-04-10
[ [ "Moghadam", "Rohollah", "" ], [ "Modares", "Hamidreza", "" ] ]
An autonomous and resilient controller is proposed for leader-follower multi-agent systems under uncertainties and cyber-physical attacks. The leader is assumed non-autonomous with a nonzero control input, which allows changing the team behavior or mission in response to environmental changes. A resilient learning-based control protocol is presented to find optimal solutions to the synchronization problem in the presence of attacks and system dynamic uncertainties. An observer-based distributed H_infinity controller is first designed to prevent propagating the effects of attacks on sensors and actuators throughout the network, as well as to attenuate the effect of these attacks on the compromised agent itself. Non-homogeneous game algebraic Riccati equations are derived to solve the H_infinity optimal synchronization problem and off-policy reinforcement learning is utilized to learn their solution without requiring any knowledge of the agent's dynamics. A trust-confidence based distributed control protocol is then proposed to mitigate attacks that hijack the entire node and attacks on communication links. A confidence value is defined for each agent based solely on its local evidence. The proposed resilient reinforcement learning algorithm employs the confidence value of each agent to indicate the trustworthiness of its own information and broadcast it to its neighbors to put weights on the data they receive from it during and after learning. If the confidence value of an agent is low, it employs a trust mechanism to identify compromised agents and remove the data it receives from them from the learning process. Simulation results are provided to show the effectiveness of the proposed approach.
2402.18201
Sen Xu
Sen Xu, Shikui Wei, Tao Ruan, and Lixin Liao
Learning Invariant Inter-pixel Correlations for Superpixel Generation
Accepted by AAAI24
Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6351-6359 (2024)
10.1609/aaai.v38i6.28454
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep superpixel algorithms have made remarkable strides by substituting hand-crafted features with learnable ones. Nevertheless, we observe that existing deep superpixel methods, serving as mid-level representation operations, remain sensitive to the statistical properties (e.g., color distribution, high-level semantics) embedded within the training dataset. Consequently, learnable features exhibit constrained discriminative capability, resulting in unsatisfactory pixel grouping performance, particularly in untrainable application scenarios. To address this issue, we propose the Content Disentangle Superpixel (CDS) algorithm to selectively separate the invariant inter-pixel correlations and statistical properties, i.e., style noise. Specifically, we first construct auxiliary modalities that are homologous to the original RGB image but have substantial stylistic variations. Then, driven by mutual information, we propose the local-grid correlation alignment across modalities to reduce the distribution discrepancy of adaptively selected features and learn invariant inter-pixel correlations. Afterwards, we perform global-style mutual information minimization to enforce the separation of invariant content and training data styles. The experimental results on four benchmark datasets demonstrate the superiority of our approach to existing state-of-the-art methods, regarding boundary adherence, generalization, and efficiency. Code and pre-trained model are available at https://github.com/rookiie/CDSpixel.
[ { "created": "Wed, 28 Feb 2024 09:46:56 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 07:18:41 GMT", "version": "v2" } ]
2024-04-10
[ [ "Xu", "Sen", "" ], [ "Wei", "Shikui", "" ], [ "Ruan", "Tao", "" ], [ "Liao", "Lixin", "" ] ]
Deep superpixel algorithms have made remarkable strides by substituting hand-crafted features with learnable ones. Nevertheless, we observe that existing deep superpixel methods, serving as mid-level representation operations, remain sensitive to the statistical properties (e.g., color distribution, high-level semantics) embedded within the training dataset. Consequently, learnable features exhibit constrained discriminative capability, resulting in unsatisfactory pixel grouping performance, particularly in untrainable application scenarios. To address this issue, we propose the Content Disentangle Superpixel (CDS) algorithm to selectively separate the invariant inter-pixel correlations and statistical properties, i.e., style noise. Specifically, we first construct auxiliary modalities that are homologous to the original RGB image but have substantial stylistic variations. Then, driven by mutual information, we propose the local-grid correlation alignment across modalities to reduce the distribution discrepancy of adaptively selected features and learn invariant inter-pixel correlations. Afterwards, we perform global-style mutual information minimization to enforce the separation of invariant content and training data styles. The experimental results on four benchmark datasets demonstrate the superiority of our approach to existing state-of-the-art methods, regarding boundary adherence, generalization, and efficiency. Code and pre-trained model are available at https://github.com/rookiie/CDSpixel.
2109.06862
Saibo Geng
Saibo Geng, R\'emi Lebret, Karl Aberer
Legal Transformer Models May Not Always Help
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Deep learning-based Natural Language Processing methods, especially transformers, have achieved impressive performance in the last few years. Applying those state-of-the-art NLP methods to legal activities to automate or simplify some simple work is of great value. This work investigates the value of domain adaptive pre-training and language adapters in legal NLP tasks. By comparing the performance of language models with domain adaptive pre-training on different tasks and different dataset splits, we show that domain adaptive pre-training is only helpful with low-resource downstream tasks, thus far from being a panacea. We also benchmark the performance of adapters in a typical legal NLP task and show that they can yield similar performance to full model tuning with much smaller training costs. As an additional result, we release LegalRoBERTa, a RoBERTa model further pre-trained on legal corpora.
[ { "created": "Tue, 14 Sep 2021 17:53:55 GMT", "version": "v1" }, { "created": "Wed, 15 Sep 2021 07:14:15 GMT", "version": "v2" } ]
2021-09-16
[ [ "Geng", "Saibo", "" ], [ "Lebret", "Rémi", "" ], [ "Aberer", "Karl", "" ] ]
Deep learning-based Natural Language Processing methods, especially transformers, have achieved impressive performance in the last few years. Applying those state-of-the-art NLP methods to legal activities to automate or simplify some simple work is of great value. This work investigates the value of domain adaptive pre-training and language adapters in legal NLP tasks. By comparing the performance of language models with domain adaptive pre-training on different tasks and different dataset splits, we show that domain adaptive pre-training is only helpful with low-resource downstream tasks, thus far from being a panacea. We also benchmark the performance of adapters in a typical legal NLP task and show that they can yield similar performance to full model tuning with much smaller training costs. As an additional result, we release LegalRoBERTa, a RoBERTa model further pre-trained on legal corpora.
2211.16356
Runjia Li
Runjia Li, Yang Yu, Charlie Haywood
Real-time Blind Deblurring Based on Lightweight Deep-Wiener-Network
incomplete figures
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
In this paper, we address the problem of blind deblurring with high efficiency. We propose a set of lightweight deep-Wiener-networks to accomplish the task at real-time speed. The network contains a deep neural network for estimating the parameters of Wiener networks and a Wiener network for deblurring. Experimental evaluations show that our approaches have an edge on the state of the art in terms of inference times and numbers of parameters. Two of our models can reach a speed of 100 images per second, which qualifies them for real-time deblurring. Further research may focus on some real-world applications of deblurring with our models.
[ { "created": "Tue, 29 Nov 2022 16:42:01 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2023 21:24:51 GMT", "version": "v2" }, { "created": "Tue, 14 Feb 2023 12:38:06 GMT", "version": "v3" } ]
2023-02-15
[ [ "Li", "Runjia", "" ], [ "Yu", "Yang", "" ], [ "Haywood", "Charlie", "" ] ]
In this paper, we address the problem of blind deblurring with high efficiency. We propose a set of lightweight deep-Wiener-networks to accomplish the task at real-time speed. The network contains a deep neural network for estimating the parameters of Wiener networks and a Wiener network for deblurring. Experimental evaluations show that our approaches have an edge on the state of the art in terms of inference times and numbers of parameters. Two of our models can reach a speed of 100 images per second, which qualifies them for real-time deblurring. Further research may focus on some real-world applications of deblurring with our models.
1907.04228
Michael Bullock
Michael S. Bullock, Christos N. Gagatsos, Saikat Guha, and Boulat A. Bash
Fundamental limits of quantum-secure covert communication over bosonic channels
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the fundamental limit of quantum-secure covert communication over the lossy thermal noise bosonic channel, the quantum-mechanical model underlying many practical channels. We assume that the adversary has unlimited quantum information processing capabilities as well as access to all transmitted photons that do not reach the legitimate receiver. Given existence of noise that is uncontrolled by the adversary, the square root law (SRL) governs covert communication: up to c*sqrt{n} covert bits can be transmitted reliably in n channel uses. Attempting to surpass this limit results in detection with unity probability as n approaches infinity. Here we present the expression for c, characterizing the SRL for the bosonic channel. We also prove that discrete-valued coherent state quadrature phase shift keying (QPSK) constellation achieves the optimal c, which is the same as that achieved by a circularly-symmetric complex-valued Gaussian prior on coherent state amplitude. Finally, while binary phase shift keying (BPSK) achieves the Holevo capacity for non-covert bosonic channels in the low received signal-to-noise ratio regime, we show that it is strictly sub-optimal for covert communication.
[ { "created": "Tue, 9 Jul 2019 15:09:16 GMT", "version": "v1" } ]
2019-07-10
[ [ "Bullock", "Michael S.", "" ], [ "Gagatsos", "Christos N.", "" ], [ "Guha", "Saikat", "" ], [ "Bash", "Boulat A.", "" ] ]
We investigate the fundamental limit of quantum-secure covert communication over the lossy thermal noise bosonic channel, the quantum-mechanical model underlying many practical channels. We assume that the adversary has unlimited quantum information processing capabilities as well as access to all transmitted photons that do not reach the legitimate receiver. Given existence of noise that is uncontrolled by the adversary, the square root law (SRL) governs covert communication: up to c*sqrt{n} covert bits can be transmitted reliably in n channel uses. Attempting to surpass this limit results in detection with unity probability as n approaches infinity. Here we present the expression for c, characterizing the SRL for the bosonic channel. We also prove that discrete-valued coherent state quadrature phase shift keying (QPSK) constellation achieves the optimal c, which is the same as that achieved by a circularly-symmetric complex-valued Gaussian prior on coherent state amplitude. Finally, while binary phase shift keying (BPSK) achieves the Holevo capacity for non-covert bosonic channels in the low received signal-to-noise ratio regime, we show that it is strictly sub-optimal for covert communication.
1607.07911
Thomas Kalinowski
Rachel Wulan Nirmalasari Wijaya, Andrea Semani\v{c}ov\'a-Fe\v{n}ov\v{c}\'ikov\'a, Joe Ryan, Thomas Kalinowski
$H$-supermagic labelings for firecrackers, banana trees and flowers
null
Australasian Journal of Combinatorics, 69(3), 442-451, 2017
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple graph $G=(V,E)$ admits an $H$-covering if every edge in $E$ is contained in a subgraph $H'=(V',E')$ of $G$ which is isomorphic to $H$. In this case we say that $G$ is $H$-supermagic if there is a bijection $f:V\cup E\to\{1,\ldots\lvert V\rvert+\lvert E\rvert\}$ such that $f(V)=\{1,\ldots,\lvert V\rvert\}$ and $\sum_{v\in V(H')}f(v)+\sum_{e\in E(H')}f(e)$ is constant over all subgraphs $H'$ of $G$ which are isomorphic to $H$. In this paper, we show that for odd $n$ and arbitrary $k$, the firecracker $F_{k,n}$ is $F_{2,n}$-supermagic, the banana tree $B_{k,n}$ is $B_{1,n}$-supermagic and the flower $F_n$ is $C_3$-supermagic.
[ { "created": "Tue, 26 Jul 2016 22:46:49 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2017 06:08:20 GMT", "version": "v2" } ]
2018-01-17
[ [ "Wijaya", "Rachel Wulan Nirmalasari", "" ], [ "Semaničová-Feňovčíková", "Andrea", "" ], [ "Ryan", "Joe", "" ], [ "Kalinowski", "Thomas", "" ] ]
A simple graph $G=(V,E)$ admits an $H$-covering if every edge in $E$ is contained in a subgraph $H'=(V',E')$ of $G$ which is isomorphic to $H$. In this case we say that $G$ is $H$-supermagic if there is a bijection $f:V\cup E\to\{1,\ldots\lvert V\rvert+\lvert E\rvert\}$ such that $f(V)=\{1,\ldots,\lvert V\rvert\}$ and $\sum_{v\in V(H')}f(v)+\sum_{e\in E(H')}f(e)$ is constant over all subgraphs $H'$ of $G$ which are isomorphic to $H$. In this paper, we show that for odd $n$ and arbitrary $k$, the firecracker $F_{k,n}$ is $F_{2,n}$-supermagic, the banana tree $B_{k,n}$ is $B_{1,n}$-supermagic and the flower $F_n$ is $C_3$-supermagic.
1210.0271
Lawrence Ong
Roy Timo, Gottfried Lechner, Lawrence Ong, Sarah J. Johnson
Multi-Way Relay Networks: Orthogonal Uplink, Source-Channel Separation and Code Design
Authors' final version (accepted and to appear in IEEE Transactions on Communications)
IEEE Transactions on Communications, Vol. 61, No. 2, pp. 753-768, Feb. 2013
10.1109/TCOMM.2012.121112.110730
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a multi-way relay network with an orthogonal uplink and correlated sources, and we characterise reliable communication (in the usual Shannon sense) with a single-letter expression. The characterisation is obtained using a joint source-channel random-coding argument, which is based on a combination of Wyner et al.'s "Cascaded Slepian-Wolf Source Coding" and Tuncel's "Slepian-Wolf Coding over Broadcast Channels". We prove a separation theorem for the special case of two nodes; that is, we show that a modular code architecture with separate source and channel coding functions is (asymptotically) optimal. Finally, we propose a practical coding scheme based on low-density parity-check codes, and we analyse its performance using multi-edge density evolution.
[ { "created": "Mon, 1 Oct 2012 01:09:32 GMT", "version": "v1" } ]
2013-09-18
[ [ "Timo", "Roy", "" ], [ "Lechner", "Gottfried", "" ], [ "Ong", "Lawrence", "" ], [ "Johnson", "Sarah J.", "" ] ]
We consider a multi-way relay network with an orthogonal uplink and correlated sources, and we characterise reliable communication (in the usual Shannon sense) with a single-letter expression. The characterisation is obtained using a joint source-channel random-coding argument, which is based on a combination of Wyner et al.'s "Cascaded Slepian-Wolf Source Coding" and Tuncel's "Slepian-Wolf Coding over Broadcast Channels". We prove a separation theorem for the special case of two nodes; that is, we show that a modular code architecture with separate source and channel coding functions is (asymptotically) optimal. Finally, we propose a practical coding scheme based on low-density parity-check codes, and we analyse its performance using multi-edge density evolution.
2108.05575
Gosse Minnema
Gosse Minnema
Kicktionary-LOME: A Domain-Specific Multilingual Frame Semantic Parsing Model for Football Language
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
This technical report introduces an adapted version of the LOME frame semantic parsing model (Xia et al., EACL 2021) which is capable of automatically annotating texts according to the "Kicktionary" domain-specific framenet resource. Several methods for training a model even with limited available training data are proposed. While there are some challenges for evaluation related to the nature of the available annotations, preliminary results are very promising, with the best model reaching F1-scores of 0.83 (frame prediction) and 0.81 (semantic role prediction).
[ { "created": "Thu, 12 Aug 2021 07:47:13 GMT", "version": "v1" } ]
2021-08-13
[ [ "Minnema", "Gosse", "" ] ]
This technical report introduces an adapted version of the LOME frame semantic parsing model (Xia et al., EACL 2021) which is capable of automatically annotating texts according to the "Kicktionary" domain-specific framenet resource. Several methods for training a model even with limited available training data are proposed. While there are some challenges for evaluation related to the nature of the available annotations, preliminary results are very promising, with the best model reaching F1-scores of 0.83 (frame prediction) and 0.81 (semantic role prediction).
1106.5451
Ilango Sriram
Ilango Sriram and Dave Cliff
Hybrid complex network topologies are preferred for component-subscription in large-scale data-centres
null
CompleNet 2010, CCIS vol. 116, pp. 130-137, Springer Heidelberg, 2011
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on experiments exploring the interplay between the topology of the complex network of dependent components in a large-scale data-centre, and the robustness and scaling properties of that data-centre. In a previous paper [1] we used the SPECI large-scale data-centre simulator [2] to compare the robustness and scaling characteristics of data-centres whose dependent components are connected via Strogatz-Watts small-world (SW) networks [3], versus those organized as Barabasi-Albert scale-free (SF) networks [4], and found significant differences. In this paper, we present results from using the Klemm-Eguiliz (KE) construction method [5] to generate complex network topologies for data-centre component dependencies. The KE model has a control parameter $\mu\in[0,1]\subset\mathbb{R}$ that determines whether the networks generated are SW ($0<\mu\ll1$) or SF ($\mu=1$) or a "hybrid" network topology part-way between SW and SF ($0<\mu<1$). We find that the best scores for system-level performance metrics of the simulated data-centres are given by "hybrid" values of $\mu$ significantly different from pure-SW or pure-SF.
[ { "created": "Mon, 27 Jun 2011 17:16:41 GMT", "version": "v1" } ]
2011-06-28
[ [ "Sriram", "Ilango", "" ], [ "Cliff", "Dave", "" ] ]
We report on experiments exploring the interplay between the topology of the complex network of dependent components in a large-scale data-centre, and the robustness and scaling properties of that data-centre. In a previous paper [1] we used the SPECI large-scale data-centre simulator [2] to compare the robustness and scaling characteristics of data-centres whose dependent components are connected via Strogatz-Watts small-world (SW) networks [3], versus those organized as Barabasi-Albert scale-free (SF) networks [4], and found significant differences. In this paper, we present results from using the Klemm-Eguiliz (KE) construction method [5] to generate complex network topologies for data-centre component dependencies. The KE model has a control parameter $\mu\in[0,1]\subset\mathbb{R}$ that determines whether the networks generated are SW ($0<\mu\ll1$) or SF ($\mu=1$) or a "hybrid" network topology part-way between SW and SF ($0<\mu<1$). We find that the best scores for system-level performance metrics of the simulated data-centres are given by "hybrid" values of $\mu$ significantly different from pure-SW or pure-SF.
1802.06767
Kyrylo Malakhov
A. V. Palagin, N.G. Petrenko, V.Yu. Velychko, K.S. Malakhov
The problem of the development ontology-driven architecture of intellectual software systems
in Russian; "Bibliography" section updated for correct identification of references by the Google Scholar parser software; 6 pages; 6 figures
Visnik of the Volodymyr Dahl East ukrainian national university 13 (2011) 179-184 Luhansk
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes the architecture of the intelligence system for automated design of ontological knowledge bases of domain areas and the software model of the management GUI (Graphical User Interface) subsystem.
[ { "created": "Sat, 17 Feb 2018 10:24:01 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2018 12:57:27 GMT", "version": "v2" } ]
2018-02-23
[ [ "Palagin", "A. V.", "" ], [ "Petrenko", "N. G.", "" ], [ "Velychko", "V. Yu.", "" ], [ "Malakhov", "K. S.", "" ] ]
The paper describes the architecture of the intelligence system for automated design of ontological knowledge bases of domain areas and the software model of the management GUI (Graphical User Interface) subsystem.
1311.2079
Myunghwan Kim
Myunghwan Kim and Jure Leskovec
Nonparametric Multi-group Membership Model for Dynamic Networks
In Advances in Neural Information Processing Systems 25 (2013)
null
null
null
cs.SI physics.soc-ph stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relational data, like graphs, networks, and matrices, is often dynamic, where the relational structure evolves over time. A fundamental problem in the analysis of time-varying network data is to extract a summary of the common structure and the dynamics of the underlying relations between the entities. Here we build on the intuition that changes in the network structure are driven by the dynamics at the level of groups of nodes. We propose a nonparametric multi-group membership model for dynamic networks. Our model contains three main components: We model the birth and death of individual groups with respect to the dynamics of the network structure via a distance dependent Indian Buffet Process. We capture the evolution of individual node group memberships via a Factorial Hidden Markov model. And, we explain the dynamics of the network structure by explicitly modeling the connectivity structure of groups. We demonstrate our model's capability of identifying the dynamics of latent groups in a number of different types of network data. Experimental results show that our model provides improved predictive performance over existing dynamic network models on future network forecasting and missing link prediction.
[ { "created": "Fri, 8 Nov 2013 21:00:51 GMT", "version": "v1" } ]
2013-11-12
[ [ "Kim", "Myunghwan", "" ], [ "Leskovec", "Jure", "" ] ]
Relational data, like graphs, networks, and matrices, is often dynamic, where the relational structure evolves over time. A fundamental problem in the analysis of time-varying network data is to extract a summary of the common structure and the dynamics of the underlying relations between the entities. Here we build on the intuition that changes in the network structure are driven by the dynamics at the level of groups of nodes. We propose a nonparametric multi-group membership model for dynamic networks. Our model contains three main components: We model the birth and death of individual groups with respect to the dynamics of the network structure via a distance dependent Indian Buffet Process. We capture the evolution of individual node group memberships via a Factorial Hidden Markov model. And, we explain the dynamics of the network structure by explicitly modeling the connectivity structure of groups. We demonstrate our model's capability of identifying the dynamics of latent groups in a number of different types of network data. Experimental results show that our model provides improved predictive performance over existing dynamic network models on future network forecasting and missing link prediction.
2406.04152
Steven Arzt
Steven Arzt, Linda Schreiber, Dominik Appelt
Position: How Regulation Will Change Software Security Research
5 pages, submitted to SE2023 workshop at FSE 2024
null
null
null
cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
Software security has been an important research topic over the years. The community has proposed processes and tools for secure software development and security analysis. However, a significant number of vulnerabilities remains in real-world software-driven systems and products. To alleviate this problem, legislation is being established to oblige manufacturers, for example, to comply with essential security requirements and to establish appropriate development practices. We argue that software engineering research needs to provide better tools and support that help industry comply with the new standards while retaining efficient processes. We argue for a stronger cooperation between legal scholars and computer scientists, and for bridging the gap between higher-level regulation and code-level engineering.
[ { "created": "Thu, 6 Jun 2024 15:16:44 GMT", "version": "v1" } ]
2024-06-07
[ [ "Arzt", "Steven", "" ], [ "Schreiber", "Linda", "" ], [ "Appelt", "Dominik", "" ] ]
Software security has been an important research topic over the years. The community has proposed processes and tools for secure software development and security analysis. However, a significant number of vulnerabilities remains in real-world software-driven systems and products. To alleviate this problem, legislation is being established to oblige manufacturers, for example, to comply with essential security requirements and to establish appropriate development practices. We argue that software engineering research needs to provide better tools and support that help industry comply with the new standards while retaining efficient processes. We argue for a stronger cooperation between legal scholars and computer scientists, and for bridging the gap between higher-level regulation and code-level engineering.
2111.07898
Sushrut Thorat
Sushrut Thorat, Giacomo Aldegheri, Tim C. Kietzmann
Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization
13 pages, 9 figures, peer-reviewed and accepted at the SVRHM 2021 workshop at NeurIPS (+ 2 additional sections in the Appendix presenting newer supplementary results). SVRHM 2021 Workshop@ NeurIPS. 2021
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Recurrent neural networks (RNNs) have been shown to perform better than feedforward architectures in visual object categorization tasks, especially in challenging conditions such as cluttered images. However, little is known about the exact computational role of recurrent information flow in these conditions. Here we test RNNs trained for object categorization on the hypothesis that recurrence iteratively aids object categorization via the communication of category-orthogonal auxiliary variables (the location, orientation, and scale of the object). Using diagnostic linear readouts, we find that: (a) information about auxiliary variables increases across time in all network layers, (b) this information is indeed present in the recurrent information flow, and (c) its manipulation significantly affects task performance. These observations confirm the hypothesis that category-orthogonal auxiliary variable information is conveyed through recurrent connectivity and is used to optimize category inference in cluttered environments.
[ { "created": "Mon, 15 Nov 2021 16:52:07 GMT", "version": "v1" }, { "created": "Tue, 10 May 2022 17:36:28 GMT", "version": "v2" } ]
2022-05-11
[ [ "Thorat", "Sushrut", "" ], [ "Aldegheri", "Giacomo", "" ], [ "Kietzmann", "Tim C.", "" ] ]
Recurrent neural networks (RNNs) have been shown to perform better than feedforward architectures in visual object categorization tasks, especially in challenging conditions such as cluttered images. However, little is known about the exact computational role of recurrent information flow in these conditions. Here we test RNNs trained for object categorization on the hypothesis that recurrence iteratively aids object categorization via the communication of category-orthogonal auxiliary variables (the location, orientation, and scale of the object). Using diagnostic linear readouts, we find that: (a) information about auxiliary variables increases across time in all network layers, (b) this information is indeed present in the recurrent information flow, and (c) its manipulation significantly affects task performance. These observations confirm the hypothesis that category-orthogonal auxiliary variable information is conveyed through recurrent connectivity and is used to optimize category inference in cluttered environments.
2407.09950
Hossein Mousavi
Seyed Muhammad Hossein Mousavi
PSO Fuzzy XGBoost Classifier Boosted with Neural Gas Features on EEG Signals in Emotion Recognition
PSO, Fuzzy, XGBoost, Neural Gas Network (NGN), Feature Selection, EEG Signals, Emotion Recognition
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Emotion recognition is the technology-driven process of identifying and categorizing human emotions from various data sources, such as facial expressions, voice patterns, body motion, and physiological signals, such as EEG. These physiological indicators, though rich in data, present challenges due to their complexity and variability, necessitating sophisticated feature selection and extraction methods. NGN, an unsupervised learning algorithm, effectively adapts to input spaces without predefined grid structures, improving feature extraction from physiological data. Furthermore, the incorporation of fuzzy logic enables the handling of fuzzy data by introducing reasoning that mimics human decision-making. The combination of PSO with XGBoost aids in optimizing model performance through efficient hyperparameter tuning and decision process optimization. This study explores the integration of Neural-Gas Network (NGN), XGBoost, Particle Swarm Optimization (PSO), and fuzzy logic to enhance emotion recognition using physiological signals. Our research addresses three critical questions concerning the improvement of XGBoost with PSO and fuzzy logic, NGN's effectiveness in feature selection, and the performance comparison of the PSO-fuzzy XGBoost classifier with standard benchmarks. Acquired results indicate that our methodologies enhance the accuracy of emotion recognition systems and outperform other feature selection techniques using the majority of classifiers, offering significant implications for both theoretical advancement and practical application in emotion recognition technology.
[ { "created": "Sat, 13 Jul 2024 17:15:23 GMT", "version": "v1" } ]
2024-07-16
[ [ "Mousavi", "Seyed Muhammad Hossein", "" ] ]
Emotion recognition is the technology-driven process of identifying and categorizing human emotions from various data sources, such as facial expressions, voice patterns, body motion, and physiological signals, such as EEG. These physiological indicators, though rich in data, present challenges due to their complexity and variability, necessitating sophisticated feature selection and extraction methods. NGN, an unsupervised learning algorithm, effectively adapts to input spaces without predefined grid structures, improving feature extraction from physiological data. Furthermore, the incorporation of fuzzy logic enables the handling of fuzzy data by introducing reasoning that mimics human decision-making. The combination of PSO with XGBoost aids in optimizing model performance through efficient hyperparameter tuning and decision process optimization. This study explores the integration of Neural-Gas Network (NGN), XGBoost, Particle Swarm Optimization (PSO), and fuzzy logic to enhance emotion recognition using physiological signals. Our research addresses three critical questions concerning the improvement of XGBoost with PSO and fuzzy logic, NGN's effectiveness in feature selection, and the performance comparison of the PSO-fuzzy XGBoost classifier with standard benchmarks. Acquired results indicate that our methodologies enhance the accuracy of emotion recognition systems and outperform other feature selection techniques using the majority of classifiers, offering significant implications for both theoretical advancement and practical application in emotion recognition technology.
1909.06892
Shubham Jain
Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan
TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks
12 pages, 18 figures, Accepted in IEEE Transactions on Very Large Scale Integration (VLSI) Systems 2020
null
null
null
cs.LG cs.AR cs.CV cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex Deep Neural Networks (DNNs). In the quest for lower precision, recent studies have shown that ternary DNNs (which represent weights and activations by signed ternary values) represent a promising sweet spot, achieving accuracy close to full-precision networks on complex tasks. We propose TiM-DNN, a programmable in-memory accelerator that is specifically designed to execute ternary DNNs. TiM-DNN supports various ternary representations including unweighted {-1,0,1}, symmetric weighted {-a,0,a}, and asymmetric weighted {-a,0,b} ternary systems. The building blocks of TiM-DNN are TiM tiles -- specialized memory arrays that perform massively parallel signed ternary vector-matrix multiplications with a single access. TiM tiles are in turn composed of Ternary Processing Cells (TPCs), bit-cells that function as both ternary storage units and signed ternary multiplication units. We evaluate an implementation of TiM-DNN in 32nm technology using an architectural simulator calibrated with SPICE simulations and RTL synthesis. We evaluate TiM-DNN across a suite of state-of-the-art DNN benchmarks including both deep convolutional and recurrent neural networks. A 32-tile instance of TiM-DNN achieves a peak performance of 114 TOPS/s, consumes 0.9W power, and occupies 1.96mm2 chip area, representing a 300X and 388X improvement in TOPS/W and TOPS/mm2, respectively, compared to an NVIDIA Tesla V100 GPU. In comparison to specialized DNN accelerators, TiM-DNN achieves 55X-240X and 160X-291X improvement in TOPS/W and TOPS/mm2, respectively. Finally, when compared to a well-optimized near-memory accelerator for ternary DNNs, TiM-DNN demonstrates 3.9x-4.7x improvement in system-level energy and 3.2x-4.2x speedup, underscoring the potential of in-memory computing for ternary DNNs.
[ { "created": "Sun, 15 Sep 2019 21:43:19 GMT", "version": "v1" }, { "created": "Mon, 30 Sep 2019 03:59:26 GMT", "version": "v2" }, { "created": "Tue, 5 May 2020 02:42:18 GMT", "version": "v3" } ]
2020-05-06
[ [ "Jain", "Shubham", "" ], [ "Gupta", "Sumeet Kumar", "" ], [ "Raghunathan", "Anand", "" ] ]
The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex Deep Neural Networks (DNNs). In the quest for lower precision, recent studies have shown that ternary DNNs (which represent weights and activations by signed ternary values) represent a promising sweet spot, achieving accuracy close to full-precision networks on complex tasks. We propose TiM-DNN, a programmable in-memory accelerator that is specifically designed to execute ternary DNNs. TiM-DNN supports various ternary representations including unweighted {-1,0,1}, symmetric weighted {-a,0,a}, and asymmetric weighted {-a,0,b} ternary systems. The building blocks of TiM-DNN are TiM tiles -- specialized memory arrays that perform massively parallel signed ternary vector-matrix multiplications with a single access. TiM tiles are in turn composed of Ternary Processing Cells (TPCs), bit-cells that function as both ternary storage units and signed ternary multiplication units. We evaluate an implementation of TiM-DNN in 32nm technology using an architectural simulator calibrated with SPICE simulations and RTL synthesis. We evaluate TiM-DNN across a suite of state-of-the-art DNN benchmarks including both deep convolutional and recurrent neural networks. A 32-tile instance of TiM-DNN achieves a peak performance of 114 TOPS/s, consumes 0.9W power, and occupies 1.96mm2 chip area, representing a 300X and 388X improvement in TOPS/W and TOPS/mm2, respectively, compared to an NVIDIA Tesla V100 GPU. In comparison to specialized DNN accelerators, TiM-DNN achieves 55X-240X and 160X-291X improvement in TOPS/W and TOPS/mm2, respectively. Finally, when compared to a well-optimized near-memory accelerator for ternary DNNs, TiM-DNN demonstrates 3.9x-4.7x improvement in system-level energy and 3.2x-4.2x speedup, underscoring the potential of in-memory computing for ternary DNNs.
2110.00841
Aishwarya Sarkar
Aishwarya Sarkar, Jien Zhang, Chaoqun Lu, Ali Jannesari
Transfer Learning Approaches for Knowledge Discovery in Grid-based Geo-Spatiotemporal Data
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting and meticulously analyzing geo-spatiotemporal features is crucial to recognize intricate underlying causes of natural events, such as floods. Limited evidence about hidden factors leading to climate change makes it challenging to predict regional water discharge accurately. In addition, the explosive growth in complex geo-spatiotemporal environment data that requires repeated learning by the state-of-the-art neural networks for every new region emphasizes the need for new computationally efficient methods, advanced computational resources, and extensive training on a massive amount of available monitored data. We, therefore, propose HydroDeep, an effectively reusable pretrained model to address this problem of transferring knowledge from one region to another by effectively capturing their intrinsic geo-spatiotemporal variance. Further, we present four transfer learning approaches on HydroDeep for spatiotemporal interpretability that improve Nash-Sutcliffe efficiency by 9% to 108% in new regions with a 95% reduction in time.
[ { "created": "Sat, 2 Oct 2021 16:55:34 GMT", "version": "v1" }, { "created": "Fri, 29 Oct 2021 23:54:37 GMT", "version": "v2" }, { "created": "Tue, 2 Nov 2021 01:27:50 GMT", "version": "v3" } ]
2021-11-03
[ [ "Sarkar", "Aishwarya", "" ], [ "Zhang", "Jien", "" ], [ "Lu", "Chaoqun", "" ], [ "Jannesari", "Ali", "" ] ]
Extracting and meticulously analyzing geo-spatiotemporal features is crucial to recognize intricate underlying causes of natural events, such as floods. Limited evidence about hidden factors leading to climate change makes it challenging to predict regional water discharge accurately. In addition, the explosive growth in complex geo-spatiotemporal environment data that requires repeated learning by the state-of-the-art neural networks for every new region emphasizes the need for new computationally efficient methods, advanced computational resources, and extensive training on a massive amount of available monitored data. We, therefore, propose HydroDeep, an effectively reusable pretrained model to address this problem of transferring knowledge from one region to another by effectively capturing their intrinsic geo-spatiotemporal variance. Further, we present four transfer learning approaches on HydroDeep for spatiotemporal interpretability that improve Nash-Sutcliffe efficiency by 9% to 108% in new regions with a 95% reduction in time.
2407.06346
Fred Lu
Fred Lu, Ryan R. Curtin, Edward Raff, Francis Ferraro, James Holt
High-Dimensional Distributed Sparse Classification with Scalable Communication-Efficient Global Updates
KDD 2024, Research Track
null
10.1145/3637528.3672038
null
cs.LG cs.DC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the size of datasets used in statistical learning continues to grow, distributed training of models has attracted increasing attention. These methods partition the data and exploit parallelism to reduce memory and runtime, but suffer increasingly from communication costs as the data size or the number of iterations grows. Recent work on linear models has shown that a surrogate likelihood can be optimized locally to iteratively improve on an initial solution in a communication-efficient manner. However, existing versions of these methods experience multiple shortcomings as the data size becomes massive, including diverging updates and efficiently handling sparsity. In this work we develop solutions to these problems which enable us to learn a communication-efficient distributed logistic regression model even beyond millions of features. In our experiments we demonstrate a large improvement in accuracy over distributed algorithms with only a few distributed update steps needed, and similar or faster runtimes. Our code is available at \url{https://github.com/FutureComputing4AI/ProxCSL}.
[ { "created": "Mon, 8 Jul 2024 19:34:39 GMT", "version": "v1" } ]
2024-07-10
[ [ "Lu", "Fred", "" ], [ "Curtin", "Ryan R.", "" ], [ "Raff", "Edward", "" ], [ "Ferraro", "Francis", "" ], [ "Holt", "James", "" ] ]
As the size of datasets used in statistical learning continues to grow, distributed training of models has attracted increasing attention. These methods partition the data and exploit parallelism to reduce memory and runtime, but suffer increasingly from communication costs as the data size or the number of iterations grows. Recent work on linear models has shown that a surrogate likelihood can be optimized locally to iteratively improve on an initial solution in a communication-efficient manner. However, existing versions of these methods experience multiple shortcomings as the data size becomes massive, including diverging updates and efficiently handling sparsity. In this work we develop solutions to these problems which enable us to learn a communication-efficient distributed logistic regression model even beyond millions of features. In our experiments we demonstrate a large improvement in accuracy over distributed algorithms with only a few distributed update steps needed, and similar or faster runtimes. Our code is available at \url{https://github.com/FutureComputing4AI/ProxCSL}.
2310.05146
John Chong Min Tan
John Chong Min Tan, Mehul Motani
Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge
6 main pages, 1 page references, 18 pages appendix
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We attempt to solve the Abstraction and Reasoning Corpus (ARC) Challenge using Large Language Models (LLMs) as a system of multiple expert agents. Using the flexibility of LLMs to be prompted to do various novel tasks using zero-shot, few-shot, context-grounded prompting, we explore the feasibility of using LLMs to solve the ARC Challenge. We firstly convert the input image into multiple suitable text-based abstraction spaces. We then utilise the associative power of LLMs to derive the input-output relationship and map this to actions in the form of a working program, similar to Voyager / Ghost in the Minecraft. In addition, we use iterative environmental feedback in order to guide LLMs to solve the task. Our proposed approach achieves 50 solves out of 111 training set problems (45%) with just three abstraction spaces - grid, object and pixel - and we believe that with more abstraction spaces and learnable actions, we will be able to solve more.
[ { "created": "Sun, 8 Oct 2023 12:37:28 GMT", "version": "v1" } ]
2023-10-10
[ [ "Tan", "John Chong Min", "" ], [ "Motani", "Mehul", "" ] ]
We attempt to solve the Abstraction and Reasoning Corpus (ARC) Challenge using Large Language Models (LLMs) as a system of multiple expert agents. Using the flexibility of LLMs to be prompted to do various novel tasks using zero-shot, few-shot, context-grounded prompting, we explore the feasibility of using LLMs to solve the ARC Challenge. We firstly convert the input image into multiple suitable text-based abstraction spaces. We then utilise the associative power of LLMs to derive the input-output relationship and map this to actions in the form of a working program, similar to Voyager / Ghost in the Minecraft. In addition, we use iterative environmental feedback in order to guide LLMs to solve the task. Our proposed approach achieves 50 solves out of 111 training set problems (45%) with just three abstraction spaces - grid, object and pixel - and we believe that with more abstraction spaces and learnable actions, we will be able to solve more.
2010.04434
Tielin Zhang
Tielin Zhang and Shuncheng Jia and Xiang Cheng and Bo Xu
Tuning Convolutional Spiking Neural Network with Biologically-plausible Reward Propagation
Final Version. Accepted by IEEE Transactions on Neural Networks and Learning Systems
null
null
null
cs.NE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking Neural Networks (SNNs) contain more biologically realistic structures and biologically-inspired learning principles than those in standard Artificial Neural Networks (ANNs). SNNs are considered the third generation of ANNs, powerful for robust computation at a low computational cost. The neurons in SNNs are non-differentiable, containing decayed historical states and generating event-based spikes after their states reach the firing threshold. These dynamic characteristics of SNNs make them difficult to train directly with the standard backpropagation (BP), which is also considered not biologically plausible. In this paper, a Biologically-plausible Reward Propagation (BRP) algorithm is proposed and applied to the SNN architecture with both spiking-convolution (with both 1D and 2D convolutional kernels) and full-connection layers. Unlike the standard BP that propagates error signals from post to presynaptic neurons layer by layer, the BRP propagates target labels instead of errors directly from the output layer to all pre-hidden layers. This effort is more consistent with the top-down reward-guiding learning in cortical columns of the neocortex. Synaptic modifications with only local gradient differences are induced with pseudo-BP that might also be replaced with the Spike-Timing Dependent Plasticity (STDP). The performance of the proposed BRP-SNN is further verified on the spatial (including MNIST and Cifar-10) and temporal (including TIDigits and DvsGesture) tasks, where the SNN using BRP has reached a similar accuracy compared to other state-of-the-art BP-based SNNs and saved 50% more computational cost than ANNs. We think the introduction of biologically plausible learning rules to the training procedure of biologically realistic SNNs will give us more hints and inspirations toward a better understanding of the biological system's intelligent nature.
[ { "created": "Fri, 9 Oct 2020 08:42:13 GMT", "version": "v1" }, { "created": "Thu, 12 Nov 2020 06:06:27 GMT", "version": "v2" }, { "created": "Mon, 31 May 2021 13:50:56 GMT", "version": "v3" } ]
2021-06-01
[ [ "Zhang", "Tielin", "" ], [ "Jia", "Shuncheng", "" ], [ "Cheng", "Xiang", "" ], [ "Xu", "Bo", "" ] ]
Spiking Neural Networks (SNNs) contain more biologically realistic structures and biologically-inspired learning principles than those in standard Artificial Neural Networks (ANNs). SNNs are considered the third generation of ANNs, powerful for robust computation at a low computational cost. The neurons in SNNs are non-differentiable, containing decayed historical states and generating event-based spikes after their states reach the firing threshold. These dynamic characteristics of SNNs make them difficult to train directly with the standard backpropagation (BP), which is also considered not biologically plausible. In this paper, a Biologically-plausible Reward Propagation (BRP) algorithm is proposed and applied to the SNN architecture with both spiking-convolution (with both 1D and 2D convolutional kernels) and full-connection layers. Unlike the standard BP that propagates error signals from post to presynaptic neurons layer by layer, the BRP propagates target labels instead of errors directly from the output layer to all pre-hidden layers. This effort is more consistent with the top-down reward-guiding learning in cortical columns of the neocortex. Synaptic modifications with only local gradient differences are induced with pseudo-BP that might also be replaced with the Spike-Timing Dependent Plasticity (STDP). The performance of the proposed BRP-SNN is further verified on the spatial (including MNIST and Cifar-10) and temporal (including TIDigits and DvsGesture) tasks, where the SNN using BRP has reached a similar accuracy compared to other state-of-the-art BP-based SNNs and saved 50% more computational cost than ANNs. We think the introduction of biologically plausible learning rules to the training procedure of biologically realistic SNNs will give us more hints and inspirations toward a better understanding of the biological system's intelligent nature.
2202.11269
Qingsong Wen
Chaoli Zhang, Zhiqiang Zhou, Yingying Zhang, Linxiao Yang, Kai He, Qingsong Wen, Liang Sun
NetRCA: An Effective Network Fault Cause Localization Algorithm
Accepted by ICASSP 2022. NetRCA is the solution of the First Place of 2022 ICASSP AIOps Challenge. All authors contributed equally, and Qingsong Wen is the team leader (Team Name: MindOps). The website of 2022 ICASSP AIOps Challenge is https://www.aiops.sribd.cn/home/introduction
null
null
null
cs.LG cs.AI cs.NI eess.SP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localizing the root cause of network faults is crucial to network operation and maintenance. However, due to the complicated network architectures and wireless environments, as well as limited labeled data, accurately localizing the true root cause is challenging. In this paper, we propose a novel algorithm named NetRCA to deal with this problem. Firstly, we extract effective derived features from the original raw data by considering temporal, directional, attribution, and interaction characteristics. Secondly, we adopt multivariate time series similarity and label propagation to generate new training data from both labeled and unlabeled data to overcome the lack of labeled samples. Thirdly, we design an ensemble model which combines XGBoost, rule set learning, attribution model, and graph algorithm, to fully utilize all data information and enhance performance. Finally, experiments and analysis are conducted on the real-world dataset from ICASSP 2022 AIOps Challenge to demonstrate the superiority and effectiveness of our approach.
[ { "created": "Wed, 23 Feb 2022 02:03:35 GMT", "version": "v1" }, { "created": "Mon, 7 Mar 2022 00:15:13 GMT", "version": "v2" } ]
2022-03-08
[ [ "Zhang", "Chaoli", "" ], [ "Zhou", "Zhiqiang", "" ], [ "Zhang", "Yingying", "" ], [ "Yang", "Linxiao", "" ], [ "He", "Kai", "" ], [ "Wen", "Qingsong", "" ], [ "Sun", "Liang", "" ] ]
Localizing the root cause of network faults is crucial to network operation and maintenance. However, due to the complicated network architectures and wireless environments, as well as limited labeled data, accurately localizing the true root cause is challenging. In this paper, we propose a novel algorithm named NetRCA to deal with this problem. Firstly, we extract effective derived features from the original raw data by considering temporal, directional, attribution, and interaction characteristics. Secondly, we adopt multivariate time series similarity and label propagation to generate new training data from both labeled and unlabeled data to overcome the lack of labeled samples. Thirdly, we design an ensemble model which combines XGBoost, rule set learning, attribution model, and graph algorithm, to fully utilize all data information and enhance performance. Finally, experiments and analysis are conducted on the real-world dataset from ICASSP 2022 AIOps Challenge to demonstrate the superiority and effectiveness of our approach.
1908.01441
Kazuo Misue
Kazuo Misue and Katsuya Akasaka
Graph Drawing with Morphing Partial Edges
Appears in the Proceedings of the 27th International Symposium on Graph Drawing and Network Visualization (GD 2019)
null
null
null
cs.DS cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
[ { "created": "Mon, 5 Aug 2019 01:56:23 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2019 05:06:19 GMT", "version": "v2" }, { "created": "Mon, 12 Aug 2019 12:16:00 GMT", "version": "v3" }, { "created": "Thu, 22 Aug 2019 05:28:23 GMT", "version": "v4" }, { "created": "Wed, 2 Oct 2019 02:04:42 GMT", "version": "v5" } ]
2019-10-03
[ [ "Misue", "Kazuo", "" ], [ "Akasaka", "Katsuya", "" ] ]
A partial edge drawing (PED) of a graph is a variation of a node-link diagram. PED draws a link, which is a partial visual representation of an edge, and reduces visual clutter of the node-link diagram. However, more time is required to read a PED to infer undrawn parts. The authors propose a morphing edge drawing (MED), which is a PED that changes with time. In MED, links morph between partial and complete drawings; thus, a reduced load for estimation of undrawn parts in a PED is expected. Herein, a formalization of MED is shown based on a formalization of PED. Then, requirements for the scheduling of morphing are specified. The requirements inhibit morphing from crossing and shorten the overall time for morphing the edges. Moreover, an algorithm for a scheduling method implemented by the authors is illustrated and the effectiveness of PED from a reading time viewpoint is shown through an experimental evaluation.
2106.15125
Yi-Fan Song
Yi-Fan Song, Zhang Zhang, Caifeng Shan, Liang Wang
Constructing Stronger and Faster Baselines for Skeleton-based Action Recognition
15 pages, 12 tables, 10 figures, Accepted by IEEE T-PAMI. arXiv admin note: text overlap with arXiv:2010.09978
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One essential problem in skeleton-based action recognition is how to extract discriminative features over all skeleton joints. However, the complexity of the recent State-Of-The-Art (SOTA) models for this task tends to be exceedingly sophisticated and over-parameterized. The low efficiency in model training and inference has increased the validation costs of model architectures in large-scale datasets. To address the above issue, recent advanced separable convolutional layers are embedded into an early fused Multiple Input Branches (MIB) network, constructing an efficient Graph Convolutional Network (GCN) baseline for skeleton-based action recognition. In addition, based on such the baseline, we design a compound scaling strategy to expand the model's width and depth synchronously, and eventually obtain a family of efficient GCN baselines with high accuracies and small amounts of trainable parameters, termed EfficientGCN-Bx, where "x" denotes the scaling coefficient. On two large-scale datasets, i.e., NTU RGB+D 60 and 120, the proposed EfficientGCN-B4 baseline outperforms other SOTA methods, e.g., achieving 91.7% accuracy on the cross-subject benchmark of NTU 60 dataset, while being 3.15x smaller and 3.21x faster than MS-G3D, which is one of the best SOTA methods. The source code in PyTorch version and the pretrained models are available at https://github.com/yfsong0709/EfficientGCNv1.
[ { "created": "Tue, 29 Jun 2021 07:09:11 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2022 11:03:52 GMT", "version": "v2" } ]
2022-03-04
[ [ "Song", "Yi-Fan", "" ], [ "Zhang", "Zhang", "" ], [ "Shan", "Caifeng", "" ], [ "Wang", "Liang", "" ] ]
One essential problem in skeleton-based action recognition is how to extract discriminative features over all skeleton joints. However, the complexity of the recent State-Of-The-Art (SOTA) models for this task tends to be exceedingly sophisticated and over-parameterized. The low efficiency in model training and inference has increased the validation costs of model architectures in large-scale datasets. To address the above issue, recent advanced separable convolutional layers are embedded into an early fused Multiple Input Branches (MIB) network, constructing an efficient Graph Convolutional Network (GCN) baseline for skeleton-based action recognition. In addition, based on such the baseline, we design a compound scaling strategy to expand the model's width and depth synchronously, and eventually obtain a family of efficient GCN baselines with high accuracies and small amounts of trainable parameters, termed EfficientGCN-Bx, where "x" denotes the scaling coefficient. On two large-scale datasets, i.e., NTU RGB+D 60 and 120, the proposed EfficientGCN-B4 baseline outperforms other SOTA methods, e.g., achieving 91.7% accuracy on the cross-subject benchmark of NTU 60 dataset, while being 3.15x smaller and 3.21x faster than MS-G3D, which is one of the best SOTA methods. The source code in PyTorch version and the pretrained models are available at https://github.com/yfsong0709/EfficientGCNv1.
2002.05966
Hao Cheng
Hao Cheng, Wentong Liao, Michael Ying Yang, Monika Sester, Bodo Rosenhahn
MCENET: Multi-Context Encoder Network for Homogeneous Agent Trajectory Prediction in Mixed Traffic
8 pages, 5 figures, code is available on https://github.com/haohao11/MCENET
null
null
null
cs.CV cs.CY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trajectory prediction in urban mixed-traffic zones (a.k.a. shared spaces) is critical for many intelligent transportation systems, such as intent detection for autonomous driving. However, there are many challenges in predicting the trajectories of heterogeneous road agents (pedestrians, cyclists and vehicles) at a microscopic level. For example, an agent might be able to choose multiple plausible paths in complex interactions with other agents in varying environments. To this end, we propose an approach named Multi-Context Encoder Network (MCENET) that is trained by encoding both past and future scene context, interaction context and motion information to capture the patterns and variations of the future trajectories using a set of stochastic latent variables. At inference time, we combine the past context and motion information of the target agent with samples of the latent variables to predict multiple realistic trajectories in the future. Through experiments on several datasets of varying scenes, our method outperforms some of the recent state-of-the-art methods for mixed-traffic trajectory prediction by a large margin and is more robust in a very challenging environment. The impact of each context is justified via ablation studies.
[ { "created": "Fri, 14 Feb 2020 11:04:41 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2020 15:53:02 GMT", "version": "v2" }, { "created": "Tue, 3 Mar 2020 13:39:05 GMT", "version": "v3" }, { "created": "Sun, 5 Apr 2020 12:08:51 GMT", "version": "v4" }, { "created": "Tue, 23 Jun 2020 13:06:17 GMT", "version": "v5" } ]
2020-06-24
[ [ "Cheng", "Hao", "" ], [ "Liao", "Wentong", "" ], [ "Yang", "Michael Ying", "" ], [ "Sester", "Monika", "" ], [ "Rosenhahn", "Bodo", "" ] ]
Trajectory prediction in urban mixed-traffic zones (a.k.a. shared spaces) is critical for many intelligent transportation systems, such as intent detection for autonomous driving. However, there are many challenges in predicting the trajectories of heterogeneous road agents (pedestrians, cyclists and vehicles) at a microscopic level. For example, an agent might be able to choose multiple plausible paths in complex interactions with other agents in varying environments. To this end, we propose an approach named Multi-Context Encoder Network (MCENET) that is trained by encoding both past and future scene context, interaction context and motion information to capture the patterns and variations of the future trajectories using a set of stochastic latent variables. At inference time, we combine the past context and motion information of the target agent with samples of the latent variables to predict multiple realistic trajectories in the future. Through experiments on several datasets of varying scenes, our method outperforms some of the recent state-of-the-art methods for mixed-traffic trajectory prediction by a large margin and is more robust in a very challenging environment. The impact of each context is justified via ablation studies.
2206.11436
Siamak Ghodsi
Siamak Ghodsi, Harith Alani, and Eirini Ntoutsi
Context matters for fairness -- a case study on the effect of spatial distribution shifts
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
With the ever-growing involvement of data-driven, AI-based decision-making technologies in our daily social lives, the fairness of these systems is becoming a crucial concern. However, an important and often challenging aspect of utilizing such systems is to determine the validity of their range of application, especially under distribution shifts, i.e., when a model is deployed on data with a different distribution than the training set. In this paper, we present a case study on the newly released American Census datasets, a reconstruction of the popular Adult dataset, to illustrate the importance of context for fairness and show how remarkably spatial distribution shifts can affect the predictive and fairness-related performance of a model. The problem persists for fairness-aware learning models, with the effects of context-specific fairness interventions differing across the states and different population groups. Our study suggests that robustness to distribution shifts is necessary before deploying a model to another context.
[ { "created": "Thu, 23 Jun 2022 01:09:46 GMT", "version": "v1" }, { "created": "Fri, 24 Jun 2022 21:09:45 GMT", "version": "v2" } ]
2022-06-28
[ [ "Ghodsi", "Siamak", "" ], [ "Alani", "Harith", "" ], [ "Ntoutsi", "Eirini", "" ] ]
With the ever-growing involvement of data-driven, AI-based decision-making technologies in our daily social lives, the fairness of these systems is becoming a crucial concern. However, an important and often challenging aspect of utilizing such systems is to determine the validity of their range of application, especially under distribution shifts, i.e., when a model is deployed on data with a different distribution than the training set. In this paper, we present a case study on the newly released American Census datasets, a reconstruction of the popular Adult dataset, to illustrate the importance of context for fairness and show how remarkably spatial distribution shifts can affect the predictive and fairness-related performance of a model. The problem persists for fairness-aware learning models, with the effects of context-specific fairness interventions differing across the states and different population groups. Our study suggests that robustness to distribution shifts is necessary before deploying a model to another context.
2210.12067
Ramin Hasani
Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus
Efficient Dataset Distillation Using Random Feature Approximation
Accepted to the Conference on the Advances in Neural Information Processing Systems (NeurIPS) 2022
null
null
null
cs.LG cs.AI cs.NE stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dataset distillation compresses large datasets into smaller synthetic coresets which retain performance with the aim of reducing the storage and computational burden of processing the entire dataset. Today's best-performing algorithm, \textit{Kernel Inducing Points} (KIP), which makes use of the correspondence between infinite-width neural networks and kernel-ridge regression, is prohibitively slow due to the exact computation of the neural tangent kernel matrix, scaling $O(|S|^2)$, with $|S|$ being the coreset size. To improve this, we propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel, which reduces the kernel matrix computation to $O(|S|)$. Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU. Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets, both in kernel regression and finite-width network training. We demonstrate the effectiveness of our approach on tasks involving model interpretability and privacy preservation.
[ { "created": "Fri, 21 Oct 2022 15:56:13 GMT", "version": "v1" } ]
2022-10-24
[ [ "Loo", "Noel", "" ], [ "Hasani", "Ramin", "" ], [ "Amini", "Alexander", "" ], [ "Rus", "Daniela", "" ] ]
Dataset distillation compresses large datasets into smaller synthetic coresets which retain performance with the aim of reducing the storage and computational burden of processing the entire dataset. Today's best-performing algorithm, \textit{Kernel Inducing Points} (KIP), which makes use of the correspondence between infinite-width neural networks and kernel-ridge regression, is prohibitively slow due to the exact computation of the neural tangent kernel matrix, scaling $O(|S|^2)$, with $|S|$ being the coreset size. To improve this, we propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel, which reduces the kernel matrix computation to $O(|S|)$. Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU. Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets, both in kernel regression and finite-width network training. We demonstrate the effectiveness of our approach on tasks involving model interpretability and privacy preservation.
2112.10038
Peng Xu Mr
Peng Xu
Android-COCO: Android Malware Detection with Graph Neural Network for Byte- and Native-Code
10 pages, 3 figures, 3 tables
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the popularity of Android growing exponentially, the amount of malware has exploded as well. It is arguably one of the most viral problems on mobile platforms. Recently, various approaches have been introduced to detect Android malware; the majority of these are based either on Manifest File features or on structural information, such as the control flow graph and API calls. Nearly all of these methods consider only the Java byte-code as the target for detecting malicious behaviors. However, recent research and our own statistics show that native payloads are commonly used in both benign and malicious apps. Current state-of-the-art Android static analysis tools avoid handling native method invocation. None of those tools has the capability to capture inter-language behaviors. In this work, we explore an ensemble mechanism, which shows how the combination of byte-code and native-code analysis of Android applications can be used efficiently to cope with the advanced sophistication of Android malware. We therefore present a multi-layer approach that utilizes deep learning, natural language processing (NLP), and graph embedding techniques to handle the threats of Android malware, from both the Java byte-code and the native code. After that, we design an ensemble algorithm to obtain the final result of the malware detection system. To be specific, the first layer of our detection approach operates on the byte-code and native-code level of the application, whereas the second layer focuses on the ensemble algorithm. Large-scale experiments on 100,113 samples (35,113 malware and 65,000 benign) show that the byte-code sub-system alone yields 99.8% accuracy and the native-code sub-system yields an accuracy of 96.6%, whereas the Android-COCO method attains an accuracy of 99.86%, which outperforms various related works.
[ { "created": "Sun, 19 Dec 2021 01:46:01 GMT", "version": "v1" }, { "created": "Mon, 24 Jan 2022 14:11:00 GMT", "version": "v2" } ]
2022-01-25
[ [ "Xu", "Peng", "" ] ]
With the popularity of Android growing exponentially, the amount of malware has exploded as well. It is arguably one of the most viral problems on mobile platforms. Recently, various approaches have been introduced to detect Android malware; the majority of these are based either on Manifest File features or on structural information, such as the control flow graph and API calls. Nearly all of these methods consider only the Java byte-code as the target for detecting malicious behaviors. However, recent research and our own statistics show that native payloads are commonly used in both benign and malicious apps. Current state-of-the-art Android static analysis tools avoid handling native method invocation. None of those tools has the capability to capture inter-language behaviors. In this work, we explore an ensemble mechanism, which shows how the combination of byte-code and native-code analysis of Android applications can be used efficiently to cope with the advanced sophistication of Android malware. We therefore present a multi-layer approach that utilizes deep learning, natural language processing (NLP), and graph embedding techniques to handle the threats of Android malware, from both the Java byte-code and the native code. After that, we design an ensemble algorithm to obtain the final result of the malware detection system. To be specific, the first layer of our detection approach operates on the byte-code and native-code level of the application, whereas the second layer focuses on the ensemble algorithm. Large-scale experiments on 100,113 samples (35,113 malware and 65,000 benign) show that the byte-code sub-system alone yields 99.8% accuracy and the native-code sub-system yields an accuracy of 96.6%, whereas the Android-COCO method attains an accuracy of 99.86%, which outperforms various related works.
0911.1765
Ion Mandoiu
Justin Kennedy, Ion I. Mandoiu, and Bogdan Pasaniuc
GEDI: Scalable Algorithms for Genotype Error Detection and Imputation
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genome-wide association studies generate very large datasets that require scalable analysis algorithms. In this report we describe the GEDI software package, which implements efficient algorithms for performing several common tasks in the analysis of population genotype data, including genotype error detection and correction, imputation of both randomly missing and untyped genotypes, and genotype phasing. Experimental results show that GEDI achieves high accuracy with a runtime scaling linearly with the number of markers and samples. The open source C++ code of GEDI, released under the GNU General Public License, is available for download at http://dna.engr.uconn.edu/software/GEDI/
[ { "created": "Mon, 9 Nov 2009 23:35:41 GMT", "version": "v1" } ]
2016-09-08
[ [ "Kennedy", "Justin", "" ], [ "Mandoiu", "Ion I.", "" ], [ "Pasaniuc", "Bogdan", "" ] ]
Genome-wide association studies generate very large datasets that require scalable analysis algorithms. In this report we describe the GEDI software package, which implements efficient algorithms for performing several common tasks in the analysis of population genotype data, including genotype error detection and correction, imputation of both randomly missing and untyped genotypes, and genotype phasing. Experimental results show that GEDI achieves high accuracy with a runtime scaling linearly with the number of markers and samples. The open source C++ code of GEDI, released under the GNU General Public License, is available for download at http://dna.engr.uconn.edu/software/GEDI/
2210.05148
Kin Wai Cheuk
Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, Yuki Mitsufuji
DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability
null
Proceedings of ICASSP - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023
null
null
cs.SD cs.AI cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
In this paper we propose a novel generative approach, DiffRoll, to tackle automatic music transcription (AMT). Instead of treating AMT as a discriminative task in which the model is trained to convert spectrograms into piano rolls, we think of it as a conditional generative task where we train our model to generate realistic-looking piano rolls from pure Gaussian noise conditioned on spectrograms. This new AMT formulation enables DiffRoll to transcribe, generate and even inpaint music. Due to its classifier-free nature, DiffRoll can also be trained on unpaired datasets where only piano rolls are available. Our experiments show that DiffRoll outperforms its discriminative counterpart by 19 percentage points (ppt.) and our ablation studies also indicate that it outperforms similar existing methods by 4.8 ppt. Source code and demonstration are available at https://sony.github.io/DiffRoll/.
[ { "created": "Tue, 11 Oct 2022 05:02:11 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2022 05:47:43 GMT", "version": "v2" } ]
2024-06-03
[ [ "Cheuk", "Kin Wai", "" ], [ "Sawata", "Ryosuke", "" ], [ "Uesaka", "Toshimitsu", "" ], [ "Murata", "Naoki", "" ], [ "Takahashi", "Naoya", "" ], [ "Takahashi", "Shusuke", "" ], [ "Herremans", "Dorien", "" ], [ "Mitsufuji", "Yuki", "" ] ]
In this paper we propose a novel generative approach, DiffRoll, to tackle automatic music transcription (AMT). Instead of treating AMT as a discriminative task in which the model is trained to convert spectrograms into piano rolls, we think of it as a conditional generative task where we train our model to generate realistic-looking piano rolls from pure Gaussian noise conditioned on spectrograms. This new AMT formulation enables DiffRoll to transcribe, generate and even inpaint music. Due to its classifier-free nature, DiffRoll can also be trained on unpaired datasets where only piano rolls are available. Our experiments show that DiffRoll outperforms its discriminative counterpart by 19 percentage points (ppt.) and our ablation studies also indicate that it outperforms similar existing methods by 4.8 ppt. Source code and demonstration are available at https://sony.github.io/DiffRoll/.
2305.13723
Yunyi Zhang
Yunyi Zhang, Minhao Jiang, Yu Meng, Yu Zhang, Jiawei Han
PIEClass: Weakly-Supervised Text Classification with Prompting and Noise-Robust Iterative Ensemble Training
Accepted to EMNLP 2023 Main Conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Weakly-supervised text classification trains a classifier using the label name of each target class as the only supervision, which largely reduces human annotation efforts. Most existing methods first use the label names as static keyword-based features to generate pseudo labels, which are then used for final classifier training. While reasonable, such a commonly adopted framework suffers from two limitations: (1) keywords can have different meanings in different contexts and some text may not have any keyword, so keyword matching can induce noisy and inadequate pseudo labels; (2) the errors made in the pseudo label generation stage will directly propagate to the classifier training stage without a chance of being corrected. In this paper, we propose a new method, PIEClass, consisting of two modules: (1) a pseudo label acquisition module that uses zero-shot prompting of pre-trained language models (PLM) to get pseudo labels based on contextualized text understanding beyond static keyword matching, and (2) a noise-robust iterative ensemble training module that iteratively trains classifiers and updates pseudo labels by utilizing two PLM fine-tuning methods that regularize each other. Extensive experiments show that PIEClass achieves overall better performance than existing strong baselines on seven benchmark datasets and even achieves similar performance to fully-supervised classifiers on sentiment classification tasks.
[ { "created": "Tue, 23 May 2023 06:19:14 GMT", "version": "v1" }, { "created": "Fri, 20 Oct 2023 15:14:34 GMT", "version": "v2" } ]
2023-10-23
[ [ "Zhang", "Yunyi", "" ], [ "Jiang", "Minhao", "" ], [ "Meng", "Yu", "" ], [ "Zhang", "Yu", "" ], [ "Han", "Jiawei", "" ] ]
Weakly-supervised text classification trains a classifier using the label name of each target class as the only supervision, which largely reduces human annotation efforts. Most existing methods first use the label names as static keyword-based features to generate pseudo labels, which are then used for final classifier training. While reasonable, such a commonly adopted framework suffers from two limitations: (1) keywords can have different meanings in different contexts and some text may not have any keyword, so keyword matching can induce noisy and inadequate pseudo labels; (2) the errors made in the pseudo label generation stage will directly propagate to the classifier training stage without a chance of being corrected. In this paper, we propose a new method, PIEClass, consisting of two modules: (1) a pseudo label acquisition module that uses zero-shot prompting of pre-trained language models (PLM) to get pseudo labels based on contextualized text understanding beyond static keyword matching, and (2) a noise-robust iterative ensemble training module that iteratively trains classifiers and updates pseudo labels by utilizing two PLM fine-tuning methods that regularize each other. Extensive experiments show that PIEClass achieves overall better performance than existing strong baselines on seven benchmark datasets and even achieves similar performance to fully-supervised classifiers on sentiment classification tasks.
2302.04225
Panagiotis Mpakos
Panagiotis Mpakos, Dimitrios Galanopoulos, Petros Anastasiadis, Nikela Papadopoulou, Nectarios Koziris, Georgios Goumas
Feature-based SpMV Performance Analysis on Contemporary Devices
to appear at IPDPS'23
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The SpMV kernel is characterized by high performance variation per input matrix and computing platform. While GPUs were considered State-of-the-Art for SpMV, with the emergence of advanced multicore CPUs and low-power FPGA accelerators, we need to revisit its performance and energy efficiency. This paper provides a high-level SpMV performance analysis based on structural features of matrices related to common bottlenecks of memory-bandwidth intensity, low ILP, load imbalance and memory latency overheads. Towards this, we create a wide artificial matrix dataset that spans these features and study the performance of different storage formats in nine modern HPC platforms; five CPUs, three GPUs and an FPGA. After validating our proposed methodology using real-world matrices, we analyze our extensive experimental results and draw key insights on the competitiveness of different target architectures for SpMV and the impact of each feature/bottleneck on its performance.
[ { "created": "Wed, 8 Feb 2023 17:51:58 GMT", "version": "v1" } ]
2023-02-09
[ [ "Mpakos", "Panagiotis", "" ], [ "Galanopoulos", "Dimitrios", "" ], [ "Anastasiadis", "Petros", "" ], [ "Papadopoulou", "Nikela", "" ], [ "Koziris", "Nectarios", "" ], [ "Goumas", "Georgios", "" ] ]
The SpMV kernel is characterized by high performance variation per input matrix and computing platform. While GPUs were considered State-of-the-Art for SpMV, with the emergence of advanced multicore CPUs and low-power FPGA accelerators, we need to revisit its performance and energy efficiency. This paper provides a high-level SpMV performance analysis based on structural features of matrices related to common bottlenecks of memory-bandwidth intensity, low ILP, load imbalance and memory latency overheads. Towards this, we create a wide artificial matrix dataset that spans these features and study the performance of different storage formats in nine modern HPC platforms; five CPUs, three GPUs and an FPGA. After validating our proposed methodology using real-world matrices, we analyze our extensive experimental results and draw key insights on the competitiveness of different target architectures for SpMV and the impact of each feature/bottleneck on its performance.