id: string, lengths 9-10
submitter: string, lengths 1-64
authors: string, lengths 4-20.7k
title: string, lengths 4-246
comments: string, lengths 1-523
journal-ref: string, lengths 4-404
doi: string, lengths 11-153
report-no: string, lengths 2-254
categories: string, lengths 5-98
license: string, 9 classes
orig_abstract: string, lengths 14-3.35k
versions: list, lengths 1-60
update_date: string, lengths 10-10
authors_parsed: list, lengths 1-1.35k
abstract: string, lengths 11-3.34k
1608.03066
Benjamin Drayer
Benjamin Drayer and Thomas Brox
Object Detection, Tracking, and Motion Segmentation for Object-level Video Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised/unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS.
[ { "created": "Wed, 10 Aug 2016 07:46:56 GMT", "version": "v1" } ]
2016-08-11
[ [ "Drayer", "Benjamin", "" ], [ "Brox", "Thomas", "" ] ]
We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised/unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS.
2211.08283
Ralf Klasing
Subhadeep Ranjan Dev, Sanjana Dey, Florent Foucaud, Ralf Klasing, Tuomo Lehtil\"a
The RED-BLUE SEPARATION problem on graphs
null
Theoretical Computer Science 970:114061, 2023
10.1016/j.tcs.2023.114061
null
cs.DM cs.DS math.CO
http://creativecommons.org/licenses/by/4.0/
We introduce the Red-Blue Separation problem on graphs, where we are given a graph $G=(V,E)$ whose vertices are colored either red or blue, and we want to select a (small) subset $S \subseteq V$, called red-blue separating set, such that for every red-blue pair of vertices, there is a vertex $s \in S$ whose closed neighborhood contains exactly one of the two vertices of the pair. We study the computational complexity of Red-Blue Separation, in which one asks whether a given red-blue colored graph has a red-blue separating set of size at most a given integer. We prove that the problem is NP-complete even for restricted graph classes. We also show that it is always approximable in polynomial time within a factor of $2\ln n$, where $n$ is the input graph's order. In contrast, for triangle-free graphs and for graphs of bounded maximum degree, we show that Red-Blue Separation is solvable in polynomial time when the size of the smaller color class is bounded by a constant. However, on general graphs, we show that the problem is $W[2]$-hard even when parameterized by the solution size plus the size of the smaller color class. We also consider the problem Max Red-Blue Separation where the coloring is not part of the input. Here, given an input graph $G$, we want to determine the smallest integer $k$ such that, for every possible red-blue coloring of $G$, there is a red-blue separating set of size at most $k$. We derive tight bounds on the cardinality of an optimal solution of Max Red-Blue Separation, showing that it can range from logarithmic in the graph order, up to the order minus one. We also give bounds with respect to related parameters. For trees, however, we prove an upper bound of two-thirds the order. We then show that Max Red-Blue Separation is NP-hard, even for graphs of bounded maximum degree, but can be approximated in polynomial time within a factor of $O(\ln^2 n)$.
[ { "created": "Tue, 15 Nov 2022 16:33:40 GMT", "version": "v1" } ]
2023-07-17
[ [ "Dev", "Subhadeep Ranjan", "" ], [ "Dey", "Sanjana", "" ], [ "Foucaud", "Florent", "" ], [ "Klasing", "Ralf", "" ], [ "Lehtilä", "Tuomo", "" ] ]
We introduce the Red-Blue Separation problem on graphs, where we are given a graph $G=(V,E)$ whose vertices are colored either red or blue, and we want to select a (small) subset $S \subseteq V$, called red-blue separating set, such that for every red-blue pair of vertices, there is a vertex $s \in S$ whose closed neighborhood contains exactly one of the two vertices of the pair. We study the computational complexity of Red-Blue Separation, in which one asks whether a given red-blue colored graph has a red-blue separating set of size at most a given integer. We prove that the problem is NP-complete even for restricted graph classes. We also show that it is always approximable in polynomial time within a factor of $2\ln n$, where $n$ is the input graph's order. In contrast, for triangle-free graphs and for graphs of bounded maximum degree, we show that Red-Blue Separation is solvable in polynomial time when the size of the smaller color class is bounded by a constant. However, on general graphs, we show that the problem is $W[2]$-hard even when parameterized by the solution size plus the size of the smaller color class. We also consider the problem Max Red-Blue Separation where the coloring is not part of the input. Here, given an input graph $G$, we want to determine the smallest integer $k$ such that, for every possible red-blue coloring of $G$, there is a red-blue separating set of size at most $k$. We derive tight bounds on the cardinality of an optimal solution of Max Red-Blue Separation, showing that it can range from logarithmic in the graph order, up to the order minus one. We also give bounds with respect to related parameters. For trees, however, we prove an upper bound of two-thirds the order. We then show that Max Red-Blue Separation is NP-hard, even for graphs of bounded maximum degree, but can be approximated in polynomial time within a factor of $O(\ln^2 n)$.
1911.03860
Jason Weston
Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
[ { "created": "Sun, 10 Nov 2019 05:53:40 GMT", "version": "v1" }, { "created": "Wed, 6 May 2020 14:13:02 GMT", "version": "v2" } ]
2020-05-07
[ [ "Li", "Margaret", "" ], [ "Roller", "Stephen", "" ], [ "Kulikov", "Ilia", "" ], [ "Welleck", "Sean", "" ], [ "Boureau", "Y-Lan", "" ], [ "Cho", "Kyunghyun", "" ], [ "Weston", "Jason", "" ] ]
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
2010.12416
Shuai Shao
Shuai Shao and Rui Xu and Yan-Jiang Wang and Weifeng Liu and Bao-Di Liu
SAHDL: Sparse Attention Hypergraph Regularized Dictionary Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the attention mechanism has contributed significantly to hypergraph-based neural networks. However, these methods update the attention weights as the network propagates, which means this type of attention mechanism is only suitable for deep learning-based methods and is not applicable to traditional machine learning approaches. In this paper, we propose a hypergraph-based sparse attention mechanism to tackle this issue and embed it into dictionary learning. More specifically, we first construct a sparse attention hypergraph, assigning attention weights to samples by employing $\ell_1$-norm sparse regularization to mine the high-order relationships among sample features. Then, we introduce the hypergraph Laplacian operator to preserve the local structure for subspace transformation in dictionary learning. Besides, we incorporate the discriminative information into the hypergraph as guidance to aggregate samples. Unlike previous works, our method updates attention weights independently and does not rely on a deep network. We demonstrate the efficacy of our approach on four benchmark datasets.
[ { "created": "Fri, 23 Oct 2020 14:07:00 GMT", "version": "v1" } ]
2020-10-26
[ [ "Shao", "Shuai", "" ], [ "Xu", "Rui", "" ], [ "Wang", "Yan-Jiang", "" ], [ "Liu", "Weifeng", "" ], [ "Liu", "Bao-Di", "" ] ]
In recent years, the attention mechanism has contributed significantly to hypergraph-based neural networks. However, these methods update the attention weights as the network propagates, which means this type of attention mechanism is only suitable for deep learning-based methods and is not applicable to traditional machine learning approaches. In this paper, we propose a hypergraph-based sparse attention mechanism to tackle this issue and embed it into dictionary learning. More specifically, we first construct a sparse attention hypergraph, assigning attention weights to samples by employing $\ell_1$-norm sparse regularization to mine the high-order relationships among sample features. Then, we introduce the hypergraph Laplacian operator to preserve the local structure for subspace transformation in dictionary learning. Besides, we incorporate the discriminative information into the hypergraph as guidance to aggregate samples. Unlike previous works, our method updates attention weights independently and does not rely on a deep network. We demonstrate the efficacy of our approach on four benchmark datasets.
2308.14695
Mary S\'anchez-Gord\'on
Mary S\'anchez-Gord\'on, Ricardo Colomo-Palacios, Cathy Guevara-Vega, Antonio Qui\~na-Mera
The Effect of Stereotypes on Perceived Competence of Indigenous Software Practitioners: A Professional Photo
Accepted registered report at the 17th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2023)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context: Potential employers can readily find job candidates' photos through various online sources such as former employers' websites or professional and social networks. The alignment or 'fit' between a candidate and an organization is inferred in online photos through dress style and presentations of self. On the other hand, for candidates from under-represented groups like Indigenous people, traditional clothing is an important and lively aspect that allows them to express belonging, enter ceremony, and show resistance. Objective: This exploratory study aims to empirically demonstrate whether traditional clothing in a picture affects the evaluation of candidates' competence for a position like a software developer, in which clothing should not be crucial. Method: We plan a quasi-experimental design with both candidates (photo models) and participants (evaluators) from IT companies. It follows a 2 x 2 x 2 design with dress style (traditional / non-traditional clothing), gender and race/ethnicity of the candidates as within-subjects factors. In addition, we will explore the evaluator's gender and experience in hiring as between-subjects factors.
[ { "created": "Mon, 28 Aug 2023 16:38:27 GMT", "version": "v1" } ]
2023-08-29
[ [ "Sánchez-Gordón", "Mary", "" ], [ "Colomo-Palacios", "Ricardo", "" ], [ "Guevara-Vega", "Cathy", "" ], [ "Quiña-Mera", "Antonio", "" ] ]
Context: Potential employers can readily find job candidates' photos through various online sources such as former employers' websites or professional and social networks. The alignment or 'fit' between a candidate and an organization is inferred in online photos through dress style and presentations of self. On the other hand, for candidates from under-represented groups like Indigenous people, traditional clothing is an important and lively aspect that allows them to express belonging, enter ceremony, and show resistance. Objective: This exploratory study aims to empirically demonstrate whether traditional clothing in a picture affects the evaluation of candidates' competence for a position like a software developer, in which clothing should not be crucial. Method: We plan a quasi-experimental design with both candidates (photo models) and participants (evaluators) from IT companies. It follows a 2 x 2 x 2 design with dress style (traditional / non-traditional clothing), gender and race/ethnicity of the candidates as within-subjects factors. In addition, we will explore the evaluator's gender and experience in hiring as between-subjects factors.
2111.12878
Jiahui Huang
Jiahui Huang, Tolga Birdal, Zan Gojcic, Leonidas J. Guibas, Shi-Min Hu
Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization
null
IEEE Transactions on Pattern Analysis and Machine Intelligence 2022
10.1109/TPAMI.2022.3164653
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
We present SyNoRiM, a novel way to jointly register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds. Even though the ability to process non-rigid shapes is critical in various applications ranging from computer animation to 3D digitization, the literature still lacks a robust and flexible framework to match and align a collection of real, noisy scans observed under occlusions. Given a set of such point clouds, our method first computes the pairwise correspondences parameterized via functional maps. We simultaneously learn potentially non-orthogonal basis functions to effectively regularize the deformations, while handling the occlusions in an elegant way. To maximally benefit from the multi-way information provided by the inferred pairwise deformation fields, we synchronize the pairwise functional maps into a cycle-consistent whole thanks to our novel and principled optimization formulation. We demonstrate via extensive experiments that our method achieves a state-of-the-art performance in registration accuracy, while being flexible and efficient as we handle both non-rigid and multi-body cases in a unified framework and avoid the costly optimization over point-wise permutations by the use of basis function maps.
[ { "created": "Thu, 25 Nov 2021 02:37:59 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2022 08:13:03 GMT", "version": "v2" } ]
2022-04-12
[ [ "Huang", "Jiahui", "" ], [ "Birdal", "Tolga", "" ], [ "Gojcic", "Zan", "" ], [ "Guibas", "Leonidas J.", "" ], [ "Hu", "Shi-Min", "" ] ]
We present SyNoRiM, a novel way to jointly register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds. Even though the ability to process non-rigid shapes is critical in various applications ranging from computer animation to 3D digitization, the literature still lacks a robust and flexible framework to match and align a collection of real, noisy scans observed under occlusions. Given a set of such point clouds, our method first computes the pairwise correspondences parameterized via functional maps. We simultaneously learn potentially non-orthogonal basis functions to effectively regularize the deformations, while handling the occlusions in an elegant way. To maximally benefit from the multi-way information provided by the inferred pairwise deformation fields, we synchronize the pairwise functional maps into a cycle-consistent whole thanks to our novel and principled optimization formulation. We demonstrate via extensive experiments that our method achieves a state-of-the-art performance in registration accuracy, while being flexible and efficient as we handle both non-rigid and multi-body cases in a unified framework and avoid the costly optimization over point-wise permutations by the use of basis function maps.
2103.05533
Daniel Lopes
Daniel Lopes, J\'essica Parente, Pedro Silva, Lic\'inio Roque, Penousal Machado
Performing Creativity With Computational Tools
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
The introduction of new tools into people's workflows has always opened new creative paths. This paper discusses the impact of using computational tools in the performance of creative tasks, focusing especially on graphic design. The study was driven by a grounded theory methodology applied to a set of semi-structured interviews conducted with twelve people working in the areas of graphic design, data science, computer art, music and data visualisation. Among other findings, the results suggest scenarios in which it is, or is not, worth investing in the development of new intelligent creativity-aiding tools.
[ { "created": "Tue, 9 Mar 2021 16:24:43 GMT", "version": "v1" }, { "created": "Wed, 10 Mar 2021 10:38:26 GMT", "version": "v2" } ]
2021-03-11
[ [ "Lopes", "Daniel", "" ], [ "Parente", "Jéssica", "" ], [ "Silva", "Pedro", "" ], [ "Roque", "Licínio", "" ], [ "Machado", "Penousal", "" ] ]
The introduction of new tools into people's workflows has always opened new creative paths. This paper discusses the impact of using computational tools in the performance of creative tasks, focusing especially on graphic design. The study was driven by a grounded theory methodology applied to a set of semi-structured interviews conducted with twelve people working in the areas of graphic design, data science, computer art, music and data visualisation. Among other findings, the results suggest scenarios in which it is, or is not, worth investing in the development of new intelligent creativity-aiding tools.
2406.17641
Viktor Kaptelinin
Victor Kaptelinin, Suna Bensch, Thomas Hellstr\"om, Patrik Bj\"ornfot, and Shikhar Kumar
The experience of humans' and robots' mutual (im)politeness in enacted service scenarios: An empirical study
19 pages, 5 figures, 7 tables
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper reports an empirical study of the effect of human treatment of a robot on the social perception of the robot's behavior. The study employed an enacted interaction between an anthropomorphic "waiter" robot and two customers. The robot and one of the customers (acted out by a researcher) were following four different interaction scripts, representing all combinations of mutual politeness and impoliteness of the robot and the customer. The participants (N=24, within-subject design) were assigned the role of an "included observer", that is, a fellow customer who was present in the situation without being actively involved in the interactions. The participants assessed how they experienced the interaction scenarios by providing Likert scale scores and free-text responses. The results indicate that while impolite robots' behavior was generally assessed negatively, it was commonly perceived as more justifiable and fairer if the robot was treated impolitely by the human. Politeness reciprocity expectations in the context of the social perception of robots are discussed.
[ { "created": "Tue, 25 Jun 2024 15:24:08 GMT", "version": "v1" } ]
2024-06-26
[ [ "Kaptelinin", "Victor", "" ], [ "Bensch", "Suna", "" ], [ "Hellström", "Thomas", "" ], [ "Björnfot", "Patrik", "" ], [ "Kumar", "Shikhar", "" ] ]
The paper reports an empirical study of the effect of human treatment of a robot on the social perception of the robot's behavior. The study employed an enacted interaction between an anthropomorphic "waiter" robot and two customers. The robot and one of the customers (acted out by a researcher) were following four different interaction scripts, representing all combinations of mutual politeness and impoliteness of the robot and the customer. The participants (N=24, within-subject design) were assigned the role of an "included observer", that is, a fellow customer who was present in the situation without being actively involved in the interactions. The participants assessed how they experienced the interaction scenarios by providing Likert scale scores and free-text responses. The results indicate that while impolite robots' behavior was generally assessed negatively, it was commonly perceived as more justifiable and fairer if the robot was treated impolitely by the human. Politeness reciprocity expectations in the context of the social perception of robots are discussed.
1911.06478
Mingi Ji
Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon
Sequential Recommendation with Relation-Aware Kernelized Self-Attention
8 pages, 5 figures, AAAI
AAAI 2020
null
null
cs.LG cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have identified that sequential recommendation is improved by the attention mechanism. Following this development, we propose Relation-Aware Kernelized Self-Attention (RKSA), which adopts the self-attention mechanism of the Transformer augmented with a probabilistic model. The original self-attention of the Transformer is a deterministic measure without relation-awareness. Therefore, we introduce a latent space into the self-attention, and the latent space models the recommendation context from relations as a multivariate skew-normal distribution with a kernelized covariance matrix built from co-occurrences, item characteristics, and user information. This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task specifics. We evaluated RKSA on the benchmark datasets, where it shows significant improvements over recent baseline models. RKSA was also able to produce a latent space model that explains the reasons for its recommendations.
[ { "created": "Fri, 15 Nov 2019 04:54:54 GMT", "version": "v1" } ]
2019-11-18
[ [ "Ji", "Mingi", "" ], [ "Joo", "Weonyoung", "" ], [ "Song", "Kyungwoo", "" ], [ "Kim", "Yoon-Yeong", "" ], [ "Moon", "Il-Chul", "" ] ]
Recent studies have identified that sequential recommendation is improved by the attention mechanism. Following this development, we propose Relation-Aware Kernelized Self-Attention (RKSA), which adopts the self-attention mechanism of the Transformer augmented with a probabilistic model. The original self-attention of the Transformer is a deterministic measure without relation-awareness. Therefore, we introduce a latent space into the self-attention, and the latent space models the recommendation context from relations as a multivariate skew-normal distribution with a kernelized covariance matrix built from co-occurrences, item characteristics, and user information. This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task specifics. We evaluated RKSA on the benchmark datasets, where it shows significant improvements over recent baseline models. RKSA was also able to produce a latent space model that explains the reasons for its recommendations.
1901.09486
Yonas Tadesse
Lokesh Saharan, Lianjun Wu, and Yonas Tadesse,
Modeling and Simulation of Robotic Finger Powered by Nylon Artificial Muscles- Equations with Simulink model
11 pages, 2 figures; this paper shows the modeling equations. Simulation and experimental results are to be submitted to the Journal of Mechanisms and Robotics (JMR). The detailed model and equations can be used for numerical simulation of other robotic fingers by utilizing any contractile actuators
null
null
null
cs.RO math.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a detailed model of a three-link robotic finger actuated by nylon artificial muscles, together with a Simulink model that can be used for numerical study of a robotic finger. The robotic hand prototype was demonstrated in a recent publication: Wu, L., Jung de Andrade, M., Saharan, L., Rome, R., Baughman, R., and Tadesse, Y., 2017, Compact and Low-cost Humanoid Hand Powered by Nylon Artificial Muscles, Bioinspiration & Biomimetics, 12 (2). The robotic hand is a 3D-printed, lightweight and compact hand actuated by silver-coated nylon muscles, often called twisted and coiled polymer (TCP) muscles. TCP muscles are thermal actuators that contract when heated, and they are attracting attention for applications in robotics. The purpose of this paper is to present the modeling equations, derived using the Euler-Lagrange approach, in a form suitable for implementation in a Simulink model.
[ { "created": "Mon, 28 Jan 2019 02:31:45 GMT", "version": "v1" } ]
2019-01-29
[ [ "Saharan", "Lokesh", "" ], [ "Wu", "Lianjun", "" ], [ "Tadesse", "Yonas", "" ] ]
This paper presents a detailed model of a three-link robotic finger actuated by nylon artificial muscles, together with a Simulink model that can be used for numerical study of a robotic finger. The robotic hand prototype was demonstrated in a recent publication: Wu, L., Jung de Andrade, M., Saharan, L., Rome, R., Baughman, R., and Tadesse, Y., 2017, Compact and Low-cost Humanoid Hand Powered by Nylon Artificial Muscles, Bioinspiration & Biomimetics, 12 (2). The robotic hand is a 3D-printed, lightweight and compact hand actuated by silver-coated nylon muscles, often called twisted and coiled polymer (TCP) muscles. TCP muscles are thermal actuators that contract when heated, and they are attracting attention for applications in robotics. The purpose of this paper is to present the modeling equations, derived using the Euler-Lagrange approach, in a form suitable for implementation in a Simulink model.
2302.05728
Muhammad Ahmed
Muhammad Ahmed, Anam Qureshi, Jawwad Ahmed Shamsi, Murk Marvi
Sequential Embedding-based Attentive (SEA) classifier for malware classification
null
2022 International Conference on Cyber Warfare and Security (ICCWS), Islamabad, Pakistan, 2022, pp. 28-35
10.1109/ICCWS56285.2022.9998431
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
The tremendous growth in smart devices has given rise to several security threats. One of the most prominent threats is malicious software, also known as malware. Malware can corrupt a device and bring down an entire network. Therefore, its early detection and mitigation are extremely important to avoid catastrophic effects. In this work, we propose a solution for malware detection using state-of-the-art natural language processing (NLP) techniques. Our main focus is to provide a lightweight yet effective classifier for malware detection that can be used on heterogeneous devices, be it a resource-constrained device or a resourceful machine. Our proposed model is tested on the benchmark data set, achieving an accuracy of 99.13 percent and a log loss score of 0.04.
[ { "created": "Sat, 11 Feb 2023 15:48:16 GMT", "version": "v1" } ]
2023-02-14
[ [ "Ahmed", "Muhammad", "" ], [ "Qureshi", "Anam", "" ], [ "Shamsi", "Jawwad Ahmed", "" ], [ "Marvi", "Murk", "" ] ]
The tremendous growth in smart devices has given rise to several security threats. One of the most prominent threats is malicious software, also known as malware. Malware can corrupt a device and bring down an entire network. Therefore, its early detection and mitigation are extremely important to avoid catastrophic effects. In this work, we propose a solution for malware detection using state-of-the-art natural language processing (NLP) techniques. Our main focus is to provide a lightweight yet effective classifier for malware detection that can be used on heterogeneous devices, be it a resource-constrained device or a resourceful machine. Our proposed model is tested on the benchmark data set, achieving an accuracy of 99.13 percent and a log loss score of 0.04.
1907.05967
Hossein Kazemi
Hossein Kazemi, Majid Safari and Harald Haas
Multi-Hop Wireless Optical Backhauling for LiFi Attocell Networks: Bandwidth Scheduling and Power Control
36 pages, 21 figures, 1 table
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The backhaul of hundreds of light fidelity (LiFi) base stations (BSs) constitutes a major challenge. Indoor wireless optical backhauling is a novel approach whereby the interconnections between adjacent LiFi BSs are provided by way of directed line-of-sight (LOS) wireless infrared (IR) links. Building on the aforesaid approach, this paper presents the top-down design of a multi-hop wireless backhaul configuration for multi-tier optical attocell networks by proposing the novel idea of super cells. Such cells incorporate multiple clusters of attocells that are connected to the core network via a single gateway based on multi-hop decode-and-forward (DF) relaying. Consequently, new challenges arise for managing the bandwidth and power resources of the bottleneck backhaul. By putting forward user-based bandwidth scheduling (UBS) and cell-based bandwidth scheduling (CBS) policies, the system-level modeling and analysis of the end-to-end multi-user sum rate is elaborated. In addition, optimal bandwidth scheduling under both UBS and CBS policies are formulated as constrained convex optimization problems, which are solved by using the projected subgradient method. Furthermore, the transmission power of the backhaul system is opportunistically reduced by way of an innovative fixed power control (FPC) strategy. The notion of backhaul bottleneck occurrence (BBO) is introduced. An accurate approximate expression of the probability of BBO is derived, and then verified using Monte Carlo simulations. Several insights are provided into the offered gains of the proposed schemes through extensive computer simulations, by studying different aspects of the performance of super cells including the average sum rate, the BBO probability and the backhaul power efficiency (PE).
[ { "created": "Fri, 12 Jul 2019 21:47:11 GMT", "version": "v1" } ]
2019-07-16
[ [ "Kazemi", "Hossein", "" ], [ "Safari", "Majid", "" ], [ "Haas", "Harald", "" ] ]
The backhaul of hundreds of light fidelity (LiFi) base stations (BSs) constitutes a major challenge. Indoor wireless optical backhauling is a novel approach whereby the interconnections between adjacent LiFi BSs are provided by way of directed line-of-sight (LOS) wireless infrared (IR) links. Building on the aforesaid approach, this paper presents the top-down design of a multi-hop wireless backhaul configuration for multi-tier optical attocell networks by proposing the novel idea of super cells. Such cells incorporate multiple clusters of attocells that are connected to the core network via a single gateway based on multi-hop decode-and-forward (DF) relaying. Consequently, new challenges arise for managing the bandwidth and power resources of the bottleneck backhaul. By putting forward user-based bandwidth scheduling (UBS) and cell-based bandwidth scheduling (CBS) policies, the system-level modeling and analysis of the end-to-end multi-user sum rate is elaborated. In addition, the optimal bandwidth scheduling problems under both UBS and CBS policies are formulated as constrained convex optimization problems, which are solved by using the projected subgradient method. Furthermore, the transmission power of the backhaul system is opportunistically reduced by way of an innovative fixed power control (FPC) strategy. The notion of backhaul bottleneck occurrence (BBO) is introduced. An accurate approximate expression of the probability of BBO is derived, and then verified using Monte Carlo simulations. Several insights are provided into the offered gains of the proposed schemes through extensive computer simulations, by studying different aspects of the performance of super cells, including the average sum rate, the BBO probability, and the backhaul power efficiency (PE).
2111.14434
Pedro Miguel Sanchez Sanchez
Pedro Miguel S\'anchez S\'anchez, Alberto Huertas Celdr\'an, Jos\'e Rafael Buend\'ia Rubio, G\'er\^ome Bovet, Gregorio Mart\'inez P\'erez
Robust Federated Learning for execution time-based device model identification under label-flipping attack
null
null
10.1007/s10586-022-03949-w
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
The computing device deployment explosion experienced in recent years, motivated by the advances of technologies such as Internet-of-Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and, usually, low complexity required to be launched. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriated for scenarios where data privacy and protection is a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with an FL one while using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. Later, the impact of a label-flipping attack during the federated model training is evaluated, using several aggregation mechanisms as countermeasure. Zeno and coordinate-wise median aggregation show the best performance, although their performance greatly degrades when the percentage of fully malicious clients (all training samples poisoned) grows over 50%.
[ { "created": "Mon, 29 Nov 2021 10:27:14 GMT", "version": "v1" } ]
2023-01-06
[ [ "Sánchez", "Pedro Miguel Sánchez", "" ], [ "Celdrán", "Alberto Huertas", "" ], [ "Rubio", "José Rafael Buendía", "" ], [ "Bovet", "Gérôme", "" ], [ "Pérez", "Gregorio Martínez", "" ] ]
The computing device deployment explosion experienced in recent years, motivated by the advances of technologies such as Internet-of-Things (IoT) and 5G, has led to a global scenario with increasing cybersecurity risks and threats. Among them, device spoofing and impersonation cyberattacks stand out due to their impact and, usually, the low complexity required to launch them. To solve this issue, several solutions have emerged to identify device models and types based on the combination of behavioral fingerprinting and Machine/Deep Learning (ML/DL) techniques. However, these solutions are not appropriate for scenarios where data privacy and protection are a must, as they require data centralization for processing. In this context, newer approaches such as Federated Learning (FL) have not been fully explored yet, especially when malicious clients are present in the scenario setup. The present work analyzes and compares the device model identification performance of a centralized DL model with an FL one while using execution time-based events. For experimental purposes, a dataset containing execution-time features of 55 Raspberry Pis belonging to four different models has been collected and published. Using this dataset, the proposed solution achieved 0.9999 accuracy in both setups, centralized and federated, showing no performance decrease while preserving data privacy. Later, the impact of a label-flipping attack during the federated model training is evaluated, using several aggregation mechanisms as countermeasures. Zeno and coordinate-wise median aggregation show the best performance, although their performance greatly degrades when the percentage of fully malicious clients (all training samples poisoned) grows over 50%.
1904.11597
Nilanjan Roy Chowdhury. Dr.
Nilanjan Roy Chowdhury, Nandini Negi, Aranya Chakrabortty
A New Cyber-Secure Countermeasure for LTI systems under DoS attacks
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new counter-measure to mitigate denial-of-service cyber-attacks in linear time-invariant (LTI) systems. We first design a sparse linear quadratic regulator (LQR) optimal controller for a given LTI plant and evaluate the priority of the feedback communication links in terms of the loss of closed-loop performance when the corresponding block of the feedback gain matrix is removed. An attacker may know about this priority ordering, and thereby attack the links with the highest priority. To prevent this, we present a message rerouting strategy by which the states that are scheduled to be transmitted through the high priority links can be rerouted through lower priority ones in case the former get attacked. Since the attacked link is not available for service, and the states of the low priority links can no longer be accommodated either, we run a structured $\mathcal{H}_2$ control algorithm to determine the post-attack optimal feedback gains. We illustrate various aspects of the proposed algorithms by simulations.
[ { "created": "Thu, 25 Apr 2019 21:29:26 GMT", "version": "v1" } ]
2019-04-29
[ [ "Chowdhury", "Nilanjan Roy", "" ], [ "Negi", "Nandini", "" ], [ "Chakrabortty", "Aranya", "" ] ]
This paper presents a new countermeasure to mitigate denial-of-service cyber-attacks in linear time-invariant (LTI) systems. We first design a sparse linear quadratic regulator (LQR) optimal controller for a given LTI plant and evaluate the priority of the feedback communication links in terms of the loss of closed-loop performance when the corresponding block of the feedback gain matrix is removed. An attacker may know about this priority ordering, and thereby attack the links with the highest priority. To prevent this, we present a message rerouting strategy by which the states that are scheduled to be transmitted through the high-priority links can be rerouted through lower-priority ones in case the former get attacked. Since the attacked link is not available for service, and the states of the low-priority links can no longer be accommodated either, we run a structured $\mathcal{H}_2$ control algorithm to determine the post-attack optimal feedback gains. We illustrate various aspects of the proposed algorithms by simulations.
2407.13937
Jen-Hao Cheng
Sheng-Yao Kuan, Jen-Hao Cheng, Hsiang-Wei Huang, Wenhao Chai, Cheng-Yen Yang, Hugo Latapie, Gaowen Liu, Bing-Fei Wu, Jenq-Neng Hwang
Boosting Online 3D Multi-Object Tracking through Camera-Radar Cross Check
2024 IEEE Intelligent Vehicles Symposium (IV)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In the domain of autonomous driving, the integration of multi-modal perception techniques based on data from diverse sensors has demonstrated substantial progress. Effectively surpassing the capabilities of state-of-the-art single-modality detectors through sensor fusion remains an active challenge. This work leverages the respective advantages of cameras in perspective view and radars in Bird's Eye View (BEV) to greatly enhance overall detection and tracking performance. Our approach, Camera-Radar Associated Fusion Tracking Booster (CRAFTBooster), represents a pioneering effort to enhance radar-camera fusion in the tracking stage, contributing to improved 3D MOT accuracy. The superior experimental results on the K-Radaar dataset, which exhibit 5-6% on IDF1 tracking performance gain, validate the potential of effective sensor fusion in advancing autonomous driving.
[ { "created": "Thu, 18 Jul 2024 23:32:27 GMT", "version": "v1" } ]
2024-07-22
[ [ "Kuan", "Sheng-Yao", "" ], [ "Cheng", "Jen-Hao", "" ], [ "Huang", "Hsiang-Wei", "" ], [ "Chai", "Wenhao", "" ], [ "Yang", "Cheng-Yen", "" ], [ "Latapie", "Hugo", "" ], [ "Liu", "Gaowen", "" ], [ "Wu", "Bing-Fei", "" ], [ "Hwang", "Jenq-Neng", "" ] ]
In the domain of autonomous driving, the integration of multi-modal perception techniques based on data from diverse sensors has demonstrated substantial progress. Effectively surpassing the capabilities of state-of-the-art single-modality detectors through sensor fusion remains an active challenge. This work leverages the respective advantages of cameras in perspective view and radars in Bird's Eye View (BEV) to greatly enhance overall detection and tracking performance. Our approach, Camera-Radar Associated Fusion Tracking Booster (CRAFTBooster), represents a pioneering effort to enhance radar-camera fusion in the tracking stage, contributing to improved 3D MOT accuracy. The superior experimental results on the K-Radar dataset, which exhibit a 5-6% IDF1 tracking performance gain, validate the potential of effective sensor fusion in advancing autonomous driving.
1806.09094
Yanxiang Jiang
Wenlong Huang, Yanxiang Jiang, Mehdi Bennis, Fu-Chun Zheng, Haris Gacanin, and Xiaohu You
Decentralized Asynchronous Coded Caching in Fog-RAN
6 pages, 2 figures. This work is accepted by IEEE VTC 2018 FALL
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate asynchronous coded caching in fog radio access networks (F-RAN). To minimize the fronthaul load, the encoding set collapsing rule and encoding set partition method are proposed to establish the relationship between the coded-multicasting contents in asynchronous and synchronous coded caching. Furthermore, a decentralized asynchronous coded caching scheme is proposed, which provides asynchronous and synchronous transmission methods for different delay requirements. The simulation results show that our proposed scheme creates considerable coded-multicasting opportunities in asynchronous request scenarios.
[ { "created": "Sun, 24 Jun 2018 07:07:07 GMT", "version": "v1" } ]
2018-06-26
[ [ "Huang", "Wenlong", "" ], [ "Jiang", "Yanxiang", "" ], [ "Bennis", "Mehdi", "" ], [ "Zheng", "Fu-Chun", "" ], [ "Gacanin", "Haris", "" ], [ "You", "Xiaohu", "" ] ]
In this paper, we investigate asynchronous coded caching in fog radio access networks (F-RAN). To minimize the fronthaul load, the encoding set collapsing rule and encoding set partition method are proposed to establish the relationship between the coded-multicasting contents in asynchronous and synchronous coded caching. Furthermore, a decentralized asynchronous coded caching scheme is proposed, which provides asynchronous and synchronous transmission methods for different delay requirements. The simulation results show that our proposed scheme creates considerable coded-multicasting opportunities in asynchronous request scenarios.
1912.00127
Chowdhury Rahman
Md. Hasibur Rahman, Chowdhury Rafeed Rahman, Ruhul Amin, Md. Habibur Rahman Sifat and Afra Anika
A Hybrid Approach Towards Two Stage Bengali Question Classification Utilizing Smart Data Balancing Technique
null
null
10.1007/978-3-030-52856-0_36
null
cs.CL cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question classification (QC) is the primary step of the Question Answering (QA) system. Question Classification (QC) system classifies the questions in particular classes so that Question Answering (QA) System can provide correct answers for the questions. Our system categorizes the factoid type questions asked in natural language after extracting features of the questions. We present a two stage QC system for Bengali. It utilizes one dimensional convolutional neural network for classifying questions into coarse classes in the first stage. Word2vec representation of existing words of the question corpus have been constructed and used for assisting 1D CNN. A smart data balancing technique has been employed for giving data hungry convolutional neural network the advantage of a greater number of effective samples to learn from. For each coarse class, a separate Stochastic Gradient Descent (SGD) based classifier has been used in order to differentiate among the finer classes within that coarse class. TF-IDF representation of each word has been used as feature for the SGD classifiers implemented as part of second stage classification. Experiments show the effectiveness of our proposed method for Bengali question classification.
[ { "created": "Sat, 30 Nov 2019 04:00:31 GMT", "version": "v1" }, { "created": "Sun, 8 Dec 2019 02:15:32 GMT", "version": "v2" }, { "created": "Tue, 3 Mar 2020 03:53:55 GMT", "version": "v3" } ]
2020-08-11
[ [ "Rahman", "Md. Hasibur", "" ], [ "Rahman", "Chowdhury Rafeed", "" ], [ "Amin", "Ruhul", "" ], [ "Sifat", "Md. Habibur Rahman", "" ], [ "Anika", "Afra", "" ] ]
Question classification (QC) is the primary step of a Question Answering (QA) system. A QC system classifies questions into particular classes so that the QA system can provide correct answers for them. Our system categorizes factoid-type questions asked in natural language after extracting features of the questions. We present a two-stage QC system for Bengali. It utilizes a one-dimensional convolutional neural network (1D CNN) for classifying questions into coarse classes in the first stage. Word2vec representations of the existing words of the question corpus have been constructed and used for assisting the 1D CNN. A smart data balancing technique has been employed to give the data-hungry convolutional neural network the advantage of a greater number of effective samples to learn from. For each coarse class, a separate Stochastic Gradient Descent (SGD) based classifier has been used in order to differentiate among the finer classes within that coarse class. The TF-IDF representation of each word has been used as a feature for the SGD classifiers implemented as part of the second-stage classification. Experiments show the effectiveness of our proposed method for Bengali question classification.
1401.1140
Axel Bacher
Axel Bacher, Olivier Bodini, Alice Jacquot
Efficient random sampling of binary and unary-binary trees via holonomic equations
null
null
null
null
cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new uniform random sampler for binary trees with $n$ internal nodes consuming $2n + \Theta(\log(n)^2)$ random bits on average. This makes it quasi-optimal and out-performs the classical Remy algorithm. We also present a sampler for unary-binary trees with $n$ nodes taking $\Theta(n)$ random bits on average. Both are the first linear-time algorithms to be optimal up to a constant.
[ { "created": "Mon, 6 Jan 2014 17:04:52 GMT", "version": "v1" }, { "created": "Tue, 7 Jan 2014 08:36:22 GMT", "version": "v2" } ]
2018-02-20
[ [ "Bacher", "Axel", "" ], [ "Bodini", "Olivier", "" ], [ "Jacquot", "Alice", "" ] ]
We present a new uniform random sampler for binary trees with $n$ internal nodes consuming $2n + \Theta(\log(n)^2)$ random bits on average. This makes it quasi-optimal and outperforms the classical Remy algorithm. We also present a sampler for unary-binary trees with $n$ nodes taking $\Theta(n)$ random bits on average. Both are the first linear-time algorithms to be optimal up to a constant.
1107.4711
Tamir Tassa
Tamir Tassa
Finding All Allowed Edges in a Bipartite Graph
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of finding all allowed edges in a bipartite graph $G=(V,E)$, i.e., all edges that are included in some maximum matching. We show that given any maximum matching in the graph, it is possible to perform this computation in linear time $O(n+m)$ (where $n=|V|$ and $m=|E|$). Hence, the time complexity of finding all allowed edges reduces to that of finding a single maximum matching, which is $O(n^{1/2}m)$ [Hopcroft and Karp 1973], or $O((n/\log n)^{1/2}m)$ for dense graphs with $m=\Theta(n^2)$ [Alt et al. 1991]. This time complexity improves upon that of the best known algorithms for the problem, which is $O(nm)$ ([Costa 1994] for bipartite graphs, and [Carvalho and Cheriyan 2005] for general graphs). Other algorithms for solving that problem are randomized algorithms due to [Rabin and Vazirani 1989] and [Cheriyan 1997], the runtime of which is $\tilde{O}(n^{2.376})$. Our algorithm, apart from being deterministic, improves upon that time complexity for bipartite graphs when $m=O(n^r)$ and $r<1.876$. In addition, our algorithm is elementary, conceptually simple, and easy to implement.
[ { "created": "Sat, 23 Jul 2011 21:32:55 GMT", "version": "v1" } ]
2011-07-26
[ [ "Tassa", "Tamir", "" ] ]
We consider the problem of finding all allowed edges in a bipartite graph $G=(V,E)$, i.e., all edges that are included in some maximum matching. We show that given any maximum matching in the graph, it is possible to perform this computation in linear time $O(n+m)$ (where $n=|V|$ and $m=|E|$). Hence, the time complexity of finding all allowed edges reduces to that of finding a single maximum matching, which is $O(n^{1/2}m)$ [Hopcroft and Karp 1973], or $O((n/\log n)^{1/2}m)$ for dense graphs with $m=\Theta(n^2)$ [Alt et al. 1991]. This time complexity improves upon that of the best known algorithms for the problem, which is $O(nm)$ ([Costa 1994] for bipartite graphs, and [Carvalho and Cheriyan 2005] for general graphs). Other algorithms for solving that problem are randomized algorithms due to [Rabin and Vazirani 1989] and [Cheriyan 1997], the runtime of which is $\tilde{O}(n^{2.376})$. Our algorithm, apart from being deterministic, improves upon that time complexity for bipartite graphs when $m=O(n^r)$ and $r<1.876$. In addition, our algorithm is elementary, conceptually simple, and easy to implement.
1501.00318
Muhammad Jawaherul Alam
Md. Jawaherul Alam, David Eppstein, Michael Kaufmann, Stephen G. Kobourov, Sergey Pupyrev, Andre Schulz, and Torsten Ueckerdt
Contact Representations of Sparse Planar Graphs
null
null
null
null
cs.CG cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study representations of graphs by contacts of circular arcs, CCA-representations for short, where the vertices are interior-disjoint circular arcs in the plane and each edge is realized by an endpoint of one arc touching the interior of another. A graph is (2,k)-sparse if every s-vertex subgraph has at most 2s - k edges, and (2, k)-tight if in addition it has exactly 2n - k edges, where n is the number of vertices. Every graph with a CCA- representation is planar and (2, 0)-sparse, and it follows from known results on contacts of line segments that for k >= 3 every (2, k)-sparse graph has a CCA-representation. Hence the question of CCA-representability is open for (2, k)-sparse graphs with 0 <= k <= 2. We partially answer this question by computing CCA-representations for several subclasses of planar (2,0)-sparse graphs. In particular, we show that every plane (2, 2)-sparse graph has a CCA-representation, and that any plane (2, 1)-tight graph or (2, 0)-tight graph dual to a (2, 3)-tight graph or (2, 4)-tight graph has a CCA-representation. Next, we study CCA-representations in which each arc has an empty convex hull. We characterize the plane graphs that have such a representation, based on the existence of a special orientation of the graph edges. Using this characterization, we show that every plane graph of maximum degree 4 has such a representation, but that finding such a representation for a plane (2, 0)-tight graph with maximum degree 5 is an NP-complete problem. Finally, we describe a simple algorithm for representing plane (2, 0)-sparse graphs with wedges, where each vertex is represented with a sequence of two circular arcs (straight-line segments).
[ { "created": "Thu, 1 Jan 2015 21:22:30 GMT", "version": "v1" } ]
2015-01-05
[ [ "Alam", "Md. Jawaherul", "" ], [ "Eppstein", "David", "" ], [ "Kaufmann", "Michael", "" ], [ "Kobourov", "Stephen G.", "" ], [ "Pupyrev", "Sergey", "" ], [ "Schulz", "Andre", "" ], [ "Ueckerdt", "Torsten", "" ] ]
We study representations of graphs by contacts of circular arcs, CCA-representations for short, where the vertices are interior-disjoint circular arcs in the plane and each edge is realized by an endpoint of one arc touching the interior of another. A graph is (2, k)-sparse if every s-vertex subgraph has at most 2s - k edges, and (2, k)-tight if in addition it has exactly 2n - k edges, where n is the number of vertices. Every graph with a CCA-representation is planar and (2, 0)-sparse, and it follows from known results on contacts of line segments that for k >= 3 every (2, k)-sparse graph has a CCA-representation. Hence the question of CCA-representability is open for (2, k)-sparse graphs with 0 <= k <= 2. We partially answer this question by computing CCA-representations for several subclasses of planar (2, 0)-sparse graphs. In particular, we show that every plane (2, 2)-sparse graph has a CCA-representation, and that any plane (2, 1)-tight graph or (2, 0)-tight graph dual to a (2, 3)-tight graph or (2, 4)-tight graph has a CCA-representation. Next, we study CCA-representations in which each arc has an empty convex hull. We characterize the plane graphs that have such a representation, based on the existence of a special orientation of the graph edges. Using this characterization, we show that every plane graph of maximum degree 4 has such a representation, but that finding such a representation for a plane (2, 0)-tight graph with maximum degree 5 is an NP-complete problem. Finally, we describe a simple algorithm for representing plane (2, 0)-sparse graphs with wedges, where each vertex is represented with a sequence of two circular arcs (straight-line segments).
2208.02953
Tarik A. Rashid
Nitesh Banskota, Abeer Alsadoon, P.W.C. Prasad, Ahmed Dawoud, Tarik A. Rashid, Omar Hisham Alsadoon
A Novel Enhanced Convolution Neural Network with Extreme Learning Machine: Facial Emotional Recognition in Psychology Practices
19 pages
Multimedia Tools and Applications, 2022
10.1007/s11042-022-13567-8
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Facial emotional recognition is one of the essential tools used by recognition psychology to diagnose patients. Face and facial emotional recognition are areas where machine learning is excelling. Facial Emotion Recognition in an unconstrained environment is an open challenge for digital image processing due to different environments, such as lighting conditions, pose variation, yaw motion, and occlusions. Deep learning approaches have shown significant improvements in image recognition. However, accuracy and time still need improvements. This research aims to improve facial emotion recognition accuracy during the training session and reduce processing time using a modified Convolution Neural Network Enhanced with Extreme Learning Machine (CNNEELM). The system entails (CNNEELM) improving the accuracy in image registration during the training session. Furthermore, the system recognizes six facial emotions happy, sad, disgust, fear, surprise, and neutral with the proposed CNNEELM model. The study shows that the overall facial emotion recognition accuracy is improved by 2% than the state of art solutions with a modified Stochastic Gradient Descent (SGD) technique. With the Extreme Learning Machine (ELM) classifier, the processing time is brought down to 65ms from 113ms, which can smoothly classify each frame from a video clip at 20fps. With the pre-trained InceptionV3 model, the proposed CNNEELM model is trained with JAFFE, CK+, and FER2013 expression datasets. The simulation results show significant improvements in accuracy and processing time, making the model suitable for the video analysis process. Besides, the study solves the issue of the large processing time required to process the facial images.
[ { "created": "Fri, 5 Aug 2022 02:21:34 GMT", "version": "v1" } ]
2022-08-08
[ [ "Banskota", "Nitesh", "" ], [ "Alsadoon", "Abeer", "" ], [ "Prasad", "P. W. C.", "" ], [ "Dawoud", "Ahmed", "" ], [ "Rashid", "Tarik A.", "" ], [ "Alsadoon", "Omar Hisham", "" ] ]
Facial emotional recognition is one of the essential tools used by recognition psychology to diagnose patients. Face and facial emotional recognition are areas where machine learning is excelling. Facial emotion recognition in an unconstrained environment is an open challenge for digital image processing due to different environments, such as lighting conditions, pose variation, yaw motion, and occlusions. Deep learning approaches have shown significant improvements in image recognition. However, accuracy and time still need improvements. This research aims to improve facial emotion recognition accuracy during the training session and reduce processing time using a modified Convolution Neural Network Enhanced with Extreme Learning Machine (CNNEELM). The system entails CNNEELM improving the accuracy of image registration during the training session. Furthermore, the system recognizes six facial emotions: happy, sad, disgust, fear, surprise, and neutral with the proposed CNNEELM model. The study shows that the overall facial emotion recognition accuracy is improved by 2% over state-of-the-art solutions with a modified Stochastic Gradient Descent (SGD) technique. With the Extreme Learning Machine (ELM) classifier, the processing time is brought down to 65ms from 113ms, which allows each frame from a video clip to be smoothly classified at 20fps. With the pre-trained InceptionV3 model, the proposed CNNEELM model is trained with the JAFFE, CK+, and FER2013 expression datasets. The simulation results show significant improvements in accuracy and processing time, making the model suitable for the video analysis process. Moreover, the study solves the issue of the large processing time required to process facial images.
2003.04315
Benjamin Lee
Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld
LIMEADE: From AI Explanations to Advice Taking
18 pages, 7 figures
null
null
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA$^2$Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This paper introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post-hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on seventy real-world models across two broad domains: image classification and text recommendation. We show our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
[ { "created": "Mon, 9 Mar 2020 18:00:00 GMT", "version": "v1" }, { "created": "Fri, 22 Oct 2021 03:13:39 GMT", "version": "v2" }, { "created": "Tue, 1 Mar 2022 23:42:10 GMT", "version": "v3" }, { "created": "Wed, 12 Oct 2022 22:45:19 GMT", "version": "v4" }, { "created": "Tue, 17 Jan 2023 23:29:15 GMT", "version": "v5" } ]
2023-01-19
[ [ "Lee", "Benjamin Charles Germain", "" ], [ "Downey", "Doug", "" ], [ "Lo", "Kyle", "" ], [ "Weld", "Daniel S.", "" ] ]
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA$^2$Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This paper introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post-hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on seventy real-world models across two broad domains: image classification and text recommendation. We show our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
2304.00932
Sijie Wang
Sijie Wang, Qiyu Kang, Rui She, Wei Wang, Kai Zhao, Yang Song, Wee Peng Tay
HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion
Accepted by CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
LiDAR relocalization plays a crucial role in many fields, including robotics, autonomous driving, and computer vision. LiDAR-based retrieval from a database typically incurs high computation storage costs and can lead to globally inaccurate pose estimations if the database is too sparse. On the other hand, pose regression methods take images or point clouds as inputs and directly regress global poses in an end-to-end manner. They do not perform database matching and are more computationally efficient than retrieval techniques. We propose HypLiLoc, a new model for LiDAR pose regression. We use two branched backbones to extract 3D features and 2D projection features, respectively. We consider multi-modal feature fusion in both Euclidean and hyperbolic spaces to obtain more effective feature representations. Experimental results indicate that HypLiLoc achieves state-of-the-art performance in both outdoor and indoor datasets. We also conduct extensive ablation studies on the framework design, which demonstrate the effectiveness of multi-modal feature extraction and multi-space embedding. Our code is released at: https://github.com/sijieaaa/HypLiLoc
[ { "created": "Mon, 3 Apr 2023 12:43:34 GMT", "version": "v1" }, { "created": "Thu, 25 May 2023 12:10:38 GMT", "version": "v2" } ]
2023-05-26
[ [ "Wang", "Sijie", "" ], [ "Kang", "Qiyu", "" ], [ "She", "Rui", "" ], [ "Wang", "Wei", "" ], [ "Zhao", "Kai", "" ], [ "Song", "Yang", "" ], [ "Tay", "Wee Peng", "" ] ]
LiDAR relocalization plays a crucial role in many fields, including robotics, autonomous driving, and computer vision. LiDAR-based retrieval from a database typically incurs high computation and storage costs and can lead to globally inaccurate pose estimations if the database is too sparse. On the other hand, pose regression methods take images or point clouds as inputs and directly regress global poses in an end-to-end manner. They do not perform database matching and are more computationally efficient than retrieval techniques. We propose HypLiLoc, a new model for LiDAR pose regression. We use two branched backbones to extract 3D features and 2D projection features, respectively. We consider multi-modal feature fusion in both Euclidean and hyperbolic spaces to obtain more effective feature representations. Experimental results indicate that HypLiLoc achieves state-of-the-art performance in both outdoor and indoor datasets. We also conduct extensive ablation studies on the framework design, which demonstrate the effectiveness of multi-modal feature extraction and multi-space embedding. Our code is released at: https://github.com/sijieaaa/HypLiLoc
1908.03179
Alexey Gotsman
Artem Khyzha, Hagit Attiya and Alexey Gotsman
Privatization-Safe Transactional Memories (Extended Version)
Extended version of a paper from DISC'19 (International Symposium on Distributed Computing). arXiv admin note: substantial text overlap with arXiv:1801.04249
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transactional memory (TM) facilitates the development of concurrent applications by letting the programmer designate certain code blocks as atomic. Programmers using a TM often would like to access the same data both inside and outside transactions, and would prefer their programs to have a strongly atomic semantics, which allows transactions to be viewed as executing atomically with respect to non-transactional accesses. Since guaranteeing such semantics for arbitrary programs is prohibitively expensive, researchers have suggested guaranteeing it only for certain data-race free (DRF) programs, particularly those that follow the privatization idiom: from some point on, threads agree that a given object can be accessed non-transactionally. In this paper we show that a variant of Transactional DRF (TDRF) by Dalessandro et al. is appropriate for a class of privatization-safe TMs, which allow using privatization idioms. We prove that, if such a TM satisfies a condition we call privatization-safe opacity and a program using the TM is TDRF under strongly atomic semantics, then the program indeed has such semantics. We also present a method for proving privatization-safe opacity that reduces proving this generalization to proving the usual opacity, and apply the method to a TM based on two-phase locking and a privatization-safe version of TL2. Finally, we establish the inherent cost of privatization-safety: we prove that a TM cannot be progressive and have invisible reads if it guarantees strongly atomic semantics for TDRF programs.
[ { "created": "Thu, 8 Aug 2019 17:19:42 GMT", "version": "v1" } ]
2019-08-09
[ [ "Khyzha", "Artem", "" ], [ "Attiya", "Hagit", "" ], [ "Gotsman", "Alexey", "" ] ]
Transactional memory (TM) facilitates the development of concurrent applications by letting the programmer designate certain code blocks as atomic. Programmers using a TM often would like to access the same data both inside and outside transactions, and would prefer their programs to have a strongly atomic semantics, which allows transactions to be viewed as executing atomically with respect to non-transactional accesses. Since guaranteeing such semantics for arbitrary programs is prohibitively expensive, researchers have suggested guaranteeing it only for certain data-race free (DRF) programs, particularly those that follow the privatization idiom: from some point on, threads agree that a given object can be accessed non-transactionally. In this paper we show that a variant of Transactional DRF (TDRF) by Dalessandro et al. is appropriate for a class of privatization-safe TMs, which allow using privatization idioms. We prove that, if such a TM satisfies a condition we call privatization-safe opacity and a program using the TM is TDRF under strongly atomic semantics, then the program indeed has such semantics. We also present a method for proving privatization-safe opacity that reduces proving this generalization to proving the usual opacity, and apply the method to a TM based on two-phase locking and a privatization-safe version of TL2. Finally, we establish the inherent cost of privatization-safety: we prove that a TM cannot be progressive and have invisible reads if it guarantees strongly atomic semantics for TDRF programs.
2306.07565
Hao Xu
Hao Xu, Yunqing Sun, Zihao Li, Yao Sun, Lei Zhang and Xiaoshuai Zhang
deController: A Web3 Native Cyberspace Infrastructure Perspective
null
null
10.1109/MCOM.005.2200481
null
cs.NI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Web3 brings an emerging outlook for the value of decentralization, boosting the decentralized infrastructure. People can benefit from Web3, facilitated by the advances in distributed ledger technology, to read, write and own web content, services and applications more freely without revealing their real identities. Although the features and merits of Web3 have been widely discussed, the network architecture of Web3 and how to achieve complete decentralization considering law compliance in Web3 are still unclear. Here, we propose a perspective of Web3 architecture, deController, consisting of underlay and overlay networks as Web3 infrastructures to underpin services and applications. The functions of the underlay and overlay and their interactions are illustrated. Meanwhile, the security and privacy of Web3 are analyzed based on a novel design of three-tier identities cooperating with deController. Furthermore, the impacts of laws on privacy and cyber sovereignty to achieve Web3 are discussed.
[ { "created": "Tue, 13 Jun 2023 06:28:40 GMT", "version": "v1" } ]
2023-08-08
[ [ "Xu", "Hao", "" ], [ "Sun", "Yunqing", "" ], [ "Li", "Zihao", "" ], [ "Sun", "Yao", "" ], [ "Zhang", "Lei", "" ], [ "Zhang", "Xiaoshuai", "" ] ]
Web3 brings an emerging outlook for the value of decentralization, boosting the decentralized infrastructure. People can benefit from Web3, facilitated by the advances in distributed ledger technology, to read, write and own web content, services and applications more freely without revealing their real identities. Although the features and merits of Web3 have been widely discussed, the network architecture of Web3 and how to achieve complete decentralization considering law compliance in Web3 are still unclear. Here, we propose a perspective of Web3 architecture, deController, consisting of underlay and overlay networks as Web3 infrastructures to underpin services and applications. The functions of the underlay and overlay and their interactions are illustrated. Meanwhile, the security and privacy of Web3 are analyzed based on a novel design of three-tier identities cooperating with deController. Furthermore, the impacts of laws on privacy and cyber sovereignty to achieve Web3 are discussed.
2209.00812
Yue Liu
Yue Liu, Chakkrit Tantithamthavorn, Li Li, Yepang Liu
Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well?
Accepted by the 33rd IEEE International Symposium on Software Reliability Engineering (ISSRE 2022)
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning (ML)-based Android malware detection has been one of the most popular research topics in the mobile security community. An increasing number of research studies have demonstrated that machine learning is an effective and promising approach for malware detection, and some works have even claimed that their proposed models could achieve 99\% detection accuracy, leaving little room for further improvement. However, numerous prior studies have suggested that unrealistic experimental designs bring substantial biases, resulting in over-optimistic performance in malware detection. Unlike previous research that examined the detection performance of ML classifiers to locate the causes, this study employs Explainable AI (XAI) approaches to explore what ML-based models learned during the training process, inspecting and interpreting why ML-based malware classifiers perform so well under unrealistic experimental settings. We discover that temporal sample inconsistency in the training dataset brings over-optimistic classification performance (up to 99\% F1 score and accuracy). Importantly, our results indicate that ML models classify malware based on temporal differences between malware and benign samples, rather than the actual malicious behaviors. Our evaluation also confirms the fact that unrealistic experimental designs lead to not only unrealistic detection performance but also poor reliability, posing a significant obstacle to real-world applications. These findings suggest that XAI approaches should be used to help practitioners/researchers better understand how AI/ML models (i.e., malware detection) work -- not just focusing on accuracy improvement.
[ { "created": "Fri, 2 Sep 2022 04:30:47 GMT", "version": "v1" } ]
2022-09-05
[ [ "Liu", "Yue", "" ], [ "Tantithamthavorn", "Chakkrit", "" ], [ "Li", "Li", "" ], [ "Liu", "Yepang", "" ] ]
Machine learning (ML)-based Android malware detection has been one of the most popular research topics in the mobile security community. An increasing number of research studies have demonstrated that machine learning is an effective and promising approach for malware detection, and some works have even claimed that their proposed models could achieve 99\% detection accuracy, leaving little room for further improvement. However, numerous prior studies have suggested that unrealistic experimental designs bring substantial biases, resulting in over-optimistic performance in malware detection. Unlike previous research that examined the detection performance of ML classifiers to locate the causes, this study employs Explainable AI (XAI) approaches to explore what ML-based models learned during the training process, inspecting and interpreting why ML-based malware classifiers perform so well under unrealistic experimental settings. We discover that temporal sample inconsistency in the training dataset brings over-optimistic classification performance (up to 99\% F1 score and accuracy). Importantly, our results indicate that ML models classify malware based on temporal differences between malware and benign samples, rather than the actual malicious behaviors. Our evaluation also confirms the fact that unrealistic experimental designs lead to not only unrealistic detection performance but also poor reliability, posing a significant obstacle to real-world applications. These findings suggest that XAI approaches should be used to help practitioners/researchers better understand how AI/ML models (i.e., malware detection) work -- not just focusing on accuracy improvement.
1004.3165
Ashwin Nayak
Rahul Jain and Ashwin Nayak
The space complexity of recognizing well-parenthesized expressions in the streaming model: the Index function revisited
36 pages. Added more explanations for information cost, the proofs, and the notation; introduced abbreviations for random variables in Section 2 to simplify expressions; corrected typos and minor errors; updated references. To appear in IEEE Transactions on Information Theory
null
null
null
cs.CC cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show an Omega(sqrt{n}/T) lower bound for the space required by any unidirectional constant-error randomized T-pass streaming algorithm that recognizes whether an expression over two types of parentheses is well-parenthesized. This proves a conjecture due to Magniez, Mathieu, and Nayak (2009) and rigorously establishes that bidirectional streams are exponentially more efficient in space usage as compared with unidirectional ones. We obtain the lower bound by establishing the minimum amount of information that is necessarily revealed by the players about their respective inputs in a two-party communication protocol for a variant of the Index function, namely Augmented Index. The information cost trade-off is obtained by a novel application of the conceptually simple and familiar ideas such as average encoding and the cut-and-paste property of randomized protocols. Motivated by recent examples of exponential savings in space by streaming quantum algorithms, we also study quantum protocols for Augmented Index. Defining an appropriate notion of information cost for quantum protocols involves a delicate balancing act between its applicability and the ease with which we can analyze it. We define a notion of quantum information cost which reflects some of the non-intuitive properties of quantum information and give a trade-off for this notion. While this trade-off demonstrates the strength of our proof techniques, it does not lead to a space lower bound for checking parentheses. We leave such an implication for quantum streaming algorithms as an intriguing open question.
[ { "created": "Mon, 19 Apr 2010 11:52:28 GMT", "version": "v1" }, { "created": "Mon, 5 Jul 2010 22:23:03 GMT", "version": "v2" }, { "created": "Tue, 26 Jul 2011 17:48:53 GMT", "version": "v3" }, { "created": "Thu, 16 May 2013 16:07:34 GMT", "version": "v4" }, { "created": "Thu, 10 Jul 2014 14:38:29 GMT", "version": "v5" } ]
2014-07-11
[ [ "Jain", "Rahul", "" ], [ "Nayak", "Ashwin", "" ] ]
We show an Omega(sqrt{n}/T) lower bound for the space required by any unidirectional constant-error randomized T-pass streaming algorithm that recognizes whether an expression over two types of parentheses is well-parenthesized. This proves a conjecture due to Magniez, Mathieu, and Nayak (2009) and rigorously establishes that bidirectional streams are exponentially more efficient in space usage as compared with unidirectional ones. We obtain the lower bound by establishing the minimum amount of information that is necessarily revealed by the players about their respective inputs in a two-party communication protocol for a variant of the Index function, namely Augmented Index. The information cost trade-off is obtained by a novel application of the conceptually simple and familiar ideas such as average encoding and the cut-and-paste property of randomized protocols. Motivated by recent examples of exponential savings in space by streaming quantum algorithms, we also study quantum protocols for Augmented Index. Defining an appropriate notion of information cost for quantum protocols involves a delicate balancing act between its applicability and the ease with which we can analyze it. We define a notion of quantum information cost which reflects some of the non-intuitive properties of quantum information and give a trade-off for this notion. While this trade-off demonstrates the strength of our proof techniques, it does not lead to a space lower bound for checking parentheses. We leave such an implication for quantum streaming algorithms as an intriguing open question.
1812.07941
Temitayo Olugbade
Temitayo A. Olugbade, Joseph Newbold, Rose Johnson, Erica Volta, Paolo Alborno, Radoslaw Niewiadomski, Max Dillon, Gualtiero Volpe, and Nadia Bianchi-Berthouze
Automatic Detection of Reflective Thinking in Mathematical Problem Solving based on Unconstrained Bodily Exploration
null
null
10.1109/TAFFC.2020.2978069
null
cs.HC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For technology (like serious games) that aims to deliver interactive learning, it is important to address relevant mental experiences such as reflective thinking during problem solving. To facilitate research in this direction, we present the weDraw-1 Movement Dataset of body movement sensor data and reflective thinking labels for 26 children solving mathematical problems in unconstrained settings where the body (full or parts) was required to explore these problems. Further, we provide qualitative analysis of behaviours that observers used in identifying reflective thinking moments in these sessions. The body movement cues from our compilation informed features that led to an average F1 score of 0.73 for automatic detection of reflective thinking based on Long Short-Term Memory neural networks. We further obtained 0.79 average F1 score for end-to-end detection of reflective thinking periods, i.e. based on raw sensor data. Finally, the algorithms resulted in 0.64 average F1 score for period subsegments as short as 4 seconds. Overall, our results show the possibility of detecting reflective thinking moments from body movement behaviours of a child exploring mathematical concepts bodily, such as within serious game play.
[ { "created": "Tue, 18 Dec 2018 17:38:19 GMT", "version": "v1" }, { "created": "Mon, 23 Mar 2020 08:23:19 GMT", "version": "v2" } ]
2020-03-24
[ [ "Olugbade", "Temitayo A.", "" ], [ "Newbold", "Joseph", "" ], [ "Johnson", "Rose", "" ], [ "Volta", "Erica", "" ], [ "Alborno", "Paolo", "" ], [ "Niewiadomski", "Radoslaw", "" ], [ "Dillon", "Max", "" ], [ "Volpe", "Gualtiero", "" ], [ "Bianchi-Berthouze", "Nadia", "" ] ]
For technology (like serious games) that aims to deliver interactive learning, it is important to address relevant mental experiences such as reflective thinking during problem solving. To facilitate research in this direction, we present the weDraw-1 Movement Dataset of body movement sensor data and reflective thinking labels for 26 children solving mathematical problems in unconstrained settings where the body (full or parts) was required to explore these problems. Further, we provide qualitative analysis of behaviours that observers used in identifying reflective thinking moments in these sessions. The body movement cues from our compilation informed features that led to an average F1 score of 0.73 for automatic detection of reflective thinking based on Long Short-Term Memory neural networks. We further obtained 0.79 average F1 score for end-to-end detection of reflective thinking periods, i.e. based on raw sensor data. Finally, the algorithms resulted in 0.64 average F1 score for period subsegments as short as 4 seconds. Overall, our results show the possibility of detecting reflective thinking moments from body movement behaviours of a child exploring mathematical concepts bodily, such as within serious game play.
2207.11609
Hossein A. Rahmani
Hossein A. Rahmani, Mohammadmehdi Naghiaei, Ali Tourani, Yashar Deldjoo
Exploring the Impact of Temporal Bias in Point-of-Interest Recommendation
RecSys 2022
null
10.1145/3523227.3551481
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Recommending appropriate travel destinations to consumers based on contextual information such as their check-in time and location is a primary objective of Point-of-Interest (POI) recommender systems. However, the issue of contextual bias (i.e., how much consumers prefer one situation over another) has received little attention from the research community. This paper examines the effect of temporal bias, defined as the difference between users' check-in hours, leisure vs. work hours, on the consumer-side fairness of context-aware recommendation algorithms. We believe that eliminating this type of temporal (and geographical) bias might contribute to a drop in traffic-related air pollution, noting that rush-hour traffic may be more congested. To surface effective POI recommendations, we evaluated the sensitivity of state-of-the-art context-aware models to the temporal bias contained in users' check-in activities on two POI datasets, namely Gowalla and Yelp. The findings show that the examined context-aware recommendation models prefer one group of users over another based on the time of check-in and that this preference persists even when users have the same amount of interactions.
[ { "created": "Sat, 23 Jul 2022 21:25:19 GMT", "version": "v1" } ]
2022-07-26
[ [ "Rahmani", "Hossein A.", "" ], [ "Naghiaei", "Mohammadmehdi", "" ], [ "Tourani", "Ali", "" ], [ "Deldjoo", "Yashar", "" ] ]
Recommending appropriate travel destinations to consumers based on contextual information such as their check-in time and location is a primary objective of Point-of-Interest (POI) recommender systems. However, the issue of contextual bias (i.e., how much consumers prefer one situation over another) has received little attention from the research community. This paper examines the effect of temporal bias, defined as the difference between users' check-in hours, leisure vs. work hours, on the consumer-side fairness of context-aware recommendation algorithms. We believe that eliminating this type of temporal (and geographical) bias might contribute to a drop in traffic-related air pollution, noting that rush-hour traffic may be more congested. To surface effective POI recommendations, we evaluated the sensitivity of state-of-the-art context-aware models to the temporal bias contained in users' check-in activities on two POI datasets, namely Gowalla and Yelp. The findings show that the examined context-aware recommendation models prefer one group of users over another based on the time of check-in and that this preference persists even when users have the same amount of interactions.
1706.10192
Andrew Yates
Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo
Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval
To appear in WSDM 2018
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text (local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACRR, a novel context-aware neural IR model. Extensive comparisons with established models on Trec Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.
[ { "created": "Fri, 30 Jun 2017 13:39:03 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2017 13:42:11 GMT", "version": "v2" }, { "created": "Tue, 28 Nov 2017 13:43:56 GMT", "version": "v3" } ]
2017-11-29
[ [ "Hui", "Kai", "" ], [ "Yates", "Andrew", "" ], [ "Berberich", "Klaus", "" ], [ "de Melo", "Gerard", "" ] ]
Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text (local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACRR, a novel context-aware neural IR model. Extensive comparisons with established models on Trec Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.
2308.10093
Sara Moshtaghi Largani
Hamid Hoorfar, Faraneh Fathi, Sara Moshtaghi Largani, and Alireza Bagheri
Securing Pathways with Orthogonal Robots
8 pages, 5 figures
The 21st International Conference on Scientific Computing in The 2023 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'23)
10.1109/CSCE60160.2023.00103
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
The protection of pathways holds immense significance across various domains, including urban planning, transportation, surveillance, and security. This article introduces a groundbreaking approach to safeguarding pathways by employing orthogonal robots. The study specifically addresses the challenge of efficiently guarding orthogonal areas with the minimum number of orthogonal robots. The primary focus is on orthogonal pathways, characterized by a path-like dual graph of vertical decomposition. It is demonstrated that determining the minimum number of orthogonal robots for pathways can be achieved in linear time. However, it is essential to note that the general problem of finding the minimum number of robots for simple polygons with general visibility, even in the orthogonal case, is known to be NP-hard. Emphasis is placed on the flexibility of placing robots anywhere within the polygon, whether on the boundary or in the interior.
[ { "created": "Sat, 19 Aug 2023 19:05:13 GMT", "version": "v1" } ]
2024-05-27
[ [ "Hoorfar", "Hamid", "" ], [ "Fathi", "Faraneh", "" ], [ "Largani", "Sara Moshtaghi", "" ], [ "Bagheri", "Alireza", "" ] ]
The protection of pathways holds immense significance across various domains, including urban planning, transportation, surveillance, and security. This article introduces a groundbreaking approach to safeguarding pathways by employing orthogonal robots. The study specifically addresses the challenge of efficiently guarding orthogonal areas with the minimum number of orthogonal robots. The primary focus is on orthogonal pathways, characterized by a path-like dual graph of vertical decomposition. It is demonstrated that determining the minimum number of orthogonal robots for pathways can be achieved in linear time. However, it is essential to note that the general problem of finding the minimum number of robots for simple polygons with general visibility, even in the orthogonal case, is known to be NP-hard. Emphasis is placed on the flexibility of placing robots anywhere within the polygon, whether on the boundary or in the interior.
2211.10649
Hua Wei
Hao Mei, Xiaoliang Lei, Longchao Da, Bin Shi, Hua Wei
LibSignal: An Open Library for Traffic Signal Control
11 pages + 6 pages appendix. Accepted by Machine Learning Journal (2023). A short version is accepted by NeurIPS 2022 Workshop: Reinforcement Learning for Real Life. Website: https://darl-libsignal.github.io/
null
10.1007/s10994-023-06412-y
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper introduces a library for cross-simulator comparison of reinforcement learning models in traffic signal control tasks. This library is developed to implement recent state-of-the-art reinforcement learning models with extensible interfaces and unified cross-simulator evaluation metrics. It supports commonly-used simulators in traffic signal control tasks, including Simulation of Urban MObility(SUMO) and CityFlow, and multiple benchmark datasets for fair comparisons. We conducted experiments to validate our implementation of the models and to calibrate the simulators so that the experiments from one simulator could be referential to the other. Based on the validated models and calibrated environments, this paper compares and reports the performance of current state-of-the-art RL algorithms across different datasets and simulators. This is the first time that these methods have been compared fairly under the same datasets with different simulators.
[ { "created": "Sat, 19 Nov 2022 10:21:50 GMT", "version": "v1" }, { "created": "Wed, 29 Nov 2023 18:45:05 GMT", "version": "v2" } ]
2023-11-30
[ [ "Mei", "Hao", "" ], [ "Lei", "Xiaoliang", "" ], [ "Da", "Longchao", "" ], [ "Shi", "Bin", "" ], [ "Wei", "Hua", "" ] ]
This paper introduces a library for cross-simulator comparison of reinforcement learning models in traffic signal control tasks. This library is developed to implement recent state-of-the-art reinforcement learning models with extensible interfaces and unified cross-simulator evaluation metrics. It supports commonly-used simulators in traffic signal control tasks, including Simulation of Urban MObility(SUMO) and CityFlow, and multiple benchmark datasets for fair comparisons. We conducted experiments to validate our implementation of the models and to calibrate the simulators so that the experiments from one simulator could be referential to the other. Based on the validated models and calibrated environments, this paper compares and reports the performance of current state-of-the-art RL algorithms across different datasets and simulators. This is the first time that these methods have been compared fairly under the same datasets with different simulators.
1809.03363
Ethan Harris
Ethan Harris, Matthew Painter and Jonathon Hare
Torchbearer: A Model Fitting Library for PyTorch
5 pages
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce torchbearer, a model fitting library for pytorch aimed at researchers working on deep learning or differentiable programming. The torchbearer library provides a high level metric and callback API that can be used for a wide range of applications. We also include a series of built in callbacks that can be used for: model persistence, learning rate decay, logging, data visualization and more. The extensive documentation includes an example library for deep learning and dynamic programming problems and can be found at http://torchbearer.readthedocs.io. The code is licensed under the MIT License and available at https://github.com/ecs-vlc/torchbearer.
[ { "created": "Mon, 10 Sep 2018 14:46:35 GMT", "version": "v1" } ]
2018-09-11
[ [ "Harris", "Ethan", "" ], [ "Painter", "Matthew", "" ], [ "Hare", "Jonathon", "" ] ]
We introduce torchbearer, a model fitting library for pytorch aimed at researchers working on deep learning or differentiable programming. The torchbearer library provides a high level metric and callback API that can be used for a wide range of applications. We also include a series of built in callbacks that can be used for: model persistence, learning rate decay, logging, data visualization and more. The extensive documentation includes an example library for deep learning and dynamic programming problems and can be found at http://torchbearer.readthedocs.io. The code is licensed under the MIT License and available at https://github.com/ecs-vlc/torchbearer.
1804.07703
Martin Bromberger
Martin Bromberger
A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a combination of the Mixed-Echelon-Hermite transformation and the Double-Bounded Reduction for systems of linear mixed arithmetic that preserve satisfiability and can be computed in polynomial time. Together, the two transformations turn any system of linear mixed constraints into a bounded system, i.e., a system for which termination can be achieved easily. Existing approaches for linear mixed arithmetic, e.g., branch-and-bound and cuts from proofs, only explore a finite search space after application of our two transformations. Instead of generating a priori bounds for the variables, e.g., as suggested by Papadimitriou, unbounded variables are eliminated through the two transformations. The transformations orient themselves on the structure of an input system instead of computing a priori (over-)approximations out of the available constants. Experiments provide further evidence to the efficiency of the transformations in practice. We also present a polynomial method for converting certificates of (un)satisfiability from the transformed to the original system.
[ { "created": "Fri, 20 Apr 2018 16:13:24 GMT", "version": "v1" } ]
2018-04-23
[ [ "Bromberger", "Martin", "" ] ]
We present a combination of the Mixed-Echelon-Hermite transformation and the Double-Bounded Reduction for systems of linear mixed arithmetic that preserve satisfiability and can be computed in polynomial time. Together, the two transformations turn any system of linear mixed constraints into a bounded system, i.e., a system for which termination can be achieved easily. Existing approaches for linear mixed arithmetic, e.g., branch-and-bound and cuts from proofs, only explore a finite search space after application of our two transformations. Instead of generating a priori bounds for the variables, e.g., as suggested by Papadimitriou, unbounded variables are eliminated through the two transformations. The transformations orient themselves on the structure of an input system instead of computing a priori (over-)approximations out of the available constants. Experiments provide further evidence to the efficiency of the transformations in practice. We also present a polynomial method for converting certificates of (un)satisfiability from the transformed to the original system.
1804.01238
Trevor Barron
Trevor Barron, Oliver Obst, Heni Ben Amor
Information Maximizing Exploration with a Latent Dynamics Model
Presented at the NIPS 2017 Deep Reinforcement Learning Symposium
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All reinforcement learning algorithms must handle the trade-off between exploration and exploitation. Many state-of-the-art deep reinforcement learning methods use noise in the action selection, such as Gaussian noise in policy gradient methods or $\epsilon$-greedy in Q-learning. While these methods are appealing due to their simplicity, they do not explore the state space in a methodical manner. We present an approach that uses a model to derive reward bonuses as a means of intrinsic motivation to improve model-free reinforcement learning. A key insight of our approach is that this dynamics model can be learned in the latent feature space of a value function, representing the dynamics of the agent and the environment. This method is both theoretically grounded and computationally advantageous, permitting the efficient use of Bayesian information-theoretic methods in high-dimensional state spaces. We evaluate our method on several continuous control tasks, focusing on improving exploration.
[ { "created": "Wed, 4 Apr 2018 05:04:41 GMT", "version": "v1" } ]
2018-04-05
[ [ "Barron", "Trevor", "" ], [ "Obst", "Oliver", "" ], [ "Amor", "Heni Ben", "" ] ]
All reinforcement learning algorithms must handle the trade-off between exploration and exploitation. Many state-of-the-art deep reinforcement learning methods use noise in the action selection, such as Gaussian noise in policy gradient methods or $\epsilon$-greedy in Q-learning. While these methods are appealing due to their simplicity, they do not explore the state space in a methodical manner. We present an approach that uses a model to derive reward bonuses as a means of intrinsic motivation to improve model-free reinforcement learning. A key insight of our approach is that this dynamics model can be learned in the latent feature space of a value function, representing the dynamics of the agent and the environment. This method is both theoretically grounded and computationally advantageous, permitting the efficient use of Bayesian information-theoretic methods in high-dimensional state spaces. We evaluate our method on several continuous control tasks, focusing on improving exploration.
2404.16035
Chuong Huynh
Chuong Huynh, Seoung Wug Oh, Abhinav Shrivastava, Joon-Young Lee
MaGGIe: Masked Guided Gradual Human Instance Matting
CVPR 2024. Project link: https://maggie-matt.github.io
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Human matting is a foundation task in image and video processing, where human foreground pixels are extracted from the input. Prior works either improve the accuracy by additional guidance or improve the temporal consistency of a single instance across frames. We propose a new framework MaGGIe, Masked Guided Gradual Human Instance Matting, which predicts alpha mattes progressively for each human instance while maintaining the computational cost, precision, and consistency. Our method leverages modern architectures, including transformer attention and sparse convolution, to output all instance mattes simultaneously without exploding memory and latency. While keeping constant inference costs in the multiple-instance scenario, our framework achieves robust and versatile performance on our proposed synthesized benchmarks. Along with the higher-quality image and video matting benchmarks, a novel multi-instance synthesis approach from publicly available sources is introduced to increase the generalization of models in real-world scenarios.
[ { "created": "Wed, 24 Apr 2024 17:59:53 GMT", "version": "v1" } ]
2024-04-25
[ [ "Huynh", "Chuong", "" ], [ "Oh", "Seoung Wug", "" ], [ "Shrivastava", "Abhinav", "" ], [ "Lee", "Joon-Young", "" ] ]
Human matting is a foundation task in image and video processing, where human foreground pixels are extracted from the input. Prior works either improve the accuracy by additional guidance or improve the temporal consistency of a single instance across frames. We propose a new framework MaGGIe, Masked Guided Gradual Human Instance Matting, which predicts alpha mattes progressively for each human instance while maintaining the computational cost, precision, and consistency. Our method leverages modern architectures, including transformer attention and sparse convolution, to output all instance mattes simultaneously without exploding memory and latency. While keeping constant inference costs in the multiple-instance scenario, our framework achieves robust and versatile performance on our proposed synthesized benchmarks. Along with the higher-quality image and video matting benchmarks, a novel multi-instance synthesis approach from publicly available sources is introduced to increase the generalization of models in real-world scenarios.
1001.2261
Rdv Ijcsis
Bancha Burapattanasiri
High Precision MultiWave Rectifier Circuit Operating in Low Voltage 1.5 Volt Current Mode
5 pages IEEE format, International Journal of Computer Science and Information Security, IJCSIS December 2009, ISSN 1947 5500, http://sites.google.com/site/ijcsis/
International Journal of Computer Science and Information Security, IJCSIS, Vol. 6, No. 3, pp. 160-164, December 2009, USA
null
Volume 6, No. 3, ISSN 1947 5500
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents a high-precision multiwave rectifier circuit operating in low-voltage ±1.5 V current mode, implemented in 0.5 µm CMOS technology; it receives input and gives output in current mode and responds over a high frequency range. The structure combines a high-speed current comparator circuit, a current mirror circuit, and a CMOS inverter circuit. The PSpice program is used to confirm the performance in testing. PSpice simulation shows that the circuit is able to operate at a maximum input current of 400 µA p-p and a maximum frequency response of 200 MHz, with high precision, low power losses, and a non-precision zero-crossing output signal.
[ { "created": "Wed, 13 Jan 2010 18:44:37 GMT", "version": "v1" } ]
2010-01-14
[ [ "Burapattanasiri", "Bancha", "" ] ]
This article presents a high-precision multiwave rectifier circuit operating in low-voltage ±1.5 V current mode, implemented in 0.5 µm CMOS technology; it receives input and gives output in current mode and responds over a high frequency range. The structure combines a high-speed current comparator circuit, a current mirror circuit, and a CMOS inverter circuit. The PSpice program is used to confirm the performance in testing. PSpice simulation shows that the circuit is able to operate at a maximum input current of 400 µA p-p and a maximum frequency response of 200 MHz, with high precision, low power losses, and a non-precision zero-crossing output signal.
2307.06981
Jeremy Blackburn
Utkucan Balc{\i}, Michael Sirivianos, Jeremy Blackburn
A Data-driven Understanding of Left-Wing Extremists on Social Media
null
null
null
null
cs.SI cs.CY
http://creativecommons.org/licenses/by/4.0/
Social media's role in the spread and evolution of extremism is a focus of intense study. Online extremists have been involved in the spread of online hate, mis/disinformation, and real-world violence. However, the overwhelming majority of existing work has focused on right-wing extremism. In this paper, we perform a first of its kind large-scale, data-driven study exploring left-wing extremism. We focus on "tankies," a left-wing community that first arose in the 1950s in support of hardline actions of the USSR and has evolved to support what they call "actually existing socialist countries," e.g., CCP run China, the USSR, former soviet countries, and North Korea. We collect 1.3M posts from 53K authors from tankies subreddits, and explore the position of tankies within the broader far-left community on Reddit. Among other things, we find that tankies are clearly on the periphery of the larger far-left community. When examining the contents of posts, we find misalignments and conceptual homomorphisms that confirm the description of tankies in the theoretical work. We also discover that tankies focus more on state-level political events rather than social issues in comparison to other far-left communities. Finally, we show that tankies exhibit some of the same worrying behaviors as right-wing extremists, e.g., relatively high toxicity and an organized response to deplatforming events.
[ { "created": "Thu, 13 Jul 2023 15:05:59 GMT", "version": "v1" } ]
2023-07-17
[ [ "Balcı", "Utkucan", "" ], [ "Sirivianos", "Michael", "" ], [ "Blackburn", "Jeremy", "" ] ]
Social media's role in the spread and evolution of extremism is a focus of intense study. Online extremists have been involved in the spread of online hate, mis/disinformation, and real-world violence. However, the overwhelming majority of existing work has focused on right-wing extremism. In this paper, we perform a first of its kind large-scale, data-driven study exploring left-wing extremism. We focus on "tankies," a left-wing community that first arose in the 1950s in support of hardline actions of the USSR and has evolved to support what they call "actually existing socialist countries," e.g., CCP run China, the USSR, former soviet countries, and North Korea. We collect 1.3M posts from 53K authors from tankies subreddits, and explore the position of tankies within the broader far-left community on Reddit. Among other things, we find that tankies are clearly on the periphery of the larger far-left community. When examining the contents of posts, we find misalignments and conceptual homomorphisms that confirm the description of tankies in the theoretical work. We also discover that tankies focus more on state-level political events rather than social issues in comparison to other far-left communities. Finally, we show that tankies exhibit some of the same worrying behaviors as right-wing extremists, e.g., relatively high toxicity and an organized response to deplatforming events.
1909.07650
Georgios Amanatidis
Georgios Amanatidis, Apostolos Ntokos, Evangelos Markakis
Multiple Birds with One Stone: Beating $1/2$ for EFX and GMMS via Envy Cycle Elimination
null
Theor. Comput. Sci. 841: 94-109 (2020)
10.1016/j.tcs.2020.07.006
null
cs.GT cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a $(\phi-1)$-approximation of envy-freeness up to any good (EFX) and a $\frac{2}{\phi +2}$-approximation of groupwise maximin share fairness (GMMS), where $\phi$ is the golden ratio ($\phi \approx 1.618$). The best known approximation factor for either one of these fairness notions prior to this work was $1/2$. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a $2/3$-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve the guarantees for GMMS or PMMS. Finally, we show that GMMS -- and thus PMMS and EFX -- allocations always exist when the number of goods does not exceed the number of agents by more than two.
[ { "created": "Tue, 17 Sep 2019 08:46:46 GMT", "version": "v1" }, { "created": "Sat, 6 Mar 2021 02:33:37 GMT", "version": "v2" } ]
2021-03-09
[ [ "Amanatidis", "Georgios", "" ], [ "Ntokos", "Apostolos", "" ], [ "Markakis", "Evangelos", "" ] ]
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a $(\phi-1)$-approximation of envy-freeness up to any good (EFX) and a $\frac{2}{\phi +2}$-approximation of groupwise maximin share fairness (GMMS), where $\phi$ is the golden ratio ($\phi \approx 1.618$). The best known approximation factor for either one of these fairness notions prior to this work was $1/2$. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a $2/3$-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve the guarantees for GMMS or PMMS. Finally, we show that GMMS -- and thus PMMS and EFX -- allocations always exist when the number of goods does not exceed the number of agents by more than two.
2011.09684
Omar Sharif
Eftekhar Hossain, Omar Sharif, Mohammed Moshiul Hoque, Iqbal H. Sarker
SentiLSTM: A Deep Learning Approach for Sentiment Analysis of Restaurant Reviews
13 page, will appear in 20th International Conference on Hybrid Intelligent Systems (HIS 2020)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The amount of textual data generated has increased enormously due to the effortless access to the Internet and the evolution of various Web 2.0 applications. This textual data is produced as people express their opinions, emotions, or sentiments about products or services in the form of tweets, Facebook posts or statuses, blog write-ups, and reviews. Sentiment analysis deals with the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude toward a particular topic is positive, negative, or neutral. The impact of customer reviews is significant in perceiving customer attitudes towards a restaurant. Thus, the automatic detection of sentiment from reviews is advantageous for restaurant owners or service providers and customers in making their decisions or services more satisfactory. This paper proposes a deep learning-based technique (i.e., BiLSTM) to classify the reviews provided by the clients of a restaurant into positive and negative polarities. A corpus consisting of 8435 reviews is constructed to evaluate the proposed technique. In addition, a comparative analysis of the proposed technique with other machine learning algorithms is presented. The results of the evaluation on the test dataset show that the BiLSTM technique produced the highest accuracy of 91.35%.
[ { "created": "Thu, 19 Nov 2020 06:24:42 GMT", "version": "v1" } ]
2020-11-20
[ [ "Hossain", "Eftekhar", "" ], [ "Sharif", "Omar", "" ], [ "Hoque", "Mohammed Moshiul", "" ], [ "Sarker", "Iqbal H.", "" ] ]
The amount of textual data generated has increased enormously due to the effortless access to the Internet and the evolution of various Web 2.0 applications. This textual data is produced as people express their opinions, emotions, or sentiments about products or services in the form of tweets, Facebook posts or statuses, blog write-ups, and reviews. Sentiment analysis deals with the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude toward a particular topic is positive, negative, or neutral. The impact of customer reviews is significant in perceiving customer attitudes towards a restaurant. Thus, the automatic detection of sentiment from reviews is advantageous for restaurant owners or service providers and customers in making their decisions or services more satisfactory. This paper proposes a deep learning-based technique (i.e., BiLSTM) to classify the reviews provided by the clients of a restaurant into positive and negative polarities. A corpus consisting of 8435 reviews is constructed to evaluate the proposed technique. In addition, a comparative analysis of the proposed technique with other machine learning algorithms is presented. The results of the evaluation on the test dataset show that the BiLSTM technique produced the highest accuracy of 91.35%.
2202.00403
Quentin Possama\"i
Quentin Possama\"i, Steeven Janny, Guillaume Bono, Madiha Nadri, Laurent Bako and Christian Wolf
MoCap-less Quantitative Evaluation of Ego-Pose Estimation Without Ground Truth Measurements
7 pages, 6 figures, 1 table. Submitted to International Conference on Pattern Recognition. For associated videos: https://www.youtube.com/playlist?list=PLRsYEUUGzW54jqsfRdkNAYjZUnoEM4uhM
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of data-driven approaches for control and planning in robotics has highlighted the need for developing experimental robotic platforms for data collection. However, their implementation is often complex and expensive, in particular for flying and terrestrial robots where the precise estimation of the position requires motion capture devices (MoCap) or Lidar. In order to simplify the use of a robotic platform dedicated to research on a wide range of indoor and outdoor environments, we present a data validation tool for ego-pose estimation that does not require any equipment other than the on-board camera. The method and tool allow a rapid, visual and quantitative evaluation of the quality of ego-pose sensors and are sensitive to different sources of flaws in the acquisition chain, ranging from desynchronization of the sensor flows to misevaluation of the geometric parameters of the robotic platform. Using computer vision, the information from the sensors is used to calculate the motion of a semantic scene point through its projection to the 2D image space of the on-board camera. The deviations of these keypoints from references created with a semi-automatic tool allow rapid and simple quality assessment of the data collected on the platform. To demonstrate the performance of our method, we evaluate it on two challenging standard UAV datasets as well as one dataset taken from a terrestrial robot.
[ { "created": "Tue, 1 Feb 2022 13:35:16 GMT", "version": "v1" } ]
2022-02-02
[ [ "Possamaï", "Quentin", "" ], [ "Janny", "Steeven", "" ], [ "Bono", "Guillaume", "" ], [ "Nadri", "Madiha", "" ], [ "Bako", "Laurent", "" ], [ "Wolf", "Christian", "" ] ]
The emergence of data-driven approaches for control and planning in robotics has highlighted the need for developing experimental robotic platforms for data collection. However, their implementation is often complex and expensive, in particular for flying and terrestrial robots where the precise estimation of the position requires motion capture devices (MoCap) or Lidar. In order to simplify the use of a robotic platform dedicated to research on a wide range of indoor and outdoor environments, we present a data validation tool for ego-pose estimation that does not require any equipment other than the on-board camera. The method and tool allow a rapid, visual and quantitative evaluation of the quality of ego-pose sensors and are sensitive to different sources of flaws in the acquisition chain, ranging from desynchronization of the sensor flows to misevaluation of the geometric parameters of the robotic platform. Using computer vision, the information from the sensors is used to calculate the motion of a semantic scene point through its projection to the 2D image space of the on-board camera. The deviations of these keypoints from references created with a semi-automatic tool allow rapid and simple quality assessment of the data collected on the platform. To demonstrate the performance of our method, we evaluate it on two challenging standard UAV datasets as well as one dataset taken from a terrestrial robot.
2104.14146
David Schaller
Marc Hellmuth, David Schaller, Peter F. Stadler
Compatibility of Partitions with Trees, Hierarchies, and Split Systems
null
null
null
null
cs.DM math.CO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The question whether a partition $\mathcal{P}$ and a hierarchy $\mathcal{H}$ or a tree-like split system $\mathfrak{S}$ are compatible naturally arises in a wide range of classification problems. In the setting of phylogenetic trees, one asks whether the sets of $\mathcal{P}$ coincide with leaf sets of connected components obtained by deleting some edges from the tree $T$ that represents $\mathcal{H}$ or $\mathfrak{S}$, respectively. More generally, we ask whether a refinement $T^*$ of $T$ exists such that $T^*$ and $\mathcal{P}$ are compatible in this sense. The latter is closely related to the question as to whether there exists a tree at all that is compatible with $\mathcal{P}$. We report several characterizations for (refinements of) hierarchies and split systems that are compatible with (systems of) partitions. In addition, we provide a linear-time algorithm to check whether refinements of trees and a given partition are compatible. The latter problem becomes NP-complete but fixed-parameter tractable if a system of partitions is considered instead of a single partition. In this context, we also explore the close relationship of the concept of compatibility and so-called Fitch maps.
[ { "created": "Thu, 29 Apr 2021 06:55:35 GMT", "version": "v1" }, { "created": "Tue, 30 Nov 2021 15:42:40 GMT", "version": "v2" } ]
2021-12-01
[ [ "Hellmuth", "Marc", "" ], [ "Schaller", "David", "" ], [ "Stadler", "Peter F.", "" ] ]
The question whether a partition $\mathcal{P}$ and a hierarchy $\mathcal{H}$ or a tree-like split system $\mathfrak{S}$ are compatible naturally arises in a wide range of classification problems. In the setting of phylogenetic trees, one asks whether the sets of $\mathcal{P}$ coincide with leaf sets of connected components obtained by deleting some edges from the tree $T$ that represents $\mathcal{H}$ or $\mathfrak{S}$, respectively. More generally, we ask whether a refinement $T^*$ of $T$ exists such that $T^*$ and $\mathcal{P}$ are compatible in this sense. The latter is closely related to the question as to whether there exists a tree at all that is compatible with $\mathcal{P}$. We report several characterizations for (refinements of) hierarchies and split systems that are compatible with (systems of) partitions. In addition, we provide a linear-time algorithm to check whether refinements of trees and a given partition are compatible. The latter problem becomes NP-complete but fixed-parameter tractable if a system of partitions is considered instead of a single partition. In this context, we also explore the close relationship of the concept of compatibility and so-called Fitch maps.
1502.02362
Adith Swaminathan
Adith Swaminathan and Thorsten Joachims
Counterfactual Risk Minimization: Learning from Logged Bandit Feedback
10 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. These constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method -- called Policy Optimizer for Exponential Models (POEM) -- for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. POEM is evaluated on several multi-label classification problems showing substantially improved robustness and generalization performance compared to the state-of-the-art.
[ { "created": "Mon, 9 Feb 2015 05:09:25 GMT", "version": "v1" }, { "created": "Wed, 20 May 2015 23:29:49 GMT", "version": "v2" } ]
2015-05-22
[ [ "Swaminathan", "Adith", "" ], [ "Joachims", "Thorsten", "" ] ]
We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. These constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method -- called Policy Optimizer for Exponential Models (POEM) -- for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. POEM is evaluated on several multi-label classification problems showing substantially improved robustness and generalization performance compared to the state-of-the-art.
2109.02032
Zhenhui Ye
Zhenhui Ye, Xiaohong Jiang, Guanghua Song, Bowei Yang
Soft Hierarchical Graph Recurrent Networks for Many-Agent Partially Observable Environments
9 pages, 6 figures, 1 table. Under review
null
null
null
cs.LG cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent progress in multi-agent deep reinforcement learning (MADRL) makes it more practical in real-world tasks, but its relatively poor scalability and the partially observable constraints raise challenges to its performance and deployment. Based on our intuitive observation that human society can be regarded as a large-scale partially observable environment, where each individual has the function of communicating with neighbors and remembering its own experience, we propose a novel network structure called hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability. Specifically, we construct the multi-agent system as a graph, use the hierarchical graph attention network (HGAT) to achieve communication between neighboring agents, and exploit GRU to enable agents to record historical information. To encourage exploration and improve robustness, we design a maximum-entropy learning method to learn stochastic policies of a configurable target action entropy. Based on the above technologies, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant named SAC-HRGN. Experimental results based on three homogeneous tasks and one heterogeneous environment not only show that our approach achieves clear improvements compared with four baselines, but also demonstrate the interpretability, scalability, and transferability of the proposed model. Ablation studies prove the function and necessity of each component.
[ { "created": "Sun, 5 Sep 2021 09:51:25 GMT", "version": "v1" } ]
2021-09-07
[ [ "Ye", "Zhenhui", "" ], [ "Jiang", "Xiaohong", "" ], [ "Song", "Guanghua", "" ], [ "Yang", "Bowei", "" ] ]
The recent progress in multi-agent deep reinforcement learning (MADRL) makes it more practical in real-world tasks, but its relatively poor scalability and the partially observable constraints raise challenges to its performance and deployment. Based on our intuitive observation that human society can be regarded as a large-scale partially observable environment, where each individual has the function of communicating with neighbors and remembering its own experience, we propose a novel network structure called hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability. Specifically, we construct the multi-agent system as a graph, use the hierarchical graph attention network (HGAT) to achieve communication between neighboring agents, and exploit GRU to enable agents to record historical information. To encourage exploration and improve robustness, we design a maximum-entropy learning method to learn stochastic policies of a configurable target action entropy. Based on the above technologies, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant named SAC-HRGN. Experimental results based on three homogeneous tasks and one heterogeneous environment not only show that our approach achieves clear improvements compared with four baselines, but also demonstrate the interpretability, scalability, and transferability of the proposed model. Ablation studies prove the function and necessity of each component.
2106.06002
Austin Blodgett
Austin Blodgett and Nathan Schneider
Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments
ACL 2021 Camera-ready
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences. We leverage unsupervised learning in combination with heuristics, taking the best of both worlds from previous AMR aligners. Our unsupervised models, however, are more sensitive to graph substructures, without requiring a separate syntactic parse. Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy. We will release our LEAMR datasets and aligner for use in research on AMR parsing, generation, and evaluation.
[ { "created": "Thu, 10 Jun 2021 18:46:32 GMT", "version": "v1" } ]
2021-06-14
[ [ "Blodgett", "Austin", "" ], [ "Schneider", "Nathan", "" ] ]
We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences. We leverage unsupervised learning in combination with heuristics, taking the best of both worlds from previous AMR aligners. Our unsupervised models, however, are more sensitive to graph substructures, without requiring a separate syntactic parse. Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy. We will release our LEAMR datasets and aligner for use in research on AMR parsing, generation, and evaluation.
2112.01970
Tomoyoshi Shimobaba Dr.
Yoshiyuki Ishii, Tomoyoshi Shimobaba, David Blinder, Tobias Birnbaum, Peter Schelkens, Takashi Kakue, Tomoyoshi Ito
Optimization of phase-only holograms calculated with scaled diffraction calculation through deep neural networks
null
null
10.1007/s00340-022-07753-7
null
cs.CV cs.GR physics.optics
http://creativecommons.org/licenses/by/4.0/
Computer-generated holograms (CGHs) are used in holographic three-dimensional (3D) displays and holographic projections. The quality of the reconstructed images using phase-only CGHs is degraded because the amplitude of the reconstructed image is difficult to control. Iterative optimization methods such as the Gerchberg-Saxton (GS) algorithm are one option for improving image quality. They optimize CGHs in an iterative fashion to obtain a higher image quality. However, such iterative computation is time consuming, and the improvement in image quality is often stagnant. Recently, deep learning-based hologram computation has been proposed. Deep neural networks directly infer CGHs from input image data. However, it is limited to reconstructing images that are the same size as the hologram. In this study, we use deep learning to optimize phase-only CGHs generated using scaled diffraction computations and the random phase-free method. By combining the random phase-free method with the scaled diffraction computation, it is possible to handle a zoomable reconstructed image larger than the hologram. In comparison to the GS algorithm, the proposed method optimizes both high quality and speed.
[ { "created": "Thu, 2 Dec 2021 00:14:11 GMT", "version": "v1" } ]
2022-02-02
[ [ "Ishii", "Yoshiyuki", "" ], [ "Shimobaba", "Tomoyoshi", "" ], [ "Blinder", "David", "" ], [ "Birnbaum", "Tobias", "" ], [ "Schelkens", "Peter", "" ], [ "Kakue", "Takashi", "" ], [ "Ito", "Tomoyoshi", "" ] ]
Computer-generated holograms (CGHs) are used in holographic three-dimensional (3D) displays and holographic projections. The quality of the reconstructed images using phase-only CGHs is degraded because the amplitude of the reconstructed image is difficult to control. Iterative optimization methods such as the Gerchberg-Saxton (GS) algorithm are one option for improving image quality. They optimize CGHs in an iterative fashion to obtain a higher image quality. However, such iterative computation is time consuming, and the improvement in image quality is often stagnant. Recently, deep learning-based hologram computation has been proposed. Deep neural networks directly infer CGHs from input image data. However, it is limited to reconstructing images that are the same size as the hologram. In this study, we use deep learning to optimize phase-only CGHs generated using scaled diffraction computations and the random phase-free method. By combining the random phase-free method with the scaled diffraction computation, it is possible to handle a zoomable reconstructed image larger than the hologram. In comparison to the GS algorithm, the proposed method optimizes both high quality and speed.
1901.00324
Torgeir Dings{\o}yr
Torgeir Dings{\o}yr, Davide Falessi, Ken Power
Agile Development at Scale: The Next Frontier
introduction to special issue on large-scale agile development
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Agile methods have transformed the way software is developed, emphasizing active end-user involvement, tolerance to change, and evolutionary delivery of products. The first special issue on agile development described the methods as focusing on "feedback and change". These methods have led to major changes in how software is developed. Scrum is now the most common framework for development in most countries, and other methods like extreme programming (XP) and elements of lean software development and Kanban are widely used. What started as a bottom-up movement amongst software practitioners and consultants has been taken up by major international consulting companies who prescribe agile development, particularly for contexts where learning and innovation are key. Agile development methods have attracted interest primarily in software engineering, but also in a number of other disciplines including information systems and project management. The agile software development methods were originally targeted towards small, co-located development teams, but are increasingly applied in other contexts. They were initially used to develop Web systems and internal IT systems, but are now used in a range of domains, including mission-critical systems. Methods that were designed for single teams of 5-9 developers have been adapted for use in projects with tens of teams, hundreds of developers, which can involve integration with hundreds of existing systems and affect hundreds of thousands of users.
[ { "created": "Wed, 2 Jan 2019 11:21:29 GMT", "version": "v1" } ]
2019-01-03
[ [ "Dingsøyr", "Torgeir", "" ], [ "Falessi", "Davide", "" ], [ "Power", "Ken", "" ] ]
Agile methods have transformed the way software is developed, emphasizing active end-user involvement, tolerance to change, and evolutionary delivery of products. The first special issue on agile development described the methods as focusing on "feedback and change". These methods have led to major changes in how software is developed. Scrum is now the most common framework for development in most countries, and other methods like extreme programming (XP) and elements of lean software development and Kanban are widely used. What started as a bottom-up movement amongst software practitioners and consultants has been taken up by major international consulting companies who prescribe agile development, particularly for contexts where learning and innovation are key. Agile development methods have attracted interest primarily in software engineering, but also in a number of other disciplines including information systems and project management. The agile software development methods were originally targeted towards small, co-located development teams, but are increasingly applied in other contexts. They were initially used to develop Web systems and internal IT systems, but are now used in a range of domains, including mission-critical systems. Methods that were designed for single teams of 5-9 developers have been adapted for use in projects with tens of teams, hundreds of developers, which can involve integration with hundreds of existing systems and affect hundreds of thousands of users.
2404.11100
Qiyu Hou
Qiyu Hou, Jun Wang, Meixuan Qiao, Lujun Tian
Synthesizing Realistic Data for Table Recognition
ICDAR 2024
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To overcome the limitations and challenges of current automatic table data annotation methods and random table data synthesis approaches, we propose a novel method for synthesizing annotation data specifically designed for table recognition. This method utilizes the structure and content of existing complex tables, facilitating the efficient creation of tables that closely replicate the authentic styles found in the target domain. By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain. We used this dataset to train several recent deep learning-based end-to-end table recognition models. Additionally, we have established the inaugural benchmark for real-world complex tables in the Chinese financial announcement domain, using it to assess the performance of models trained on our synthetic data, thereby effectively validating our method's practicality and effectiveness. Furthermore, we applied our synthesis method to augment the FinTabNet dataset, extracted from English financial announcements, by increasing the proportion of tables with multiple spanning cells to introduce greater complexity. Our experiments show that models trained on this augmented dataset achieve comprehensive improvements in performance, especially in the recognition of tables with multiple spanning cells.
[ { "created": "Wed, 17 Apr 2024 06:36:17 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2024 12:09:32 GMT", "version": "v2" } ]
2024-07-10
[ [ "Hou", "Qiyu", "" ], [ "Wang", "Jun", "" ], [ "Qiao", "Meixuan", "" ], [ "Tian", "Lujun", "" ] ]
To overcome the limitations and challenges of current automatic table data annotation methods and random table data synthesis approaches, we propose a novel method for synthesizing annotation data specifically designed for table recognition. This method utilizes the structure and content of existing complex tables, facilitating the efficient creation of tables that closely replicate the authentic styles found in the target domain. By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain. We used this dataset to train several recent deep learning-based end-to-end table recognition models. Additionally, we have established the inaugural benchmark for real-world complex tables in the Chinese financial announcement domain, using it to assess the performance of models trained on our synthetic data, thereby effectively validating our method's practicality and effectiveness. Furthermore, we applied our synthesis method to augment the FinTabNet dataset, extracted from English financial announcements, by increasing the proportion of tables with multiple spanning cells to introduce greater complexity. Our experiments show that models trained on this augmented dataset achieve comprehensive improvements in performance, especially in the recognition of tables with multiple spanning cells.
2305.14342
Hong Liu
Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
null
null
null
null
cs.LG cs.CL math.OC
http://creativecommons.org/licenses/by/4.0/
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction on the time and cost of training. Adam and its variants have been state-of-the-art for years, and more sophisticated second-order (Hessian-based) optimizers often incur too much per-step overhead. In this paper, we propose Sophia, Second-order Clipped Stochastic Optimization, a simple scalable second-order optimizer that uses a light-weight estimate of the diagonal Hessian as the pre-conditioner. The update is the moving average of the gradients divided by the moving average of the estimated Hessian, followed by element-wise clipping. The clipping controls the worst-case update size and tames the negative impact of non-convexity and rapid change of Hessian along the trajectory. Sophia only estimates the diagonal Hessian every handful of iterations, which has negligible average per-step time and memory overhead. On language modeling with GPT models of sizes ranging from 125M to 1.5B, Sophia achieves a 2x speed-up compared to Adam in the number of steps, total compute, and wall-clock time, achieving the same perplexity with 50% fewer steps, less total compute, and reduced wall-clock time. Theoretically, we show that Sophia, in a much simplified setting, adapts to the heterogeneous curvatures in different parameter dimensions, and thus has a run-time bound that does not depend on the condition number of the loss.
[ { "created": "Tue, 23 May 2023 17:59:21 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 19:54:09 GMT", "version": "v2" }, { "created": "Tue, 17 Oct 2023 07:44:16 GMT", "version": "v3" }, { "created": "Tue, 5 Mar 2024 17:07:16 GMT", "version": "v4" } ]
2024-03-06
[ [ "Liu", "Hong", "" ], [ "Li", "Zhiyuan", "" ], [ "Hall", "David", "" ], [ "Liang", "Percy", "" ], [ "Ma", "Tengyu", "" ] ]
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction on the time and cost of training. Adam and its variants have been state-of-the-art for years, and more sophisticated second-order (Hessian-based) optimizers often incur too much per-step overhead. In this paper, we propose Sophia, Second-order Clipped Stochastic Optimization, a simple scalable second-order optimizer that uses a light-weight estimate of the diagonal Hessian as the pre-conditioner. The update is the moving average of the gradients divided by the moving average of the estimated Hessian, followed by element-wise clipping. The clipping controls the worst-case update size and tames the negative impact of non-convexity and rapid change of Hessian along the trajectory. Sophia only estimates the diagonal Hessian every handful of iterations, which has negligible average per-step time and memory overhead. On language modeling with GPT models of sizes ranging from 125M to 1.5B, Sophia achieves a 2x speed-up compared to Adam in the number of steps, total compute, and wall-clock time, achieving the same perplexity with 50% fewer steps, less total compute, and reduced wall-clock time. Theoretically, we show that Sophia, in a much simplified setting, adapts to the heterogeneous curvatures in different parameter dimensions, and thus has a run-time bound that does not depend on the condition number of the loss.
1908.07123
Siqi Wu
Siqi Wu, Marian-Andrei Rizoiu, Lexing Xie
Estimating Attention Flow in Online Video Networks
CSCW 2019, the code and datasets are publicly available at https://github.com/avalanchesiqi/networked-popularity
null
null
null
cs.SI cs.HC cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online videos have shown a tremendous increase in Internet traffic. Most video hosting sites implement recommender systems, which connect the videos into a directed network and conceptually act as a source of pathways for users to navigate. At present, little is known about how human attention is allocated over such large-scale networks, and about the impacts of the recommender systems. In this paper, we first construct the Vevo network -- a YouTube video network with 60,740 music videos interconnected by the recommendation links, and we collect their associated viewing dynamics. This results in a total of 310 million views every day over a period of 9 weeks. Next, we present large-scale measurements that connect the structure of the recommendation network and the video attention dynamics. We use the bow-tie structure to characterize the Vevo network and we find that its core component (23.1% of the videos), which occupies most of the attention (82.6% of the views), is made out of videos that are mainly recommended among themselves. This is indicative of the links between video recommendation and the inequality of attention allocation. Finally, we address the task of estimating the attention flow in the video recommendation network. We propose a model that accounts for the network effects for predicting video popularity, and we show it consistently outperforms the baselines. This model also identifies a group of artists gaining attention because of the recommendation network. Altogether, our observations and our models provide a new set of tools to better understand the impacts of recommender systems on collective social attention.
[ { "created": "Tue, 20 Aug 2019 01:37:26 GMT", "version": "v1" }, { "created": "Fri, 23 Aug 2019 08:47:27 GMT", "version": "v2" }, { "created": "Fri, 20 Mar 2020 04:55:02 GMT", "version": "v3" } ]
2020-03-23
[ [ "Wu", "Siqi", "" ], [ "Rizoiu", "Marian-Andrei", "" ], [ "Xie", "Lexing", "" ] ]
Online videos have shown a tremendous increase in Internet traffic. Most video hosting sites implement recommender systems, which connect the videos into a directed network and conceptually act as a source of pathways for users to navigate. At present, little is known about how human attention is allocated over such large-scale networks, and about the impacts of the recommender systems. In this paper, we first construct the Vevo network -- a YouTube video network with 60,740 music videos interconnected by the recommendation links, and we collect their associated viewing dynamics. This results in a total of 310 million views every day over a period of 9 weeks. Next, we present large-scale measurements that connect the structure of the recommendation network and the video attention dynamics. We use the bow-tie structure to characterize the Vevo network and we find that its core component (23.1% of the videos), which occupies most of the attention (82.6% of the views), is made out of videos that are mainly recommended among themselves. This is indicative of the links between video recommendation and the inequality of attention allocation. Finally, we address the task of estimating the attention flow in the video recommendation network. We propose a model that accounts for the network effects for predicting video popularity, and we show it consistently outperforms the baselines. This model also identifies a group of artists gaining attention because of the recommendation network. Altogether, our observations and our models provide a new set of tools to better understand the impacts of recommender systems on collective social attention.
2402.05264
Dmitry Kamzolov
Petr Ostroukhov, Aigerim Zhumabayeva, Chulu Xiang, Alexander Gasnikov, Martin Tak\'a\v{c}, Dmitry Kamzolov
AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size
null
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel adaptation of the Stochastic Gradient Descent (SGD), termed AdaBatchGrad. This modification seamlessly integrates an adaptive step size with an adjustable batch size. An increase in batch size and a decrease in step size are well-known techniques to tighten the area of convergence of SGD and decrease its variance. A range of studies by R. Byrd and J. Nocedal introduced various testing techniques to assess the quality of mini-batch gradient approximations and choose the appropriate batch sizes at every step. Methods that utilized exact tests were observed to converge within $O(LR^2/\varepsilon)$ iterations. Conversely, inexact test implementations sometimes resulted in non-convergence and erratic performance. To address these challenges, AdaBatchGrad incorporates both adaptive batch and step sizes, enhancing the method's robustness and stability. For exact tests, our approach converges in $O(LR^2/\varepsilon)$ iterations, analogous to standard gradient descent. For inexact tests, it achieves convergence in $O(\max\lbrace LR^2/\varepsilon, \sigma^2 R^2/\varepsilon^2 \rbrace )$ iterations. This makes AdaBatchGrad markedly more robust and computationally efficient relative to prevailing methods. To substantiate the efficacy of our method, we experimentally show how the introduction of adaptive step size and adaptive batch size gradually improves the performance of regular SGD. The results imply that AdaBatchGrad surpasses alternative methods, especially when applied to inexact tests.
[ { "created": "Wed, 7 Feb 2024 21:19:05 GMT", "version": "v1" } ]
2024-02-09
[ [ "Ostroukhov", "Petr", "" ], [ "Zhumabayeva", "Aigerim", "" ], [ "Xiang", "Chulu", "" ], [ "Gasnikov", "Alexander", "" ], [ "Takáč", "Martin", "" ], [ "Kamzolov", "Dmitry", "" ] ]
This paper presents a novel adaptation of the Stochastic Gradient Descent (SGD), termed AdaBatchGrad. This modification seamlessly integrates an adaptive step size with an adjustable batch size. An increase in batch size and a decrease in step size are well-known techniques to tighten the area of convergence of SGD and decrease its variance. A range of studies by R. Byrd and J. Nocedal introduced various testing techniques to assess the quality of mini-batch gradient approximations and choose the appropriate batch sizes at every step. Methods that utilized exact tests were observed to converge within $O(LR^2/\varepsilon)$ iterations. Conversely, inexact test implementations sometimes resulted in non-convergence and erratic performance. To address these challenges, AdaBatchGrad incorporates both adaptive batch and step sizes, enhancing the method's robustness and stability. For exact tests, our approach converges in $O(LR^2/\varepsilon)$ iterations, analogous to standard gradient descent. For inexact tests, it achieves convergence in $O(\max\lbrace LR^2/\varepsilon, \sigma^2 R^2/\varepsilon^2 \rbrace )$ iterations. This makes AdaBatchGrad markedly more robust and computationally efficient relative to prevailing methods. To substantiate the efficacy of our method, we experimentally show how the introduction of adaptive step size and adaptive batch size gradually improves the performance of regular SGD. The results imply that AdaBatchGrad surpasses alternative methods, especially when applied to inexact tests.
2402.03311
Shengcao Cao
Shengcao Cao, Dhiraj Joshi, Liang-Yan Gui, Yu-Xiong Wang
HASSOD: Hierarchical Adaptive Self-Supervised Object Detection
NeurIPS 2023
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The human visual perception system demonstrates exceptional capabilities in learning without explicit supervision and understanding the part-to-whole composition of objects. Drawing inspiration from these two abilities, we propose Hierarchical Adaptive Self-Supervised Object Detection (HASSOD), a novel approach that learns to detect objects and understand their compositions without human supervision. HASSOD employs a hierarchical adaptive clustering strategy to group regions into object masks based on self-supervised visual representations, adaptively determining the number of objects per image. Furthermore, HASSOD identifies the hierarchical levels of objects in terms of composition, by analyzing coverage relations between masks and constructing tree structures. This additional self-supervised learning task leads to improved detection performance and enhanced interpretability. Lastly, we abandon the inefficient multi-round self-training process utilized in prior methods and instead adapt the Mean Teacher framework from semi-supervised learning, which leads to a smoother and more efficient training process. Through extensive experiments on prevalent image datasets, we demonstrate the superiority of HASSOD over existing methods, thereby advancing the state of the art in self-supervised object detection. Notably, we improve Mask AR from 20.2 to 22.5 on LVIS, and from 17.0 to 26.0 on SA-1B. Project page: https://HASSOD-NeurIPS23.github.io.
[ { "created": "Mon, 5 Feb 2024 18:59:41 GMT", "version": "v1" } ]
2024-02-06
[ [ "Cao", "Shengcao", "" ], [ "Joshi", "Dhiraj", "" ], [ "Gui", "Liang-Yan", "" ], [ "Wang", "Yu-Xiong", "" ] ]
The human visual perception system demonstrates exceptional capabilities in learning without explicit supervision and understanding the part-to-whole composition of objects. Drawing inspiration from these two abilities, we propose Hierarchical Adaptive Self-Supervised Object Detection (HASSOD), a novel approach that learns to detect objects and understand their compositions without human supervision. HASSOD employs a hierarchical adaptive clustering strategy to group regions into object masks based on self-supervised visual representations, adaptively determining the number of objects per image. Furthermore, HASSOD identifies the hierarchical levels of objects in terms of composition, by analyzing coverage relations between masks and constructing tree structures. This additional self-supervised learning task leads to improved detection performance and enhanced interpretability. Lastly, we abandon the inefficient multi-round self-training process utilized in prior methods and instead adapt the Mean Teacher framework from semi-supervised learning, which leads to a smoother and more efficient training process. Through extensive experiments on prevalent image datasets, we demonstrate the superiority of HASSOD over existing methods, thereby advancing the state of the art in self-supervised object detection. Notably, we improve Mask AR from 20.2 to 22.5 on LVIS, and from 17.0 to 26.0 on SA-1B. Project page: https://HASSOD-NeurIPS23.github.io.
2405.00358
Zhiyu Fang
Zhiyu Fang, Jingyan Qin, Xiaobin Zhu, Chun Yang, Xu-Cheng Yin
Arbitrary Time Information Modeling via Polynomial Approximation for Temporal Knowledge Graph Embedding
Accepted by LREC-COLING 2024 (long paper, camera-ready version)
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distinguished from traditional knowledge graphs (KGs), temporal knowledge graphs (TKGs) must explore and reason over temporally evolving facts adequately. However, existing TKG approaches still face two main challenges, i.e., the limited capability to model arbitrary timestamps continuously and the lack of rich inference patterns under temporal constraints. In this paper, we propose an innovative TKGE method (PTBox) via polynomial decomposition-based temporal representation and box embedding-based entity representation to tackle the above-mentioned problems. Specifically, we decompose time information by polynomials and then enhance the model's capability to represent arbitrary timestamps flexibly by incorporating the learnable temporal basis tensor. In addition, we model every entity as a hyperrectangle box and define each relation as a transformation on the head and tail entity boxes. The entity boxes can capture complex geometric structures and learn robust representations, improving the model's inductive capability for rich inference patterns. Theoretically, our PTBox can encode arbitrary time information or even unseen timestamps while capturing rich inference patterns and higher-arity relations of the knowledge base. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
[ { "created": "Wed, 1 May 2024 07:27:04 GMT", "version": "v1" } ]
2024-05-02
[ [ "Fang", "Zhiyu", "" ], [ "Qin", "Jingyan", "" ], [ "Zhu", "Xiaobin", "" ], [ "Yang", "Chun", "" ], [ "Yin", "Xu-Cheng", "" ] ]
Distinguished from traditional knowledge graphs (KGs), temporal knowledge graphs (TKGs) must explore and reason over temporally evolving facts adequately. However, existing TKG approaches still face two main challenges, i.e., the limited capability to model arbitrary timestamps continuously and the lack of rich inference patterns under temporal constraints. In this paper, we propose an innovative TKGE method (PTBox) via polynomial decomposition-based temporal representation and box embedding-based entity representation to tackle the above-mentioned problems. Specifically, we decompose time information by polynomials and then enhance the model's capability to represent arbitrary timestamps flexibly by incorporating the learnable temporal basis tensor. In addition, we model every entity as a hyperrectangle box and define each relation as a transformation on the head and tail entity boxes. The entity boxes can capture complex geometric structures and learn robust representations, improving the model's inductive capability for rich inference patterns. Theoretically, our PTBox can encode arbitrary time information or even unseen timestamps while capturing rich inference patterns and higher-arity relations of the knowledge base. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
2202.04966
Lorenzo Vaquero
Lorenzo Vaquero, V\'ictor M. Brea, Manuel Mucientes
Real-Time Siamese Multiple Object Tracker with Enhanced Proposals
Accepted at Pattern Recognition. Code available at https://github.com/lorenzovaquero/SiamMOTION
null
10.1016/j.patcog.2022.109141
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Maintaining the identity of multiple objects in real-time video is a challenging task, as it is not always feasible to run a detector on every frame. Thus, motion estimation systems are often employed, which either do not scale well with the number of targets or produce features with limited semantic information. To solve the aforementioned problems and allow the tracking of dozens of arbitrary objects in real-time, we propose SiamMOTION. SiamMOTION includes a novel proposal engine that produces quality features through an attention mechanism and a region-of-interest extractor fed by an inertia module and powered by a feature pyramid network. Finally, the extracted tensors enter a comparison head that efficiently matches pairs of exemplars and search areas, generating quality predictions via a pairwise depthwise region proposal network and a multi-object penalization module. SiamMOTION has been validated on five public benchmarks, achieving leading performance against current state-of-the-art trackers. Code available at: https://github.com/lorenzovaquero/SiamMOTION
[ { "created": "Thu, 10 Feb 2022 11:41:27 GMT", "version": "v1" }, { "created": "Tue, 8 Nov 2022 10:33:32 GMT", "version": "v2" } ]
2022-11-09
[ [ "Vaquero", "Lorenzo", "" ], [ "Brea", "Víctor M.", "" ], [ "Mucientes", "Manuel", "" ] ]
Maintaining the identity of multiple objects in real-time video is a challenging task, as it is not always feasible to run a detector on every frame. Thus, motion estimation systems are often employed, which either do not scale well with the number of targets or produce features with limited semantic information. To solve the aforementioned problems and allow the tracking of dozens of arbitrary objects in real-time, we propose SiamMOTION. SiamMOTION includes a novel proposal engine that produces quality features through an attention mechanism and a region-of-interest extractor fed by an inertia module and powered by a feature pyramid network. Finally, the extracted tensors enter a comparison head that efficiently matches pairs of exemplars and search areas, generating quality predictions via a pairwise depthwise region proposal network and a multi-object penalization module. SiamMOTION has been validated on five public benchmarks, achieving leading performance against current state-of-the-art trackers. Code available at: https://github.com/lorenzovaquero/SiamMOTION
2403.16527
Neeloy Chakraborty
Neeloy Chakraborty and Melkior Ornik and Katherine Driggs-Campbell
Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art
31 pages, 2 tables
null
null
null
cs.AI cs.CL cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous systems are soon to be ubiquitous, from manufacturing autonomy to agricultural field robots, and from health care assistants to the entertainment industry. The majority of these systems are developed with modular sub-components for decision-making, planning, and control that may be hand-engineered or learning-based. While these existing approaches have been shown to perform well under the situations they were specifically designed for, they can perform especially poorly in rare, out-of-distribution scenarios that will undoubtedly arise at test-time. The rise of foundation models trained on multiple tasks with impressively large datasets from a variety of fields has led researchers to believe that these models may provide common sense reasoning that existing planners are missing. Researchers posit that this common sense reasoning will bridge the gap between algorithm development and deployment to out-of-distribution tasks, like how humans adapt to unexpected scenarios. Large language models have already penetrated the robotics and autonomous systems domains as researchers are scrambling to showcase their potential use cases in deployment. While this application direction is very promising empirically, foundation models are known to hallucinate and generate decisions that may sound reasonable, but are in fact poor. We argue there is a need to step back and simultaneously design systems that can quantify the certainty of a model's decision, and detect when it may be hallucinating. In this work, we discuss the current use cases of foundation models for decision-making tasks, provide a general definition for hallucinations with examples, discuss existing approaches to hallucination detection and mitigation with a focus on decision problems, and explore areas for further research in this exciting field.
[ { "created": "Mon, 25 Mar 2024 08:11:02 GMT", "version": "v1" } ]
2024-03-26
[ [ "Chakraborty", "Neeloy", "" ], [ "Ornik", "Melkior", "" ], [ "Driggs-Campbell", "Katherine", "" ] ]
Autonomous systems are soon to be ubiquitous, from manufacturing autonomy to agricultural field robots, and from health care assistants to the entertainment industry. The majority of these systems are developed with modular sub-components for decision-making, planning, and control that may be hand-engineered or learning-based. While these existing approaches have been shown to perform well under the situations they were specifically designed for, they can perform especially poorly in rare, out-of-distribution scenarios that will undoubtedly arise at test-time. The rise of foundation models trained on multiple tasks with impressively large datasets from a variety of fields has led researchers to believe that these models may provide common sense reasoning that existing planners are missing. Researchers posit that this common sense reasoning will bridge the gap between algorithm development and deployment to out-of-distribution tasks, like how humans adapt to unexpected scenarios. Large language models have already penetrated the robotics and autonomous systems domains as researchers are scrambling to showcase their potential use cases in deployment. While this application direction is very promising empirically, foundation models are known to hallucinate and generate decisions that may sound reasonable, but are in fact poor. We argue there is a need to step back and simultaneously design systems that can quantify the certainty of a model's decision, and detect when it may be hallucinating. In this work, we discuss the current use cases of foundation models for decision-making tasks, provide a general definition for hallucinations with examples, discuss existing approaches to hallucination detection and mitigation with a focus on decision problems, and explore areas for further research in this exciting field.
1301.1218
Matteo Riondato
Matteo Riondato and Fabio Vandin
Finding the True Frequent Itemsets
13 pages, Extended version of work appeared in SIAM International Conference on Data Mining, 2014
null
null
null
cs.LG cs.DB cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It requires to identify all itemsets appearing in at least a fraction $\theta$ of a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the understanding of the underlying process that generated it. Specifically, in many applications $\mathcal{D}$ is a collection of samples obtained from an unknown probability distribution $\pi$ on transactions, and by extracting the FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e., with probability at least $\theta$) generated by $\pi$, which we call the True Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the generative process, the set of FIs is only a rough approximation of the set of TFIs, as it often contains a huge number of \emph{false positives}, i.e., spurious itemsets that are not among the TFIs. In this work we design and analyze an algorithm to identify a threshold $\hat{\theta}$ such that the collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$ contains only TFIs with probability at least $1-\delta$, for some user-specified $\delta$. Our method uses results from statistical learning theory involving the (empirical) VC-dimension of the problem at hand. This allows us to identify almost all the TFIs without including any false positive. We also experimentally compare our method with the direct mining of $\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and show that our algorithm outperforms these methods and achieves even better results than what is guaranteed by the theoretical analysis.
[ { "created": "Mon, 7 Jan 2013 15:04:43 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2013 12:54:12 GMT", "version": "v2" }, { "created": "Wed, 22 Jan 2014 16:38:44 GMT", "version": "v3" } ]
2014-01-23
[ [ "Riondato", "Matteo", "" ], [ "Vandin", "Fabio", "" ] ]
Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It requires to identify all itemsets appearing in at least a fraction $\theta$ of a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the understanding of the underlying process that generated it. Specifically, in many applications $\mathcal{D}$ is a collection of samples obtained from an unknown probability distribution $\pi$ on transactions, and by extracting the FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e., with probability at least $\theta$) generated by $\pi$, which we call the True Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the generative process, the set of FIs is only a rough approximation of the set of TFIs, as it often contains a huge number of \emph{false positives}, i.e., spurious itemsets that are not among the TFIs. In this work we design and analyze an algorithm to identify a threshold $\hat{\theta}$ such that the collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$ contains only TFIs with probability at least $1-\delta$, for some user-specified $\delta$. Our method uses results from statistical learning theory involving the (empirical) VC-dimension of the problem at hand. This allows us to identify almost all the TFIs without including any false positive. We also experimentally compare our method with the direct mining of $\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and show that our algorithm outperforms these methods and achieves even better results than what is guaranteed by the theoretical analysis.
2307.11780
Jiang Wu
Jiang Wu, Dongyu Liu, Ziyang Guo, Yingcai Wu
Beep: Balancing Effectiveness and Efficiency when Finding Multivariate Patterns in Racket Sports
This work is accepted in the KDD 2023 Workshop on Data Science and AI for Sports
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Modeling each hit as a multivariate event in racket sports and conducting sequential analysis aids in assessing player/team performance and identifying successful tactics for coaches and analysts. However, the complex correlations among multiple event attributes require pattern mining algorithms to be highly effective and efficient. This paper proposes Beep to discover meaningful multivariate patterns in racket sports. In particular, Beep introduces a new encoding scheme to discover patterns with correlations among multiple attributes and high-level tolerances of noise. Moreover, Beep applies an algorithm based on LSH (Locality-Sensitive Hashing) to accelerate summarizing patterns. We conducted a case study on a table tennis dataset and quantitative experiments on multi-scaled synthetic datasets to compare Beep with the SOTA multivariate pattern mining algorithm. Results showed that Beep can effectively discover patterns and noise to help analysts gain insights. Moreover, Beep was about five times faster than the SOTA algorithm.
[ { "created": "Thu, 20 Jul 2023 04:55:57 GMT", "version": "v1" }, { "created": "Wed, 26 Jul 2023 04:16:05 GMT", "version": "v2" } ]
2023-07-27
[ [ "Wu", "Jiang", "" ], [ "Liu", "Dongyu", "" ], [ "Guo", "Ziyang", "" ], [ "Wu", "Yingcai", "" ] ]
Modeling each hit as a multivariate event in racket sports and conducting sequential analysis aids in assessing player/team performance and identifying successful tactics for coaches and analysts. However, the complex correlations among multiple event attributes require pattern mining algorithms to be highly effective and efficient. This paper proposes Beep to discover meaningful multivariate patterns in racket sports. In particular, Beep introduces a new encoding scheme to discover patterns with correlations among multiple attributes and high-level tolerances of noise. Moreover, Beep applies an algorithm based on LSH (Locality-Sensitive Hashing) to accelerate summarizing patterns. We conducted a case study on a table tennis dataset and quantitative experiments on multi-scaled synthetic datasets to compare Beep with the SOTA multivariate pattern mining algorithm. Results showed that Beep can effectively discover patterns and noise to help analysts gain insights. Moreover, Beep was about five times faster than the SOTA algorithm.
2210.11905
Ella Rabinovich
Ella Rabinovich and Boaz Carmeli
Exploration of the Usage of Color Terms by Color-blind Participants in Online Discussion Platforms
Accepted at EMNLP 2022 (main conference), 13 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Prominent questions about the role of sensory vs. linguistic input in the way we acquire and use language have been extensively studied in the psycholinguistic literature. However, the relative effect of various factors in a person's overall experience on their linguistic system remains unclear. We study this question by making a step forward towards a better understanding of the conceptual perception of colors by color-blind individuals, as reflected in their spontaneous linguistic productions. Using a novel and carefully curated dataset, we show that red-green color-blind speakers use the "red" and "green" color terms in less predictable contexts, and in linguistic environments evoking mental imagery to a lesser extent, when compared to their normal-sighted counterparts. These findings shed some new and interesting light on the role of sensory experience on our linguistic system.
[ { "created": "Fri, 21 Oct 2022 12:11:10 GMT", "version": "v1" }, { "created": "Sun, 30 Oct 2022 08:41:31 GMT", "version": "v2" } ]
2022-11-01
[ [ "Rabinovich", "Ella", "" ], [ "Carmeli", "Boaz", "" ] ]
Prominent questions about the role of sensory vs. linguistic input in the way we acquire and use language have been extensively studied in the psycholinguistic literature. However, the relative effect of various factors in a person's overall experience on their linguistic system remains unclear. We study this question by making a step forward towards a better understanding of the conceptual perception of colors by color-blind individuals, as reflected in their spontaneous linguistic productions. Using a novel and carefully curated dataset, we show that red-green color-blind speakers use the "red" and "green" color terms in less predictable contexts, and in linguistic environments evoking mental imagery to a lesser extent, when compared to their normal-sighted counterparts. These findings shed some new and interesting light on the role of sensory experience on our linguistic system.
2305.11294
Adrian Groza
Adrian Groza
Solving probability puzzles with logic toolkit
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The proposed approach is to formalise the probabilistic puzzle in equational FOL. Two formalisations are needed: one theory for all models of the given puzzle, and a second theory for the favourable models. Then Mace4, which computes all the interpretation models of a FOL theory, is called twice. First, it is asked to compute all the possible models M_p. Second, the additional constraint is added, and Mace4 computes only the favourable models M_f. Finally, the definition of probability is applied: the number of favourable models is divided by the number of possible models. The proposed approach equips students from the logic tribe to find the correct solution for puzzles from the probabilistic tribe, by using their favourite instruments: modelling and formalisation. I have exemplified here five probabilistic puzzles and how they can be solved by translating them in FOL and then finding the corresponding interpretation models. Mace4 was the tool of choice here. Ongoing work is investigating the limits of this method on various collections of probabilistic puzzles.
[ { "created": "Thu, 18 May 2023 20:35:46 GMT", "version": "v1" } ]
2023-05-22
[ [ "Groza", "Adrian", "" ] ]
The proposed approach is to formalise the probabilistic puzzle in equational FOL. Two formalisations are needed: one theory for all models of the given puzzle, and a second theory for the favourable models. Then Mace4, which computes all the interpretation models of a FOL theory, is called twice. First, it is asked to compute all the possible models M_p. Second, the additional constraint is added, and Mace4 computes only the favourable models M_f. Finally, the definition of probability is applied: the number of favourable models is divided by the number of possible models. The proposed approach equips students from the logic tribe to find the correct solution for puzzles from the probabilistic tribe, by using their favourite instruments: modelling and formalisation. I have exemplified here five probabilistic puzzles and how they can be solved by translating them in FOL and then finding the corresponding interpretation models. Mace4 was the tool of choice here. Ongoing work is investigating the limits of this method on various collections of probabilistic puzzles.
2407.04516
James Rowbottom
James Rowbottom, Georg Maierhofer, Teo Deveney, Katharina Schratz, Pietro Li\`o, Carola-Bibiane Sch\"onlieb, Chris Budd
G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers
null
null
null
null
cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel, and effective, approach to the long-standing problem of mesh adaptivity in finite element methods (FEM). FE solvers are powerful tools for solving partial differential equations (PDEs), but their cost and accuracy are critically dependent on the choice of mesh points. To keep computational costs low, mesh relocation (r-adaptivity) seeks to optimise the position of a fixed number of mesh points to obtain the best FE solution accuracy. Classical approaches to this problem require the solution of a separate nonlinear "meshing" PDE to find the mesh point locations. This incurs significant cost at remeshing and relies on certain a-priori assumptions and guiding heuristics for optimal mesh point location. Recent machine learning approaches to r-adaptivity have mainly focused on the construction of fast surrogates for such classical methods. Our new approach combines a graph neural network (GNN) powered architecture, with training based on direct minimisation of the FE solution error with respect to the mesh point locations. The GNN employs graph neural diffusion (GRAND), closely aligning the mesh solution space to that of classical meshing methodologies, thus replacing heuristics with a learnable strategy, and providing a strong inductive bias. This allows for rapid and robust training and results in an extremely efficient and effective GNN approach to online r-adaptivity. This method outperforms classical and prior ML approaches to r-adaptive meshing on the test problems we consider, in particular achieving lower FE solution error, whilst retaining the significant speed-up over classical methods observed in prior ML work.
[ { "created": "Fri, 5 Jul 2024 13:57:35 GMT", "version": "v1" } ]
2024-07-08
[ [ "Rowbottom", "James", "" ], [ "Maierhofer", "Georg", "" ], [ "Deveney", "Teo", "" ], [ "Schratz", "Katharina", "" ], [ "Liò", "Pietro", "" ], [ "Schönlieb", "Carola-Bibiane", "" ], [ "Budd", "Chris", "" ] ]
We present a novel, and effective, approach to the long-standing problem of mesh adaptivity in finite element methods (FEM). FE solvers are powerful tools for solving partial differential equations (PDEs), but their cost and accuracy are critically dependent on the choice of mesh points. To keep computational costs low, mesh relocation (r-adaptivity) seeks to optimise the position of a fixed number of mesh points to obtain the best FE solution accuracy. Classical approaches to this problem require the solution of a separate nonlinear "meshing" PDE to find the mesh point locations. This incurs significant cost at remeshing and relies on certain a-priori assumptions and guiding heuristics for optimal mesh point location. Recent machine learning approaches to r-adaptivity have mainly focused on the construction of fast surrogates for such classical methods. Our new approach combines a graph neural network (GNN) powered architecture, with training based on direct minimisation of the FE solution error with respect to the mesh point locations. The GNN employs graph neural diffusion (GRAND), closely aligning the mesh solution space to that of classical meshing methodologies, thus replacing heuristics with a learnable strategy, and providing a strong inductive bias. This allows for rapid and robust training and results in an extremely efficient and effective GNN approach to online r-adaptivity. This method outperforms classical and prior ML approaches to r-adaptive meshing on the test problems we consider, in particular achieving lower FE solution error, whilst retaining the significant speed-up over classical methods observed in prior ML work.
1811.05375
Carlos Sarraute
Martin Fixman, Martin Minnoni, Carlos Sarraute
Comparison of Feature Extraction Methods and Predictors for Income Inference
Argentine Symposium on Big Data (AGRANDA), September 5, 2017
null
null
null
cs.CY cs.LG cs.SI stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Patterns of mobile phone communications, coupled with the information of the social network graph and financial behavior, allow us to make inferences of users' socio-economic attributes such as their income level. We present here several methods to extract features from mobile phone usage (calls and messages), and compare different combinations of supervised machine learning techniques and sets of features used as input for the inference of users' income. Our experimental results show that the Bayesian method based on the communication graph outperforms standard machine learning algorithms using node-based features.
[ { "created": "Tue, 13 Nov 2018 15:53:22 GMT", "version": "v1" } ]
2020-02-27
[ [ "Fixman", "Martin", "" ], [ "Minnoni", "Martin", "" ], [ "Sarraute", "Carlos", "" ] ]
Patterns of mobile phone communications, coupled with the information of the social network graph and financial behavior, allow us to make inferences of users' socio-economic attributes such as their income level. We present here several methods to extract features from mobile phone usage (calls and messages), and compare different combinations of supervised machine learning techniques and sets of features used as input for the inference of users' income. Our experimental results show that the Bayesian method based on the communication graph outperforms standard machine learning algorithms using node-based features.
2405.07963
Dayanjan Wijesinghe
Suad Alshammari, Lama Basalelah, Walaa Abu Rukbah, Ali Alsuhibani, Dayanjan S. Wijesinghe
PyZoBot: A Platform for Conversational Information Extraction and Synthesis from Curated Zotero Reference Libraries through Advanced Retrieval-Augmented Generation
10 pages, 2 figures. The code is provided in github and the link to the repository is provided at the end of the publication
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The exponential growth of scientific literature has resulted in information overload, challenging researchers to effectively synthesize relevant publications. This paper explores the integration of traditional reference management software with advanced computational techniques, including Large Language Models and Retrieval-Augmented Generation. We introduce PyZoBot, an AI-driven platform developed in Python, incorporating Zotero's reference management with OpenAI's sophisticated LLMs. PyZoBot streamlines knowledge extraction and synthesis from extensive human-curated scientific literature databases. It demonstrates proficiency in handling complex natural language queries, integrating data from multiple sources, and meticulously presenting references to uphold research integrity and facilitate further exploration. By leveraging LLMs, RAG, and human expertise through a curated library, PyZoBot offers an effective solution to manage information overload and keep pace with rapid scientific advancements. The development of such AI-enhanced tools promises significant improvements in research efficiency and effectiveness across various disciplines.
[ { "created": "Mon, 13 May 2024 17:44:05 GMT", "version": "v1" } ]
2024-05-14
[ [ "Alshammari", "Suad", "" ], [ "Basalelah", "Lama", "" ], [ "Rukbah", "Walaa Abu", "" ], [ "Alsuhibani", "Ali", "" ], [ "Wijesinghe", "Dayanjan S.", "" ] ]
The exponential growth of scientific literature has resulted in information overload, challenging researchers to effectively synthesize relevant publications. This paper explores the integration of traditional reference management software with advanced computational techniques, including Large Language Models and Retrieval-Augmented Generation. We introduce PyZoBot, an AI-driven platform developed in Python, incorporating Zotero's reference management with OpenAI's sophisticated LLMs. PyZoBot streamlines knowledge extraction and synthesis from extensive human-curated scientific literature databases. It demonstrates proficiency in handling complex natural language queries, integrating data from multiple sources, and meticulously presenting references to uphold research integrity and facilitate further exploration. By leveraging LLMs, RAG, and human expertise through a curated library, PyZoBot offers an effective solution to manage information overload and keep pace with rapid scientific advancements. The development of such AI-enhanced tools promises significant improvements in research efficiency and effectiveness across various disciplines.
2403.15600
Sivana Hamer
Sivana Hamer, Marcelo d'Amorim, Laurie Williams
Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers
8 pages, 2 figures, accepted at Deep Learning Security and Privacy Workshop (DLSP) part of IEEE Symposium on Security and Privacy Workshops (SPW) for 2024
null
null
null
cs.SE cs.AI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sonatype's 2023 report found that 97% of developers and security leads integrate generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), into their development process. Concerns about the security implications of this trend have been raised. Developers are now weighing the benefits and risks of LLMs against other relied-upon information sources, such as StackOverflow (SO), requiring empirical data to inform their choice. In this work, our goal is to raise software developers' awareness of the security implications when selecting code snippets by empirically comparing the vulnerabilities of ChatGPT and StackOverflow. To achieve this, we used an existing Java dataset from SO with security-related questions and answers. Then, we asked ChatGPT the same SO questions, gathering the generated code for comparison. After curating the dataset, we analyzed the number and types of Common Weakness Enumeration (CWE) vulnerabilities of 108 snippets from each platform using CodeQL. ChatGPT-generated code contained 248 vulnerabilities compared to the 302 vulnerabilities found in SO snippets, producing 20% fewer vulnerabilities with a statistically significant difference. Additionally, ChatGPT generated 19 types of CWE, fewer than the 22 found in SO. Our findings suggest developers are under-educated on insecure code propagation from both platforms, as we found 274 unique vulnerabilities and 25 types of CWE. Any code copied and pasted, created by AI or humans, cannot be trusted blindly, requiring good software engineering practices to reduce risk. Future work can help minimize insecure code propagation from any platform.
[ { "created": "Fri, 22 Mar 2024 20:06:41 GMT", "version": "v1" } ]
2024-03-26
[ [ "Hamer", "Sivana", "" ], [ "d'Amorim", "Marcelo", "" ], [ "Williams", "Laurie", "" ] ]
Sonatype's 2023 report found that 97% of developers and security leads integrate generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), into their development process. Concerns about the security implications of this trend have been raised. Developers are now weighing the benefits and risks of LLMs against other relied-upon information sources, such as StackOverflow (SO), requiring empirical data to inform their choice. In this work, our goal is to raise software developers' awareness of the security implications when selecting code snippets by empirically comparing the vulnerabilities of ChatGPT and StackOverflow. To achieve this, we used an existing Java dataset from SO with security-related questions and answers. Then, we asked ChatGPT the same SO questions, gathering the generated code for comparison. After curating the dataset, we analyzed the number and types of Common Weakness Enumeration (CWE) vulnerabilities of 108 snippets from each platform using CodeQL. ChatGPT-generated code contained 248 vulnerabilities compared to the 302 vulnerabilities found in SO snippets, producing 20% fewer vulnerabilities with a statistically significant difference. Additionally, ChatGPT generated 19 types of CWE, fewer than the 22 found in SO. Our findings suggest developers are under-educated on insecure code propagation from both platforms, as we found 274 unique vulnerabilities and 25 types of CWE. Any code copied and pasted, created by AI or humans, cannot be trusted blindly, requiring good software engineering practices to reduce risk. Future work can help minimize insecure code propagation from any platform.
2109.02810
EPTCS
Maria Bendix Mikkelsen (DIKU, University of Copenhagen, Denmark), Robert Gl\"uck (DIKU, University of Copenhagen, Denmark), Maja H. Kirkeby (Roskilde University, Denmark)
An Inversion Tool for Conditional Term Rewriting Systems -- A Case Study of Ackermann Inversion
In Proceedings VPT 2021, arXiv:2109.02001
EPTCS 341, 2021, pp. 33-41
10.4204/EPTCS.341.3
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on an inversion tool for a class of oriented conditional constructor term rewriting systems. Four well-behaved rule inverters ranging from trivial to full, partial and semi-inverters are included. Conditional term rewriting systems are theoretically well founded and can model functional and non-functional rewrite relations. We illustrate the inversion by experiments with full and partial inversions of the Ackermann function. The case study demonstrates, among others, that polyvariant inversion and input-output set propagation can reduce the search space of the generated inverse systems.
[ { "created": "Tue, 7 Sep 2021 01:55:39 GMT", "version": "v1" } ]
2021-09-08
[ [ "Mikkelsen", "Maria Bendix", "", "DIKU, University of Copenhagen, Denmark" ], [ "Glück", "Robert", "", "DIKU, University of Copenhagen, Denmark" ], [ "Kirkeby", "Maja H.", "", "Roskilde University, Denmark" ] ]
We report on an inversion tool for a class of oriented conditional constructor term rewriting systems. Four well-behaved rule inverters ranging from trivial to full, partial and semi-inverters are included. Conditional term rewriting systems are theoretically well founded and can model functional and non-functional rewrite relations. We illustrate the inversion by experiments with full and partial inversions of the Ackermann function. The case study demonstrates, among others, that polyvariant inversion and input-output set propagation can reduce the search space of the generated inverse systems.
2311.12048
Kim Doyoung
Doyoung Kim, Susik Yoon, Dongmin Park, Youngjun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee
One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning
ICML 2024
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies which are tailored to only handle semantic shifts of uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose an adaptive prompting approach that effectively accommodates semantic shifts of varying degree where mild and abrupt shifts are mixed. AdaPromptCL employs the assign-and-refine semantic grouping mechanism that dynamically manages prompt groups in accordance with the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experiment results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially in the benchmark datasets with diverse semantic shifts between tasks.
[ { "created": "Sat, 18 Nov 2023 08:55:08 GMT", "version": "v1" }, { "created": "Mon, 22 Jul 2024 11:11:28 GMT", "version": "v2" } ]
2024-07-23
[ [ "Kim", "Doyoung", "" ], [ "Yoon", "Susik", "" ], [ "Park", "Dongmin", "" ], [ "Lee", "Youngjun", "" ], [ "Song", "Hwanjun", "" ], [ "Bang", "Jihwan", "" ], [ "Lee", "Jae-Gil", "" ] ]
In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies which are tailored to only handle semantic shifts of uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose an adaptive prompting approach that effectively accommodates semantic shifts of varying degree where mild and abrupt shifts are mixed. AdaPromptCL employs the assign-and-refine semantic grouping mechanism that dynamically manages prompt groups in accordance with the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experiment results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially in the benchmark datasets with diverse semantic shifts between tasks.
1506.09019
Jaderick Pabico
Jaderick P. Pabico
Artificial Catalytic Reactions in 2D for Combinatorial Optimization
8 pages, 2 figures, In H.N. Adorna (ed.) Proceedings of the 3rd Symposium on Mathematical Aspects of Computer Science (SMACS 2006), Adventist University of the Philippines, Silang, Cavite, Philippines, 19-20 October 2006 (Published by the Computing Society of the Philippines)
null
null
null
cs.ET cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Presented in this paper is a derivation of a 2D catalytic reaction-based model to solve combinatorial optimization problems (COPs). The simulated catalytic reactions, a computational metaphor, occur in an artificial chemical reactor that finds near-optimal solutions to COPs. The artificial environment is governed by catalytic reactions that can alter the structure of artificial molecular elements. Altering the molecular structure means finding new solutions to the COP. The molecular mass of the elements was considered as a measure of goodness of fit of the solutions. Several data structures and matrices were used to record the directions and locations of the molecules. These provided the model the 2D topology. The Traveling Salesperson Problem (TSP) was used as a working example. The performance of the model in finding a solution for the TSP was compared to the performance of a topology-less model. Experimental results show that the 2D model performs better than the topology-less one.
[ { "created": "Tue, 30 Jun 2015 10:16:20 GMT", "version": "v1" } ]
2015-07-01
[ [ "Pabico", "Jaderick P.", "" ] ]
Presented in this paper is a derivation of a 2D catalytic reaction-based model to solve combinatorial optimization problems (COPs). The simulated catalytic reactions, a computational metaphor, occur in an artificial chemical reactor that finds near-optimal solutions to COPs. The artificial environment is governed by catalytic reactions that can alter the structure of artificial molecular elements. Altering the molecular structure means finding new solutions to the COP. The molecular mass of the elements was considered a measure of goodness of fit of the solutions. Several data structures and matrices were used to record the directions and locations of the molecules. These provided the model with its 2D topology. The Traveling Salesperson Problem (TSP) was used as a working example. The performance of the model in finding a solution for the TSP was compared to that of a topology-less model. Experimental results show that the 2D model performs better than the topology-less one.
1401.4568
Petru Valicov
Julien Bensmail (LaBRI), Ararat Harutyunyan (MI), Herv\'e Hocquard (LaBRI), Petru Valicov (LIP)
Strong edge-colouring of sparse planar graphs
null
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A strong edge-colouring of a graph is a proper edge-colouring where each colour class induces a matching. It is known that every planar graph with maximum degree $\Delta$ has a strong edge-colouring with at most $4\Delta+4$ colours. We show that $3\Delta+1$ colours suffice if the graph has girth 6, and $4\Delta$ colours suffice if $\Delta\geq 7$ or the girth is at least 5. In the last part of the paper, we raise some questions related to a long-standing conjecture of Vizing on proper edge-colouring of planar graphs.
[ { "created": "Sat, 18 Jan 2014 17:11:56 GMT", "version": "v1" }, { "created": "Wed, 22 Jan 2014 13:21:02 GMT", "version": "v2" }, { "created": "Mon, 21 Jul 2014 16:32:19 GMT", "version": "v3" } ]
2014-07-22
[ [ "Bensmail", "Julien", "", "LaBRI" ], [ "Harutyunyan", "Ararat", "", "MI" ], [ "Hocquard", "Hervé", "", "LaBRI" ], [ "Valicov", "Petru", "", "LIP" ] ]
A strong edge-colouring of a graph is a proper edge-colouring where each colour class induces a matching. It is known that every planar graph with maximum degree $\Delta$ has a strong edge-colouring with at most $4\Delta+4$ colours. We show that $3\Delta+1$ colours suffice if the graph has girth 6, and $4\Delta$ colours suffice if $\Delta\geq 7$ or the girth is at least 5. In the last part of the paper, we raise some questions related to a long-standing conjecture of Vizing on proper edge-colouring of planar graphs.
1912.02918
Fabio Valerio Massoli
Fabio Valerio Massoli, Fabio Carrara, Giuseppe Amato, Fabrizio Falchi
Detection of Face Recognition Adversarial Attacks
null
Computer Vision and Image Understanding Volume 202, January 2021, 103103
10.1016/j.cviu.2020.103103
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Learning methods have become state-of-the-art for solving tasks such as Face Recognition (FR). Unfortunately, despite their success, it has been pointed out that these learning models are exposed to adversarial inputs - images to which an amount of noise imperceptible to humans is added to maliciously fool a neural network - thus limiting their adoption in real-world applications. While an enormous effort has been spent on training models that are robust against this type of threat, adversarial detection techniques have recently started to draw attention within the scientific community. A detection approach has the advantage that it does not require re-training any model; thus, it can be added on top of any system. In this context, we present our work on adversarial sample detection in forensics, mainly focused on detecting attacks against FR systems in which the learning model is typically used only as a feature extractor. In these cases, training a more robust classifier might not be enough to defend a FR system. The contribution of our work is four-fold: i) we tested our recently proposed adversarial detection approach against classifier attacks, i.e. adversarial samples crafted to fool a FR neural network acting as a classifier; ii) using a k-Nearest Neighbor (kNN) algorithm as guidance, we generated deep feature attacks against a FR system based on a DL model acting as a feature extractor, followed by a kNN that returns the query identity based on feature similarity; iii) we used the deep feature attacks to fool a FR system on the 1:1 Face Verification task and showed their superior effectiveness, with respect to classifier attacks, in fooling such systems; iv) we used the detectors trained on classifier attacks to detect deep feature attacks, thus showing that the approach generalizes to different types of attacks.
[ { "created": "Thu, 5 Dec 2019 23:24:33 GMT", "version": "v1" } ]
2020-11-23
[ [ "Massoli", "Fabio Valerio", "" ], [ "Carrara", "Fabio", "" ], [ "Amato", "Giuseppe", "" ], [ "Falchi", "Fabrizio", "" ] ]
Deep Learning methods have become state-of-the-art for solving tasks such as Face Recognition (FR). Unfortunately, despite their success, it has been pointed out that these learning models are exposed to adversarial inputs - images to which an amount of noise imperceptible to humans is added to maliciously fool a neural network - thus limiting their adoption in real-world applications. While an enormous effort has been spent on training models that are robust against this type of threat, adversarial detection techniques have recently started to draw attention within the scientific community. A detection approach has the advantage that it does not require re-training any model; thus, it can be added on top of any system. In this context, we present our work on adversarial sample detection in forensics, mainly focused on detecting attacks against FR systems in which the learning model is typically used only as a feature extractor. In these cases, training a more robust classifier might not be enough to defend a FR system. The contribution of our work is four-fold: i) we tested our recently proposed adversarial detection approach against classifier attacks, i.e. adversarial samples crafted to fool a FR neural network acting as a classifier; ii) using a k-Nearest Neighbor (kNN) algorithm as guidance, we generated deep feature attacks against a FR system based on a DL model acting as a feature extractor, followed by a kNN that returns the query identity based on feature similarity; iii) we used the deep feature attacks to fool a FR system on the 1:1 Face Verification task and showed their superior effectiveness, with respect to classifier attacks, in fooling such systems; iv) we used the detectors trained on classifier attacks to detect deep feature attacks, thus showing that the approach generalizes to different types of attacks.
2008.13583
Isabelle Tingzon
Isabelle Tingzon, Niccolo Dejito, Ren Avell Flores, Rodolfo De Guzman, Liliana Carvajal, Katerine Zapata Erazo, Ivan Enrique Contreras Cala, Jeffrey Villaveces, Daniela Rubio, Rayid Ghani
Mapping New Informal Settlements using Machine Learning and Time Series Satellite Images: An Application in the Venezuelan Migration Crisis
null
null
10.1109/AI4G50087.2020.9311041
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since 2014, nearly 2 million Venezuelans have fled to Colombia to escape an economically devastated country during what is one of the largest humanitarian crises in modern history. Non-government organizations and local government units are faced with the challenge of identifying, assessing, and monitoring rapidly growing migrant communities in order to provide urgent humanitarian aid. However, with many of these displaced populations living in informal settlement areas across the country, locating migrant settlements across large territories can be a major challenge. To address this problem, we propose a novel approach for rapidly and cost-effectively locating new and emerging informal settlements using machine learning and publicly accessible Sentinel-2 time-series satellite imagery. We demonstrate the effectiveness of the approach in identifying potential Venezuelan migrant settlements in Colombia that have emerged between 2015 and 2020. Finally, we emphasize the importance of post-classification verification and present a two-step validation approach consisting of (1) remote validation using Google Earth and (2) on-the-ground validation through the Premise App, a mobile crowdsourcing platform.
[ { "created": "Thu, 27 Aug 2020 04:42:45 GMT", "version": "v1" }, { "created": "Wed, 18 Nov 2020 18:59:49 GMT", "version": "v2" }, { "created": "Wed, 16 Dec 2020 02:35:56 GMT", "version": "v3" } ]
2023-07-27
[ [ "Tingzon", "Isabelle", "" ], [ "Dejito", "Niccolo", "" ], [ "Flores", "Ren Avell", "" ], [ "De Guzman", "Rodolfo", "" ], [ "Carvajal", "Liliana", "" ], [ "Erazo", "Katerine Zapata", "" ], [ "Cala", "Ivan Enrique Contreras", "" ], [ "Villaveces", "Jeffrey", "" ], [ "Rubio", "Daniela", "" ], [ "Ghani", "Rayid", "" ] ]
Since 2014, nearly 2 million Venezuelans have fled to Colombia to escape an economically devastated country during what is one of the largest humanitarian crises in modern history. Non-government organizations and local government units are faced with the challenge of identifying, assessing, and monitoring rapidly growing migrant communities in order to provide urgent humanitarian aid. However, with many of these displaced populations living in informal settlement areas across the country, locating migrant settlements across large territories can be a major challenge. To address this problem, we propose a novel approach for rapidly and cost-effectively locating new and emerging informal settlements using machine learning and publicly accessible Sentinel-2 time-series satellite imagery. We demonstrate the effectiveness of the approach in identifying potential Venezuelan migrant settlements in Colombia that have emerged between 2015 and 2020. Finally, we emphasize the importance of post-classification verification and present a two-step validation approach consisting of (1) remote validation using Google Earth and (2) on-the-ground validation through the Premise App, a mobile crowdsourcing platform.
1806.11527
Ren\'e van Bevern
Ren\'e van Bevern and Oxana Yu. Tsidulko and Philipp Zschoche
Representative families for matroid intersections, with applications to location, packing, and covering problems
Restructuring (focus on representative families instead of facility location), slight running time improvements, algorithms factored out
Discrete Applied Mathematics, 298: 110-128, 2021
10.1016/j.dam.2021.03.014
null
cs.DS cs.DM math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show algorithms for computing representative families for matroid intersections and use them in fixed-parameter algorithms for set packing, set covering, and facility location problems with multiple matroid constraints. We complement our tractability results by hardness results.
[ { "created": "Fri, 29 Jun 2018 16:45:42 GMT", "version": "v1" }, { "created": "Sun, 30 Sep 2018 08:03:36 GMT", "version": "v2" }, { "created": "Sat, 13 Oct 2018 08:51:55 GMT", "version": "v3" }, { "created": "Sun, 28 Feb 2021 10:00:39 GMT", "version": "v4" } ]
2021-09-14
[ [ "van Bevern", "René", "" ], [ "Tsidulko", "Oxana Yu.", "" ], [ "Zschoche", "Philipp", "" ] ]
We show algorithms for computing representative families for matroid intersections and use them in fixed-parameter algorithms for set packing, set covering, and facility location problems with multiple matroid constraints. We complement our tractability results by hardness results.
cs/0605124
Marcelo Arenas
Jorge Perez, Marcelo Arenas and Claudio Gutierrez
Semantics and Complexity of SPARQL
null
null
null
null
cs.DB
null
SPARQL is the W3C candidate recommendation query language for RDF. In this paper we systematically address the formal study of SPARQL, concentrating on its graph pattern facility. We consider for this study a fragment without literals and a simple version of filters which encompasses all the main issues yet is simple to formalize. We provide a compositional semantics, prove that there are normal forms, prove complexity bounds (among them, that the evaluation of SPARQL patterns is PSPACE-complete), compare our semantics to an alternative operational semantics, give simple and natural conditions under which both semantics coincide, and discuss optimization procedures.
[ { "created": "Fri, 26 May 2006 16:41:15 GMT", "version": "v1" } ]
2007-05-23
[ [ "Perez", "Jorge", "" ], [ "Arenas", "Marcelo", "" ], [ "Gutierrez", "Claudio", "" ] ]
SPARQL is the W3C candidate recommendation query language for RDF. In this paper we systematically address the formal study of SPARQL, concentrating on its graph pattern facility. We consider for this study a fragment without literals and a simple version of filters which encompasses all the main issues yet is simple to formalize. We provide a compositional semantics, prove that there are normal forms, prove complexity bounds (among them, that the evaluation of SPARQL patterns is PSPACE-complete), compare our semantics to an alternative operational semantics, give simple and natural conditions under which both semantics coincide, and discuss optimization procedures.
2104.10047
Ignacio Sarasua
Ignacio Sarasua, Jonwong Lee, Christian Wachinger
Geometric Deep Learning on Anatomical Meshes for the Prediction of Alzheimer's Disease
null
null
null
null
cs.LG cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Geometric deep learning can find representations that are optimal for a given task and therefore improve the performance over pre-defined representations. While current work has mainly focused on point representations, meshes also contain connectivity information and are therefore a more comprehensive characterization of the underlying anatomical surface. In this work, we evaluate four recent geometric deep learning approaches that operate on mesh representations. These approaches can be grouped into template-free and template-based approaches, where the template-based methods need a more elaborate pre-processing step with the definition of a common reference template and correspondences. We compare the different networks for the prediction of Alzheimer's disease based on the meshes of the hippocampus. Our results show advantages for template-based methods in terms of accuracy, number of learnable parameters, and training speed. While the template creation may be limiting for some applications, neuroimaging has a long history of building templates with automated tools readily available. Overall, working with meshes is more involved than working with simplistic point clouds, but they also offer new avenues for designing geometric deep learning architectures.
[ { "created": "Tue, 20 Apr 2021 15:17:13 GMT", "version": "v1" } ]
2021-04-21
[ [ "Sarasua", "Ignacio", "" ], [ "Lee", "Jonwong", "" ], [ "Wachinger", "Christian", "" ] ]
Geometric deep learning can find representations that are optimal for a given task and therefore improve the performance over pre-defined representations. While current work has mainly focused on point representations, meshes also contain connectivity information and are therefore a more comprehensive characterization of the underlying anatomical surface. In this work, we evaluate four recent geometric deep learning approaches that operate on mesh representations. These approaches can be grouped into template-free and template-based approaches, where the template-based methods need a more elaborate pre-processing step with the definition of a common reference template and correspondences. We compare the different networks for the prediction of Alzheimer's disease based on the meshes of the hippocampus. Our results show advantages for template-based methods in terms of accuracy, number of learnable parameters, and training speed. While the template creation may be limiting for some applications, neuroimaging has a long history of building templates with automated tools readily available. Overall, working with meshes is more involved than working with simplistic point clouds, but they also offer new avenues for designing geometric deep learning architectures.
2112.01919
Huilian Sophie Qiu
Huilian Sophie Qiu
On Female Audience Sending Virtual Gifts to Male Streamers on Douyin
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Live streaming has become increasingly popular. Our study focuses on the emerging Chinese female audiences who send virtual gifts to young male streamers. We observe a reversed entertainer-viewer gender relationship. We aim to study why they watch young male streamers, why they send gifts, and their relationships with these streamers.
[ { "created": "Tue, 16 Nov 2021 07:05:49 GMT", "version": "v1" } ]
2021-12-06
[ [ "Qiu", "Huilian Sophie", "" ] ]
Live streaming has become increasingly popular. Our study focuses on the emerging Chinese female audiences who send virtual gifts to young male streamers. We observe a reversed entertainer-viewer gender relationship. We aim to study why they watch young male streamers, why they send gifts, and their relationships with these streamers.
2108.00492
Jinming Wen Prof
Jinming Wen and Xiao Wen Chang
On the Success Probability of Three Detectors for the Box-Constrained Integer Linear Model
12 pages, to appear in IEEE Transactions on Communications
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
This paper is concerned with detecting an integer parameter vector inside a box from a linear model that is corrupted with a noise vector following the Gaussian distribution. One of the commonly used detectors is the maximum likelihood detector, which is obtained by solving a box-constrained integer least squares problem that is NP-hard. Two other popular detectors are the box-constrained rounding and Babai detectors, due to their high efficiency of implementation. In this paper, we first present formulas for the success probabilities (the probabilities of correct detection) of these three detectors for two different situations: when the integer parameter vector is deterministic and when it is uniformly distributed over the constraint box. Then, we give two simple examples to respectively show that the success probability of the box-constrained rounding detector can be larger than that of the box-constrained Babai detector and that the latter can be larger than the success probability of the maximum likelihood detector when the parameter vector is deterministic, and prove that the success probability of the box-constrained rounding detector is never larger than that of the box-constrained Babai detector when the parameter vector is uniformly distributed over the constraint box. Some relations between the results for the box-constrained and ordinary cases are presented, and two easily computable bounds on the success probability of the maximum likelihood detector are developed. Finally, simulation results are provided to illustrate our main theoretical findings.
[ { "created": "Sun, 1 Aug 2021 16:49:01 GMT", "version": "v1" } ]
2021-08-03
[ [ "Wen", "Jinming", "" ], [ "Chang", "Xiao Wen", "" ] ]
This paper is concerned with detecting an integer parameter vector inside a box from a linear model that is corrupted with a noise vector following the Gaussian distribution. One of the commonly used detectors is the maximum likelihood detector, which is obtained by solving a box-constrained integer least squares problem that is NP-hard. Two other popular detectors are the box-constrained rounding and Babai detectors, due to their high efficiency of implementation. In this paper, we first present formulas for the success probabilities (the probabilities of correct detection) of these three detectors for two different situations: when the integer parameter vector is deterministic and when it is uniformly distributed over the constraint box. Then, we give two simple examples to respectively show that the success probability of the box-constrained rounding detector can be larger than that of the box-constrained Babai detector and that the latter can be larger than the success probability of the maximum likelihood detector when the parameter vector is deterministic, and prove that the success probability of the box-constrained rounding detector is never larger than that of the box-constrained Babai detector when the parameter vector is uniformly distributed over the constraint box. Some relations between the results for the box-constrained and ordinary cases are presented, and two easily computable bounds on the success probability of the maximum likelihood detector are developed. Finally, simulation results are provided to illustrate our main theoretical findings.
2009.09331
Amiangshu Bosu
Jaydeb Sarker, Asif Kamal Turzo, Amiangshu Bosu
A Benchmark Study of the Contemporary Toxicity Detectors on Software Engineering Interactions
null
Proceedings of the 27th Asia-Pacific Software Engineering Conference (APSEC 2020)
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated filtering of toxic conversations may help an Open-source software (OSS) community to maintain healthy interactions among the project participants. Although several general-purpose tools exist to identify toxic content, they may incorrectly flag some words commonly used in the Software Engineering (SE) context as toxic (e.g., 'junk', 'kill', and 'dump') and vice versa. To address this challenge, an SE-specific tool has been proposed by the CMU Strudel Lab (referred to as `STRUDEL' hereinafter) by combining the output of the Perspective API with the output from a customized version of Stanford's Politeness detector tool. However, since STRUDEL's evaluation was very limited, with only 654 SE texts, its practical applicability is unclear. Therefore, this study aims to empirically evaluate the Strudel tool as well as four state-of-the-art general-purpose toxicity detectors on a large-scale SE dataset. To this end, we empirically developed a rubric to manually label toxic SE interactions. Using this rubric, we manually labeled a dataset of 6,533 code review comments and 4,140 Gitter messages. The results of our analyses suggest significant degradation of all tools' performance on our datasets. The degradation was significantly higher on our dataset of formal SE communication, such as code review, than on our dataset of informal communication, such as Gitter messages. Two of the models from our study showed significant performance improvements during 10-fold cross-validation after we retrained them on our SE datasets. Based on our manual investigation of the incorrectly classified texts, we have identified several recommendations for developing an SE-specific toxicity detector.
[ { "created": "Sun, 20 Sep 2020 01:27:14 GMT", "version": "v1" } ]
2020-09-22
[ [ "Sarker", "Jaydeb", "" ], [ "Turzo", "Asif Kamal", "" ], [ "Bosu", "Amiangshu", "" ] ]
Automated filtering of toxic conversations may help an Open-source software (OSS) community to maintain healthy interactions among the project participants. Although several general-purpose tools exist to identify toxic content, they may incorrectly flag some words commonly used in the Software Engineering (SE) context as toxic (e.g., 'junk', 'kill', and 'dump') and vice versa. To address this challenge, an SE-specific tool has been proposed by the CMU Strudel Lab (referred to as `STRUDEL' hereinafter) by combining the output of the Perspective API with the output from a customized version of Stanford's Politeness detector tool. However, since STRUDEL's evaluation was very limited, with only 654 SE texts, its practical applicability is unclear. Therefore, this study aims to empirically evaluate the Strudel tool as well as four state-of-the-art general-purpose toxicity detectors on a large-scale SE dataset. To this end, we empirically developed a rubric to manually label toxic SE interactions. Using this rubric, we manually labeled a dataset of 6,533 code review comments and 4,140 Gitter messages. The results of our analyses suggest significant degradation of all tools' performance on our datasets. The degradation was significantly higher on our dataset of formal SE communication, such as code review, than on our dataset of informal communication, such as Gitter messages. Two of the models from our study showed significant performance improvements during 10-fold cross-validation after we retrained them on our SE datasets. Based on our manual investigation of the incorrectly classified texts, we have identified several recommendations for developing an SE-specific toxicity detector.
2111.09161
Thomas Sandholm
Thomas Sandholm and Sayandev Mukherjee
MASS: Mobile Autonomous Station Simulation
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
We propose a set of tools to replay wireless network traffic traces, while preserving the privacy of the original traces. Traces are generated by a user- and context-aware trained generative adversarial network (GAN). The replay allows for realistic traces from any number of users and of any trace duration to be produced given contextual parameters like the type of application and the real-time signal strength. We demonstrate the usefulness of the tools in three replay scenarios: Linux- and Android-station experiments and NS3 simulations. We also evaluate the ability of the GAN model to generate traces that retain key statistical properties of the original traces such as feature correlation, statistical moments, and novelty. Our results show that we beat both traditional statistical distribution fitting approaches as well as a state-of-the-art GAN time series generator across these metrics. The ability of our GAN model to generate any number of user traces regardless of the number of users in the original trace also makes our tools more practically applicable compared to previous GAN approaches. Furthermore, we present a use case where our tools were employed in a Wi-Fi research experiment.
[ { "created": "Wed, 17 Nov 2021 14:51:35 GMT", "version": "v1" } ]
2021-11-18
[ [ "Sandholm", "Thomas", "" ], [ "Mukherjee", "Sayandev", "" ] ]
We propose a set of tools to replay wireless network traffic traces, while preserving the privacy of the original traces. Traces are generated by a user- and context-aware trained generative adversarial network (GAN). The replay allows for realistic traces from any number of users and of any trace duration to be produced given contextual parameters like the type of application and the real-time signal strength. We demonstrate the usefulness of the tools in three replay scenarios: Linux- and Android-station experiments and NS3 simulations. We also evaluate the ability of the GAN model to generate traces that retain key statistical properties of the original traces such as feature correlation, statistical moments, and novelty. Our results show that we beat both traditional statistical distribution fitting approaches as well as a state-of-the-art GAN time series generator across these metrics. The ability of our GAN model to generate any number of user traces regardless of the number of users in the original trace also makes our tools more practically applicable compared to previous GAN approaches. Furthermore, we present a use case where our tools were employed in a Wi-Fi research experiment.
1704.07809
Tomas Simon
Tomas Simon, Hanbyul Joo, Iain Matthews, Yaser Sheikh
Hand Keypoint Detection in Single Images using Multiview Bootstrapping
CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.
[ { "created": "Tue, 25 Apr 2017 17:37:48 GMT", "version": "v1" } ]
2017-04-26
[ [ "Simon", "Tomas", "" ], [ "Joo", "Hanbyul", "" ], [ "Matthews", "Iain", "" ], [ "Sheikh", "Yaser", "" ] ]
We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.
2210.06986
Anastasia Safonova
Ilya Nikitin, Brian O'Connor, Anastasia Safonova
Tone prediction and orthographic conversion for Basaa
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a seq2seq approach for transliterating missionary Basaa orthographies into the official orthography. Our model is pre-trained on Basaa missionary and official orthography corpora using BERT. Since Basaa is a low-resource language, we decided to use the mT5 model for our project. Before training our model, we pre-processed our corpora by eliminating one-to-one correspondences between spellings and unifying characters variably written with one to two characters into single-character form. Our best mT5 model achieved a CER of 12.6747 and a WER of 40.1012.
[ { "created": "Thu, 13 Oct 2022 12:58:39 GMT", "version": "v1" } ]
2022-10-14
[ [ "Nikitin", "Ilya", "" ], [ "O'Connor", "Brian", "" ], [ "Safonova", "Anastasia", "" ] ]
In this paper, we present a seq2seq approach for transliterating missionary Basaa orthographies into the official orthography. Our model is pre-trained on Basaa missionary and official orthography corpora using BERT. Since Basaa is a low-resource language, we decided to use the mT5 model for our project. Before training our model, we pre-processed our corpora by eliminating one-to-one correspondences between spellings and unifying characters variably written with one to two characters into single-character form. Our best mT5 model achieved a CER of 12.6747 and a WER of 40.1012.
2305.05705
Maria Waheed
Maria Waheed, Michael Milford, Xiaojun Zhai, Klaus McDonald-Maier and Shoaib Ehsan
An Evaluation and Ranking of Different Voting Schemes for Improved Visual Place Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual Place Recognition has recently seen a surge of endeavours utilizing different ensemble approaches to improve VPR performance. Ideas like multi-process fusion or switching involve combining different VPR techniques together, utilizing different strategies. One major aspect common to many of these strategies is voting. Voting is widely used in many ensemble methods, so it is potentially a relevant subject to explore in terms of its application and significance for improving VPR performance. This paper looks in detail at and analyzes a variety of voting schemes to evaluate which voting technique is optimal for an ensemble VPR setup. We take inspiration from a variety of voting schemes that exist and are widely employed in other research fields such as politics and sociology. The idea is inspired by the observation that different voting methods result in different outcomes for the same type of data, and each voting scheme is utilized for specific cases in different academic fields. Some of these voting schemes include Condorcet voting, Borda Count and Plurality voting. Voting employed in any aspect requires that a fair system be established that outputs the best and most favourable results, which in our case would involve improving VPR performance. We evaluate some of these voting techniques in a standardized testing of different VPR techniques, using a variety of VPR data sets. We aim to determine whether a single optimal voting scheme exists or, much like in other fields of research, the selection of a voting technique is relative to its application and environment. We also aim to propose a ranking of these different voting methods from best to worst according to our results, as this will allow for better selection of voting schemes.
[ { "created": "Tue, 9 May 2023 18:24:33 GMT", "version": "v1" } ]
2023-05-11
[ [ "Waheed", "Maria", "" ], [ "Milford", "Michael", "" ], [ "Zhai", "Xiaojun", "" ], [ "McDonald-Maier", "Klaus", "" ], [ "Ehsan", "Shoaib", "" ] ]
Visual Place Recognition has recently seen a surge of endeavours utilizing different ensemble approaches to improve VPR performance. Ideas like multi-process fusion or switching involve combining different VPR techniques together, utilizing different strategies. One major aspect common to many of these strategies is voting. Voting is widely used in many ensemble methods, so it is potentially a relevant subject to explore in terms of its application and significance for improving VPR performance. This paper looks in detail at and analyzes a variety of voting schemes to evaluate which voting technique is optimal for an ensemble VPR setup. We take inspiration from a variety of voting schemes that exist and are widely employed in other research fields such as politics and sociology. The idea is inspired by the observation that different voting methods result in different outcomes for the same type of data, and each voting scheme is utilized for specific cases in different academic fields. Some of these voting schemes include Condorcet voting, Borda Count and Plurality voting. Voting employed in any aspect requires that a fair system be established that outputs the best and most favourable results, which in our case would involve improving VPR performance. We evaluate some of these voting techniques in a standardized testing of different VPR techniques, using a variety of VPR data sets. We aim to determine whether a single optimal voting scheme exists or, much like in other fields of research, the selection of a voting technique is relative to its application and environment. We also aim to propose a ranking of these different voting methods from best to worst according to our results, as this will allow for better selection of voting schemes.
2303.07599
Kaiqi Zhao
Kaiqi Zhao, Yitao Chen, Ming Zhao
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge Transfer (KT) achieves competitive performance and is widely used for image classification tasks in model compression and transfer learning. Existing KT works transfer the information from a large model ("teacher") to train a small model ("student") by minimizing the difference of their conditionally independent output distributions. However, these works overlook the high-dimensional structural knowledge from the intermediate representations of the teacher, which leads to limited effectiveness, and they are motivated by various heuristic intuitions, which makes it difficult to generalize. This paper proposes a novel Contrastive Knowledge Transfer Framework (CKTF), which enables the transfer of sufficient structural knowledge from the teacher to the student by optimizing multiple contrastive objectives across the intermediate representations between them. Also, CKTF provides a generalized framework for existing KT techniques and increases their performance significantly by deriving them as specific cases of CKTF. The extensive evaluation shows that CKTF consistently outperforms the existing KT works by 0.04% to 11.59% in model compression and by 0.4% to 4.75% in transfer learning on various models and datasets.
[ { "created": "Tue, 14 Mar 2023 02:45:41 GMT", "version": "v1" } ]
2023-03-15
[ [ "Zhao", "Kaiqi", "" ], [ "Chen", "Yitao", "" ], [ "Zhao", "Ming", "" ] ]
Knowledge Transfer (KT) achieves competitive performance and is widely used for image classification tasks in model compression and transfer learning. Existing KT works transfer the information from a large model ("teacher") to train a small model ("student") by minimizing the difference of their conditionally independent output distributions. However, these works overlook the high-dimensional structural knowledge from the intermediate representations of the teacher, which leads to limited effectiveness, and they are motivated by various heuristic intuitions, which makes it difficult to generalize. This paper proposes a novel Contrastive Knowledge Transfer Framework (CKTF), which enables the transfer of sufficient structural knowledge from the teacher to the student by optimizing multiple contrastive objectives across the intermediate representations between them. Also, CKTF provides a generalized framework for existing KT techniques and increases their performance significantly by deriving them as specific cases of CKTF. The extensive evaluation shows that CKTF consistently outperforms the existing KT works by 0.04% to 11.59% in model compression and by 0.4% to 4.75% in transfer learning on various models and datasets.
1206.4680
Mikhail Bilenko
Hoyt Koepke (University of Washington), Mikhail Bilenko (Microsoft Research)
Fast Prediction of New Feature Utility
ICML2012
null
null
null
cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the new feature utility prediction problem: statistically testing whether adding a new feature to the data representation can improve predictive accuracy on a supervised learning task. In many applications, identifying new informative features is the primary pathway for improving performance. However, evaluating every potential feature by re-training the predictor with it can be costly. The paper describes an efficient, learner-independent technique for estimating new feature utility without re-training based on the current predictor's outputs. The method is obtained by deriving a connection between loss reduction potential and the new feature's correlation with the loss gradient of the current predictor. This leads to a simple yet powerful hypothesis testing procedure, for which we prove consistency. Our theoretical analysis is accompanied by empirical evaluation on standard benchmarks and a large-scale industrial dataset.
[ { "created": "Mon, 18 Jun 2012 15:38:18 GMT", "version": "v1" } ]
2012-06-22
[ [ "Koepke", "Hoyt", "", "University of Washington" ], [ "Bilenko", "Mikhail", "", "Microsoft\n Research" ] ]
We study the new feature utility prediction problem: statistically testing whether adding a new feature to the data representation can improve predictive accuracy on a supervised learning task. In many applications, identifying new informative features is the primary pathway for improving performance. However, evaluating every potential feature by re-training the predictor with it can be costly. The paper describes an efficient, learner-independent technique for estimating new feature utility without re-training based on the current predictor's outputs. The method is obtained by deriving a connection between loss reduction potential and the new feature's correlation with the loss gradient of the current predictor. This leads to a simple yet powerful hypothesis testing procedure, for which we prove consistency. Our theoretical analysis is accompanied by empirical evaluation on standard benchmarks and a large-scale industrial dataset.
0803.1207
Hang Dinh
Hang Dinh
Serious Flaws in Korf et al.'s Analysis on Time Complexity of A*
This paper has been withdrawn
null
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
This paper has been withdrawn.
[ { "created": "Sat, 8 Mar 2008 02:47:27 GMT", "version": "v1" }, { "created": "Fri, 13 Mar 2009 15:36:28 GMT", "version": "v2" }, { "created": "Tue, 28 Sep 2010 21:20:08 GMT", "version": "v3" } ]
2010-09-30
[ [ "Dinh", "Hang", "" ] ]
This paper has been withdrawn.
2211.16756
Chengming Xu
Chengming Xu, Chen Liu, Siqian Yang, Yabiao Wang, Shijie Zhang, Lijie Jia, Yanwei Fu
Split-PU: Hardness-aware Training Strategy for Positive-Unlabeled Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Positive-Unlabeled (PU) learning aims to learn a model with rare positive samples and abundant unlabeled samples. Compared with classical binary classification, the task of PU learning is much more challenging due to the existence of many incompletely-annotated data instances. Since only a subset of the most confident positive samples is available and the evidence is not enough to categorize the remaining samples, many of these unlabeled data may also be positive samples. Research on this topic is particularly useful and essential to many real-world tasks that demand very expensive labelling costs. For example, recognition tasks in disease diagnosis, recommendation systems and satellite image recognition may only have a few positive samples that can be annotated by experts. Existing methods mainly overlook the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data and not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly-used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the proper learning strategies for easy and hard data. Considering this, we propose first splitting the unlabeled dataset with an early-stop strategy. The samples that have inconsistent predictions between the temporary and base model are considered hard samples. The model then utilizes a noise-tolerant Jensen-Shannon divergence loss for easy data, and a dual-source consistency regularization for hard data, which includes a cross-consistency between student and base model for low-level features and a self-consistency for high-level features and predictions, respectively.
[ { "created": "Wed, 30 Nov 2022 05:48:31 GMT", "version": "v1" } ]
2022-12-01
[ [ "Xu", "Chengming", "" ], [ "Liu", "Chen", "" ], [ "Yang", "Siqian", "" ], [ "Wang", "Yabiao", "" ], [ "Zhang", "Shijie", "" ], [ "Jia", "Lijie", "" ], [ "Fu", "Yanwei", "" ] ]
Positive-Unlabeled (PU) learning aims to learn a model with rare positive samples and abundant unlabeled samples. Compared with classical binary classification, the task of PU learning is much more challenging due to the existence of many incompletely-annotated data instances. Since only a subset of the most confident positive samples is available and the evidence is not enough to categorize the remaining samples, many of these unlabeled data may also be positive samples. Research on this topic is particularly useful and essential to many real-world tasks that demand very expensive labelling costs. For example, recognition tasks in disease diagnosis, recommendation systems and satellite image recognition may only have a few positive samples that can be annotated by experts. Existing methods mainly overlook the intrinsic hardness of some unlabeled data, which can result in sub-optimal performance as a consequence of fitting the easy noisy data and not sufficiently utilizing the hard data. In this paper, we focus on improving the commonly-used nnPU with a novel training pipeline. We highlight the intrinsic difference in hardness among samples in the dataset and the proper learning strategies for easy and hard data. Considering this, we propose first splitting the unlabeled dataset with an early-stop strategy. The samples that have inconsistent predictions between the temporary and base model are considered hard samples. The model then utilizes a noise-tolerant Jensen-Shannon divergence loss for easy data, and a dual-source consistency regularization for hard data, which includes a cross-consistency between student and base model for low-level features and a self-consistency for high-level features and predictions, respectively.
2109.03890
Vignesh Viswanathan
Gagan Biradar, Vignesh Viswanathan, Yair Zick
Model Explanations via the Axiomatic Causal Lens
Withdrawing because this paper was re-written and resubmitted at arXiv:2310.03131. Please see arXiv:2310.03131 for the most recent version of this work
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining the decisions of black-box models is a central theme in the study of trustworthy ML. Numerous measures have been proposed in the literature; however, none of them take an axiomatic approach to causal explainability. In this work, we propose three explanation measures which aggregate the set of all but-for causes -- a necessary and sufficient explanation -- into feature importance weights. Our first measure is a natural adaptation of Chockler and Halpern's notion of causal responsibility, whereas the other two correspond to existing game-theoretic influence measures. We present an axiomatic treatment for our proposed indices, showing that they can be uniquely characterized by a set of desirable properties. We also extend our approach to derive a new method to compute the Shapley-Shubik and Banzhaf indices for black-box model explanations. Finally, we analyze and compare the necessity and sufficiency of all our proposed explanation measures in practice using the Adult-Income dataset. Thus, our work is the first to formally bridge the gap between model explanations, game-theoretic influence, and causal analysis.
[ { "created": "Wed, 8 Sep 2021 19:33:52 GMT", "version": "v1" }, { "created": "Fri, 17 Sep 2021 14:17:59 GMT", "version": "v2" }, { "created": "Mon, 31 Jan 2022 23:50:48 GMT", "version": "v3" }, { "created": "Mon, 11 Sep 2023 19:33:45 GMT", "version": "v4" }, { "created": "Wed, 27 Sep 2023 20:17:38 GMT", "version": "v5" }, { "created": "Wed, 4 Oct 2023 20:36:32 GMT", "version": "v6" }, { "created": "Fri, 16 Feb 2024 00:16:03 GMT", "version": "v7" } ]
2024-02-20
[ [ "Biradar", "Gagan", "" ], [ "Viswanathan", "Vignesh", "" ], [ "Zick", "Yair", "" ] ]
Explaining the decisions of black-box models is a central theme in the study of trustworthy ML. Numerous measures have been proposed in the literature; however, none of them take an axiomatic approach to causal explainability. In this work, we propose three explanation measures which aggregate the set of all but-for causes -- a necessary and sufficient explanation -- into feature importance weights. Our first measure is a natural adaptation of Chockler and Halpern's notion of causal responsibility, whereas the other two correspond to existing game-theoretic influence measures. We present an axiomatic treatment for our proposed indices, showing that they can be uniquely characterized by a set of desirable properties. We also extend our approach to derive a new method to compute the Shapley-Shubik and Banzhaf indices for black-box model explanations. Finally, we analyze and compare the necessity and sufficiency of all our proposed explanation measures in practice using the Adult-Income dataset. Thus, our work is the first to formally bridge the gap between model explanations, game-theoretic influence, and causal analysis.
1909.03916
Mark Ditsworth
Mark Ditsworth and Justin Ruths
Community Detection via Katz and Eigenvector Centrality
Submitted to Physical Review E
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computational demands of community detection algorithms such as Louvain and spectral optimization can be prohibitive for large networks. Eigenvector centrality and Katz centrality are two network statistics commonly used to describe the relative importance of nodes; and their calculation can be closely approximated on large networks by scalable iterative methods. In this paper, we present and leverage a surprising relationship between Katz centrality and eigenvector centrality to detect communities. Beyond the computational gains, we demonstrate that our approach identifies communities that are as good or better than conventional methods.
[ { "created": "Mon, 9 Sep 2019 15:18:13 GMT", "version": "v1" } ]
2019-09-10
[ [ "Ditsworth", "Mark", "" ], [ "Ruths", "Justin", "" ] ]
The computational demands of community detection algorithms such as Louvain and spectral optimization can be prohibitive for large networks. Eigenvector centrality and Katz centrality are two network statistics commonly used to describe the relative importance of nodes; and their calculation can be closely approximated on large networks by scalable iterative methods. In this paper, we present and leverage a surprising relationship between Katz centrality and eigenvector centrality to detect communities. Beyond the computational gains, we demonstrate that our approach identifies communities that are as good or better than conventional methods.
1503.08577
Gabriel Peyre
Vincent Duval (INRIA Paris-Rocquencourt), Gabriel Peyr\'e (CEREMADE)
Sparse Spikes Deconvolution on Thin Grids
null
null
null
null
cs.IT math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article analyzes the recovery performance of two popular finite-dimensional approximations of the sparse spikes deconvolution problem over Radon measures. We examine in a unified framework both the L1 regularization (often referred to as Lasso or Basis-Pursuit) and the Continuous Basis-Pursuit (C-BP) methods. The Lasso is the de-facto standard for the sparse regularization of inverse problems in imaging. It performs a nearest neighbor interpolation of the spike locations on the sampling grid. The C-BP method, introduced by Ekanadham, Tranchina and Simoncelli, uses a linear interpolation of the locations to perform a better approximation of the infinite-dimensional optimization problem, for positive measures. We show that, in the small noise regime, both methods estimate twice the number of spikes as the number of original spikes. Indeed, we show that they both detect two neighboring spikes around the location of each original spike. These results for deconvolution problems are based on an abstract analysis of the so-called extended support of the solutions of L1-type problems (including as special cases the Lasso and C-BP for deconvolution), which is of independent interest. It precisely characterizes the support of the solutions when the noise is small and the regularization parameter is selected accordingly. We use these findings to analyze for the first time the support instability of compressed sensing recovery when the number of measurements is below the critical limit (well documented in the literature) where the support is provably stable.
[ { "created": "Mon, 30 Mar 2015 07:53:07 GMT", "version": "v1" } ]
2015-03-31
[ [ "Duval", "Vincent", "", "INRIA Paris-Rocquencourt" ], [ "Peyré", "Gabriel", "", "CEREMADE" ] ]
This article analyzes the recovery performance of two popular finite-dimensional approximations of the sparse spikes deconvolution problem over Radon measures. We examine in a unified framework both the L1 regularization (often referred to as Lasso or Basis-Pursuit) and the Continuous Basis-Pursuit (C-BP) methods. The Lasso is the de-facto standard for the sparse regularization of inverse problems in imaging. It performs a nearest neighbor interpolation of the spike locations on the sampling grid. The C-BP method, introduced by Ekanadham, Tranchina and Simoncelli, uses a linear interpolation of the locations to perform a better approximation of the infinite-dimensional optimization problem, for positive measures. We show that, in the small noise regime, both methods estimate twice the number of spikes as the number of original spikes. Indeed, we show that they both detect two neighboring spikes around the location of each original spike. These results for deconvolution problems are based on an abstract analysis of the so-called extended support of the solutions of L1-type problems (including as special cases the Lasso and C-BP for deconvolution), which is of independent interest. It precisely characterizes the support of the solutions when the noise is small and the regularization parameter is selected accordingly. We use these findings to analyze for the first time the support instability of compressed sensing recovery when the number of measurements is below the critical limit (well documented in the literature) where the support is provably stable.
2211.12035
Chin Chun Ooi
Shi Jer Low, Venugopalan, S.G. Raghavan, Harish Gopalan, Jian Cheng Wong, Justin Yeoh, Chin Chun Ooi
FastFlow: AI for Fast Urban Wind Velocity Prediction
null
null
null
null
cs.LG cs.CY physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
Data-driven approaches, including deep learning, have shown great promise as surrogate models across many domains. These extend to various areas in sustainability. An interesting direction for which data-driven methods have not been applied much yet is in the quick quantitative evaluation of urban layouts for planning and design. In particular, urban designs typically involve complex trade-offs between multiple objectives, including limits on urban build-up and/or consideration of urban heat island effect. Hence, it can be beneficial to urban planners to have a fast surrogate model to predict urban characteristics of a hypothetical layout, e.g. pedestrian-level wind velocity, without having to run computationally expensive and time-consuming high-fidelity numerical simulations. This fast surrogate can then be potentially integrated into other design optimization frameworks, including generative models or other gradient-based methods. Here we present the use of CNNs for urban layout characterization that is typically done via high-fidelity numerical simulation. We further apply this model towards a first demonstration of its utility for data-driven pedestrian-level wind velocity prediction. The data set in this work comprises results from high-fidelity numerical simulations of wind velocities for a diverse set of realistic urban layouts, based on randomized samples from a real-world, highly built-up urban city. We then provide prediction results obtained from the trained CNN, demonstrating test errors of under 0.1 m/s for previously unseen urban layouts. We further illustrate how this can be useful for purposes such as rapid evaluation of pedestrian wind velocity for a potential new layout. It is hoped that this data set will further accelerate research in data-driven urban AI, even as our baseline model facilitates quantitative comparison to future methods.
[ { "created": "Tue, 22 Nov 2022 06:13:48 GMT", "version": "v1" } ]
2022-11-23
[ [ "Low", "Shi Jer", "" ], [ "Venugopalan", "", "" ], [ "Raghavan", "S. G.", "" ], [ "Gopalan", "Harish", "" ], [ "Wong", "Jian Cheng", "" ], [ "Yeoh", "Justin", "" ], [ "Ooi", "Chin Chun", "" ] ]
Data-driven approaches, including deep learning, have shown great promise as surrogate models across many domains. These extend to various areas in sustainability. An interesting direction for which data-driven methods have not been applied much yet is in the quick quantitative evaluation of urban layouts for planning and design. In particular, urban designs typically involve complex trade-offs between multiple objectives, including limits on urban build-up and/or consideration of urban heat island effect. Hence, it can be beneficial to urban planners to have a fast surrogate model to predict urban characteristics of a hypothetical layout, e.g. pedestrian-level wind velocity, without having to run computationally expensive and time-consuming high-fidelity numerical simulations. This fast surrogate can then be potentially integrated into other design optimization frameworks, including generative models or other gradient-based methods. Here we present the use of CNNs for urban layout characterization that is typically done via high-fidelity numerical simulation. We further apply this model towards a first demonstration of its utility for data-driven pedestrian-level wind velocity prediction. The data set in this work comprises results from high-fidelity numerical simulations of wind velocities for a diverse set of realistic urban layouts, based on randomized samples from a real-world, highly built-up urban city. We then provide prediction results obtained from the trained CNN, demonstrating test errors of under 0.1 m/s for previously unseen urban layouts. We further illustrate how this can be useful for purposes such as rapid evaluation of pedestrian wind velocity for a potential new layout. It is hoped that this data set will further accelerate research in data-driven urban AI, even as our baseline model facilitates quantitative comparison to future methods.
2103.00464
Omar Sharif
Eftekhar Hossain, Omar Sharif, Mohammed Moshiul Hoque
NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner
Winner LT-EDI workshop EACL-2021, 7 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, only a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. To serve this purpose, we propose three distinct models to identify hope speech in English, Tamil and Malayalam. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted $f_1$-score of $0.93$, $0.60$ and $0.85$ respectively for English, Tamil and Malayalam. Our team has achieved $1^{st}$, $2^{nd}$ and $1^{st}$ rank in these three tasks respectively.
[ { "created": "Sun, 28 Feb 2021 11:30:52 GMT", "version": "v1" } ]
2021-03-02
[ [ "Hossain", "Eftekhar", "" ], [ "Sharif", "Omar", "" ], [ "Hoque", "Mohammed Moshiul", "" ] ]
In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive contents from online platforms. Nevertheless, only a limited number of studies have been carried out to identify positive, encouraging and supportive contents. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. To serve this purpose, we propose three distinct models to identify hope speech in English, Tamil and Malayalam. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted $f_1$-score of $0.93$, $0.60$ and $0.85$ respectively for English, Tamil and Malayalam. Our team has achieved $1^{st}$, $2^{nd}$ and $1^{st}$ rank in these three tasks respectively.
2306.09021
Yizhou Chen
Yizhou Chen, Yushan Han, Jingyu Chen, Joseph Teran
Position-Based Nonlinear Gauss-Seidel for Quasistatic Hyperelasticity
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Position-based dynamics (PBD) is a powerful technique for simulating a variety of materials. Its primary strength is its robustness when run with a limited computational budget. We develop a novel approach to address problems with PBD for quasistatic hyperelastic materials. Even though PBD is based on the projection of static constraints, it is best suited for dynamic simulations. This is particularly relevant since the efficient creation of large data sets of plausible, but not necessarily accurate, elastic equilibria is of increasing importance with the emergence of quasistatic neural networks. Furthermore, PBD projects one constraint at a time. We show that ignoring the effects of neighboring constraints limits its convergence and stability properties. Recent works have shown that PBD can be related to the Gauss-Seidel approximation of a Lagrange multiplier formulation of backward Euler time stepping, where each constraint is solved/projected independently of the others in an iterative fashion. We show that a position-based, rather than constraint-based, nonlinear Gauss-Seidel approach solves these problems. Our approach retains the essential PBD feature of stable behavior with constrained computational budgets, but also allows for convergent behavior with expanded budgets. We demonstrate the efficacy of our method on a variety of representative hyperelastic problems and show that both successive over-relaxation (SOR) and Chebyshev acceleration can be easily applied.
[ { "created": "Thu, 15 Jun 2023 10:28:12 GMT", "version": "v1" } ]
2023-06-16
[ [ "Chen", "Yizhou", "" ], [ "Han", "Yushan", "" ], [ "Chen", "Jingyu", "" ], [ "Teran", "Joseph", "" ] ]
Position-based dynamics (PBD) is a powerful technique for simulating a variety of materials. Its primary strength is its robustness when run with a limited computational budget. We develop a novel approach to address problems with PBD for quasistatic hyperelastic materials. Even though PBD is based on the projection of static constraints, it is best suited for dynamic simulations. This is particularly relevant since the efficient creation of large data sets of plausible, but not necessarily accurate, elastic equilibria is of increasing importance with the emergence of quasistatic neural networks. Furthermore, PBD projects one constraint at a time. We show that ignoring the effects of neighboring constraints limits its convergence and stability properties. Recent works have shown that PBD can be related to the Gauss-Seidel approximation of a Lagrange multiplier formulation of backward Euler time stepping, where each constraint is solved/projected independently of the others in an iterative fashion. We show that a position-based, rather than constraint-based, nonlinear Gauss-Seidel approach solves these problems. Our approach retains the essential PBD feature of stable behavior with constrained computational budgets, but also allows for convergent behavior with expanded budgets. We demonstrate the efficacy of our method on a variety of representative hyperelastic problems and show that both successive over-relaxation (SOR) and Chebyshev acceleration can be easily applied.
2306.10620
Patrick Kuckertz
Patrick Kuckertz, Jan G\"opfert, Oliver Karras, David Neuroth, Julian Sch\"onau, Rodrigo Pueblas, Stephan Ferenz, Felix Engel, Noah Pflugradt, Jann M. Weinand, Astrid Nie{\ss}e, S\"oren Auer, Detlef Stolten
A Metadata-Based Ecosystem to Improve the FAIRness of Research Software
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
The reuse of research software is central to research efficiency and academic exchange. The application of software enables researchers with varied backgrounds to reproduce, validate, and expand upon study findings. Furthermore, the analysis of open source code aids in the comprehension, comparison, and integration of approaches. Often, however, no further use occurs because relevant software cannot be found or is incompatible with existing research processes. This results in repetitive software development, which impedes the advancement of individual researchers and entire research communities. In this article, the DataDesc ecosystem is presented, an approach to describing data models of software interfaces with detailed and machine-actionable metadata. In addition to a specialized metadata schema, an exchange format and support tools for easy collection and the automated publishing of software documentation are introduced. This approach practically increases the FAIRness, i.e., findability, accessibility, interoperability, and so the reusability of research software, as well as effectively promotes its impact on research.
[ { "created": "Sun, 18 Jun 2023 19:01:08 GMT", "version": "v1" } ]
2023-06-21
[ [ "Kuckertz", "Patrick", "" ], [ "Göpfert", "Jan", "" ], [ "Karras", "Oliver", "" ], [ "Neuroth", "David", "" ], [ "Schönau", "Julian", "" ], [ "Pueblas", "Rodrigo", "" ], [ "Ferenz", "Stephan", "" ], [ "Engel", "Felix", "" ], [ "Pflugradt", "Noah", "" ], [ "Weinand", "Jann M.", "" ], [ "Nieße", "Astrid", "" ], [ "Auer", "Sören", "" ], [ "Stolten", "Detlef", "" ] ]
The reuse of research software is central to research efficiency and academic exchange. The application of software enables researchers with varied backgrounds to reproduce, validate, and expand upon study findings. Furthermore, the analysis of open source code aids in the comprehension, comparison, and integration of approaches. Often, however, no further use occurs because relevant software cannot be found or is incompatible with existing research processes. This results in repetitive software development, which impedes the advancement of individual researchers and entire research communities. In this article, the DataDesc ecosystem is presented, an approach to describing data models of software interfaces with detailed and machine-actionable metadata. In addition to a specialized metadata schema, an exchange format and support tools for easy collection and the automated publishing of software documentation are introduced. This approach practically increases the FAIRness, i.e., findability, accessibility, interoperability, and so the reusability of research software, as well as effectively promotes its impact on research.
1506.07866
Gil Ben-Artzi
Gil Ben-Artzi, Yoni Kasten, Shmuel Peleg, Michael Werman
Camera Calibration from Dynamic Silhouettes Using Motion Barcodes
Update metadata
Proc. CVPR'16, Las Vegas, June 2016, pp. 4095-4103
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing the epipolar geometry between cameras with very different viewpoints is often problematic as matching points are hard to find. In these cases, it has been proposed to use information from dynamic objects in the scene for suggesting point and line correspondences. We propose a speed up of about two orders of magnitude, as well as an increase in robustness and accuracy, to methods computing epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: motion barcode for lines. Motion barcode is a binary temporal sequence for lines, indicating for each frame the existence of at least one foreground pixel on that line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be limited only to lines having similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.
[ { "created": "Thu, 25 Jun 2015 19:37:24 GMT", "version": "v1" }, { "created": "Sat, 7 Nov 2015 16:44:20 GMT", "version": "v2" }, { "created": "Tue, 24 Nov 2015 14:19:45 GMT", "version": "v3" }, { "created": "Sun, 8 Jan 2017 00:45:49 GMT", "version": "v4" } ]
2017-01-10
[ [ "Ben-Artzi", "Gil", "" ], [ "Kasten", "Yoni", "" ], [ "Peleg", "Shmuel", "" ], [ "Werman", "Michael", "" ] ]
Computing the epipolar geometry between cameras with very different viewpoints is often problematic as matching points are hard to find. In these cases, it has been proposed to use information from dynamic objects in the scene for suggesting point and line correspondences. We propose a speed up of about two orders of magnitude, as well as an increase in robustness and accuracy, to methods computing epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: motion barcode for lines. Motion barcode is a binary temporal sequence for lines, indicating for each frame the existence of at least one foreground pixel on that line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be limited only to lines having similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.
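The motion barcode described in this abstract is simple to sketch: for a candidate epipolar line, record per frame whether any foreground (moving-silhouette) pixel lies on the line, giving a binary temporal sequence. The code below is an illustrative reconstruction of that idea, not the authors' implementation; the similarity measure (normalized correlation) is an assumption chosen for the sketch.

```python
import numpy as np

def motion_barcode(foreground_masks, line_pixels):
    """Binary sequence: 1 if any foreground pixel lies on the line in a frame.

    foreground_masks: (T, H, W) boolean array of per-frame silhouette masks.
    line_pixels: list of (row, col) pixel coordinates on the candidate line.
    """
    rows = np.array([p[0] for p in line_pixels])
    cols = np.array([p[1] for p in line_pixels])
    return foreground_masks[:, rows, cols].any(axis=1).astype(np.uint8)

def barcode_similarity(b1, b2):
    """Normalized correlation between two binary barcodes (assumed measure)."""
    b1 = b1.astype(float) - b1.mean()
    b2 = b2.astype(float) - b2.mean()
    denom = np.linalg.norm(b1) * np.linalg.norm(b2)
    return float(b1 @ b2 / denom) if denom > 0 else 0.0
```

Candidate epipolar line pairs with dissimilar barcodes can then be pruned before any geometric verification, which is where the claimed two-orders-of-magnitude speed-up comes from.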
2208.10273
Tie Luo
Ashish Gupta, Tie Luo, Mao V. Ngo, Sajal K. Das
Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning
European Symposium on Research in Computer Security (ESORICS) 2022
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning offers a framework for training a machine learning model in a distributed fashion while preserving privacy of the participants. As the server cannot govern the clients' actions, nefarious clients may attack the global model by sending malicious local gradients. In the meantime, there could also be unreliable clients who are benign but each has a portion of low-quality training data (e.g., blurred or low-resolution images) and thus may appear similar to malicious clients. Therefore, a defense mechanism will need to perform a three-fold differentiation, which is much more challenging than the conventional (two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that addresses this challenge in federated learning using a long-short history of gradients, and treats the detected malicious and unreliable clients differently. Moreover, we can also distinguish between targeted and untargeted attacks among malicious clients, unlike most prior works which only consider one type of attack. Specifically, we take into account sign-flipping, additive-noise, label-flipping, and multi-label-flipping attacks, under a non-IID setting. We evaluate MUD-HoG with six state-of-the-art methods on two datasets. The results show that MUD-HoG outperforms all of them in terms of accuracy as well as precision and recall, in the presence of a mixture of multiple (four) types of attackers as well as unreliable clients. Moreover, unlike most prior works which can only tolerate a low population of harmful users, MUD-HoG can work with and successfully detect a wide range of malicious and unreliable clients - up to 47.5% and 10%, respectively, of the total population. Our code is open-sourced at https://github.com/LabSAINT/MUD-HoG_Federated_Learning.
[ { "created": "Sun, 14 Aug 2022 04:54:28 GMT", "version": "v1" } ]
2022-08-23
[ [ "Gupta", "Ashish", "" ], [ "Luo", "Tie", "" ], [ "Ngo", "Mao V.", "" ], [ "Das", "Sajal K.", "" ] ]
Federated learning offers a framework for training a machine learning model in a distributed fashion while preserving privacy of the participants. As the server cannot govern the clients' actions, nefarious clients may attack the global model by sending malicious local gradients. In the meantime, there could also be unreliable clients who are benign but each has a portion of low-quality training data (e.g., blurred or low-resolution images) and thus may appear similar to malicious clients. Therefore, a defense mechanism will need to perform a three-fold differentiation, which is much more challenging than the conventional (two-fold) case. This paper introduces MUD-HoG, a novel defense algorithm that addresses this challenge in federated learning using a long-short history of gradients, and treats the detected malicious and unreliable clients differently. Moreover, we can also distinguish between targeted and untargeted attacks among malicious clients, unlike most prior works which only consider one type of attack. Specifically, we take into account sign-flipping, additive-noise, label-flipping, and multi-label-flipping attacks, under a non-IID setting. We evaluate MUD-HoG with six state-of-the-art methods on two datasets. The results show that MUD-HoG outperforms all of them in terms of accuracy as well as precision and recall, in the presence of a mixture of multiple (four) types of attackers as well as unreliable clients. Moreover, unlike most prior works which can only tolerate a low population of harmful users, MUD-HoG can work with and successfully detect a wide range of malicious and unreliable clients - up to 47.5% and 10%, respectively, of the total population. Our code is open-sourced at https://github.com/LabSAINT/MUD-HoG_Federated_Learning.
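The gradient-history idea in this abstract can be sketched as follows. The function name, the use of cosine similarity against a mean reference direction, and the zero threshold are all illustrative assumptions, not MUD-HoG's actual algorithm: the point is only that a client whose accumulated (long-term) gradient direction opposes the majority, as under a sign-flipping attack, stands out.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): flag clients whose
# long-term accumulated gradient direction opposes the majority direction.

def flag_suspicious(long_histories, threshold=0.0):
    """long_histories: dict client_id -> accumulated gradient vector."""
    ref = np.mean(list(long_histories.values()), axis=0)  # majority direction
    flags = {}
    for cid, g in long_histories.items():
        cos = g @ ref / (np.linalg.norm(g) * np.linalg.norm(ref) + 1e-12)
        flags[cid] = bool(cos < threshold)  # e.g. a sign-flipping attacker
    return flags
```

Separating "unreliable" from "malicious" clients, as MUD-HoG does, would additionally need the short-term history and finer-grained measures than this single cosine test.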
1910.00531
Nikos Salamanos
Nikos Salamanos, Michael J. Jensen, Xinlei He, Yang Chen, Michael Sirivianos
On the Influence of Twitter Trolls during the 2016 US Presidential Election
With this version, we are correcting an error in the Acknowledgments regarding the research funding that supports this work. The correct one is the European Union's Horizon 2020 Research and Innovation program under the Cybersecurity CONCORDIA project (Grant Agreement No. 830927)
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called "troll" accounts were able to manipulate public opinion is still in question. Here we aim to quantify the influence of troll accounts and the impact they had on Twitter by analyzing 152.5 million tweets from 9.9 million users, including 822 troll accounts. The data, collected during the US election campaign, contain original troll tweets before they were deleted by Twitter. From these data, we constructed a very large interaction graph: a directed graph of 9.3 million nodes and 169.9 million edges. Recently, Twitter released datasets on the misinformation campaigns of 8,275 state-sponsored accounts linked to Russia, Iran and Venezuela as part of the investigation on the foreign interference in the 2016 US election. These data serve as a ground-truth identifier of troll users in our dataset. Using graph analysis techniques, we qualify the diffusion cascades of web and media context that have been shared by the troll accounts. We present strong evidence that authentic users were the source of the viral cascades. Although the trolls were participating in the viral cascades, they did not have a leading role in them and only four troll accounts were truly influential.
[ { "created": "Tue, 1 Oct 2019 16:32:59 GMT", "version": "v1" }, { "created": "Thu, 3 Oct 2019 19:43:53 GMT", "version": "v2" } ]
2019-10-07
[ [ "Salamanos", "Nikos", "" ], [ "Jensen", "Michael J.", "" ], [ "He", "Xinlei", "" ], [ "Chen", "Yang", "" ], [ "Sirivianos", "Michael", "" ] ]
It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called "troll" accounts were able to manipulate public opinion is still in question. Here we aim to quantify the influence of troll accounts and the impact they had on Twitter by analyzing 152.5 million tweets from 9.9 million users, including 822 troll accounts. The data, collected during the US election campaign, contain original troll tweets before they were deleted by Twitter. From these data, we constructed a very large interaction graph: a directed graph of 9.3 million nodes and 169.9 million edges. Recently, Twitter released datasets on the misinformation campaigns of 8,275 state-sponsored accounts linked to Russia, Iran and Venezuela as part of the investigation on the foreign interference in the 2016 US election. These data serve as a ground-truth identifier of troll users in our dataset. Using graph analysis techniques, we qualify the diffusion cascades of web and media context that have been shared by the troll accounts. We present strong evidence that authentic users were the source of the viral cascades. Although the trolls were participating in the viral cascades, they did not have a leading role in them and only four troll accounts were truly influential.
2007.13233
Najla Al-Taleb
Najla Al-Taleb, Nazar Abbas Saqib, Atta-ur-Rahman, Sujata Dash
Cyber Threat Intelligence for Secure Smart City
null
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The smart city improves the quality of life for citizens by implementing information and communication technology (ICT) such as the Internet of Things (IoT). Nevertheless, the smart city is a critical environment whose network and data need to be secured from intrusions and attacks. This work proposes a hybrid deep learning (DL) model for cyber threat intelligence (CTI) that improves threat classification performance based on a convolutional neural network (CNN) and a quasi-recurrent neural network (QRNN). We use the QRNN to provide a real-time threat classification model. The evaluation results of the proposed model compared to the state-of-the-art models show that the proposed model outperformed the other models. Therefore, it will help in classifying smart city threats in a reasonable time.
[ { "created": "Sun, 26 Jul 2020 22:39:33 GMT", "version": "v1" } ]
2020-07-28
[ [ "Al-Taleb", "Najla", "" ], [ "Saqib", "Nazar Abbas", "" ], [ "Atta-ur-Rahman", "", "" ], [ "Dash", "Sujata", "" ] ]
The smart city improves the quality of life for citizens by implementing information and communication technology (ICT) such as the Internet of Things (IoT). Nevertheless, the smart city is a critical environment whose network and data need to be secured from intrusions and attacks. This work proposes a hybrid deep learning (DL) model for cyber threat intelligence (CTI) that improves threat classification performance based on a convolutional neural network (CNN) and a quasi-recurrent neural network (QRNN). We use the QRNN to provide a real-time threat classification model. The evaluation results of the proposed model compared to the state-of-the-art models show that the proposed model outperformed the other models. Therefore, it will help in classifying smart city threats in a reasonable time.
1904.02179
Alexandra Porter
Alexandra Porter and Mary Wootters
Embedded Index Coding
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by applications in distributed storage and distributed computation, we introduce embedded index coding (EIC). EIC is a type of distributed index coding in which nodes in a distributed system act as both senders and receivers of information. We show how embedded index coding is related to index coding in general, and give characterizations and bounds on the communication costs of optimal embedded index codes. We also define task-based EIC, in which each sending node encodes and sends data blocks independently of the other nodes. Task-based EIC is more computationally tractable and has advantages in applications such as distributed storage, in which senders may complete their broadcasts at different times. Finally, we give heuristic algorithms for approximating optimal embedded index codes, and demonstrate empirically that these algorithms perform well.
[ { "created": "Wed, 3 Apr 2019 18:03:21 GMT", "version": "v1" }, { "created": "Wed, 24 Apr 2019 19:24:14 GMT", "version": "v2" }, { "created": "Wed, 30 Oct 2019 20:15:57 GMT", "version": "v3" } ]
2019-11-01
[ [ "Porter", "Alexandra", "" ], [ "Wootters", "Mary", "" ] ]
Motivated by applications in distributed storage and distributed computation, we introduce embedded index coding (EIC). EIC is a type of distributed index coding in which nodes in a distributed system act as both senders and receivers of information. We show how embedded index coding is related to index coding in general, and give characterizations and bounds on the communication costs of optimal embedded index codes. We also define task-based EIC, in which each sending node encodes and sends data blocks independently of the other nodes. Task-based EIC is more computationally tractable and has advantages in applications such as distributed storage, in which senders may complete their broadcasts at different times. Finally, we give heuristic algorithms for approximating optimal embedded index codes, and demonstrate empirically that these algorithms perform well.
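The core index-coding primitive that embedded index coding generalizes can be shown in a few lines. This is a textbook toy example, not the paper's construction: two nodes each hold one block and want the other's, so broadcasting the XOR of the two blocks serves both receivers with a single transmission, each using its own block as side information.

```python
# Toy index-coding illustration (not the paper's EIC scheme): one XORed
# broadcast lets each receiver recover its missing block from side information.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Node 1 holds block A and wants B; node 2 holds B and wants A.
A, B = b"\x01\x02", b"\x0f\x0e"
coded = xor_blocks(A, B)                   # single broadcast: A XOR B
recovered_B_at_node1 = xor_blocks(coded, A)  # node 1 cancels A
recovered_A_at_node2 = xor_blocks(coded, B)  # node 2 cancels B
```

In the embedded setting of the paper, the senders are themselves receivers, so such coded broadcasts must be scheduled among the nodes rather than issued by a central server.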
1912.12957
EPTCS
Claudia Schon, Sophie Siebert, Frieder Stolzenburg
Using ConceptNet to Teach Common Sense to an Automated Theorem Prover
In Proceedings ARCADE 2019, arXiv:1912.11786
EPTCS 311, 2019, pp. 19-24
10.4204/EPTCS.311.3
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The CoRg system is a system to solve commonsense reasoning problems. The core of the CoRg system is the automated theorem prover Hyper that is fed with large amounts of background knowledge. This background knowledge plays a crucial role in solving commonsense reasoning problems. In this paper we present different ways to use knowledge graphs as background knowledge and discuss challenges that arise.
[ { "created": "Mon, 30 Dec 2019 15:13:53 GMT", "version": "v1" } ]
2020-01-01
[ [ "Schon", "Claudia", "" ], [ "Siebert", "Sophie", "" ], [ "Stolzenburg", "Frieder", "" ] ]
The CoRg system is a system to solve commonsense reasoning problems. The core of the CoRg system is the automated theorem prover Hyper that is fed with large amounts of background knowledge. This background knowledge plays a crucial role in solving commonsense reasoning problems. In this paper we present different ways to use knowledge graphs as background knowledge and discuss challenges that arise.
1802.03006
Lars Buesing
Lars Buesing, Theophane Weber, Sebastien Racaniere, S. M. Ali Eslami, Danilo Rezende, David P. Reichert, Fabio Viola, Frederic Besse, Karol Gregor, Demis Hassabis, Daan Wierstra
Learning and Querying Fast Generative Models for Reinforcement Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed generative models that learn and operate on compact state representations, so-called state-space models, substantially reduce the computational costs for predicting outcomes of sequences of actions. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment from raw pixels. The computational speed-up of state-space models while maintaining high accuracy makes their application in RL feasible: We demonstrate that agents which query these models for decision making outperform strong model-free baselines on the game MSPACMAN, demonstrating the potential of using learned environment models for planning.
[ { "created": "Thu, 8 Feb 2018 18:54:44 GMT", "version": "v1" } ]
2018-02-09
[ [ "Buesing", "Lars", "" ], [ "Weber", "Theophane", "" ], [ "Racaniere", "Sebastien", "" ], [ "Eslami", "S. M. Ali", "" ], [ "Rezende", "Danilo", "" ], [ "Reichert", "David P.", "" ], [ "Viola", "Fabio", "" ], [ "Besse", "Frederic", "" ], [ "Gregor", "Karol", "" ], [ "Hassabis", "Demis", "" ], [ "Wierstra", "Daan", "" ] ]
A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed generative models that learn and operate on compact state representations, so-called state-space models, substantially reduce the computational costs for predicting outcomes of sequences of actions. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment from raw pixels. The computational speed-up of state-space models while maintaining high accuracy makes their application in RL feasible: We demonstrate that agents which query these models for decision making outperform strong model-free baselines on the game MSPACMAN, demonstrating the potential of using learned environment models for planning.
1612.00130
Mengfan Zheng
Mengfan Zheng, Meixia Tao, Wen Chen and Cong Ling
Secure Polar Coding for the Two-Way Wiretap Channel
28 pages, 7 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of polar coding for secure communications over the two-way wiretap channel, where two legitimate users communicate with each other simultaneously while a passive eavesdropper overhears a combination of their exchanged signals. The legitimate users wish to design a cooperative jamming code such that the interference between their codewords can jam the eavesdropper. In this paper, we design a polar coded cooperative jamming scheme that achieves the whole secrecy rate region of the general two-way wiretap channel under the strong secrecy criterion. The chaining method is used to make proper alignment of polar indices. The randomness required to be shared between two legitimate users is treated as a limited resource and we show that its rate can be made negligible by increasing the blocklength and the number of chained blocks. For the special case when the eavesdropper channel is degraded with respect to the legitimate ones, a simplified scheme is proposed which can simultaneously ensure reliability and weak secrecy within a single transmission block. An example of the binary erasure channel case is given to demonstrate the performance of our scheme.
[ { "created": "Thu, 1 Dec 2016 03:32:25 GMT", "version": "v1" }, { "created": "Sat, 13 May 2017 04:49:35 GMT", "version": "v2" }, { "created": "Thu, 22 Jun 2017 07:55:48 GMT", "version": "v3" } ]
2017-06-23
[ [ "Zheng", "Mengfan", "" ], [ "Tao", "Meixia", "" ], [ "Chen", "Wen", "" ], [ "Ling", "Cong", "" ] ]
We consider the problem of polar coding for secure communications over the two-way wiretap channel, where two legitimate users communicate with each other simultaneously while a passive eavesdropper overhears a combination of their exchanged signals. The legitimate users wish to design a cooperative jamming code such that the interference between their codewords can jam the eavesdropper. In this paper, we design a polar coded cooperative jamming scheme that achieves the whole secrecy rate region of the general two-way wiretap channel under the strong secrecy criterion. The chaining method is used to make proper alignment of polar indices. The randomness required to be shared between two legitimate users is treated as a limited resource and we show that its rate can be made negligible by increasing the blocklength and the number of chained blocks. For the special case when the eavesdropper channel is degraded with respect to the legitimate ones, a simplified scheme is proposed which can simultaneously ensure reliability and weak secrecy within a single transmission block. An example of the binary erasure channel case is given to demonstrate the performance of our scheme.
2106.07140
Jihyeong Yoo
Jihyeong Yoo and Qifeng Chen
SinIR: Efficient General Image Manipulation with Single Image Reconstruction
Accepted to ICML 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose SinIR, an efficient reconstruction-based framework trained on a single natural image for general image manipulation, including super-resolution, editing, harmonization, paint-to-image, photo-realistic style transfer, and artistic style transfer. We train our model on a single image with cascaded multi-scale learning, where each network at each scale is responsible for image reconstruction. This reconstruction objective greatly reduces the complexity and running time of training, compared to the GAN objective. However, the reconstruction objective also degrades the output quality. Therefore, to solve this problem, we further utilize simple random pixel shuffling, which also gives control over manipulation, inspired by the Denoising Autoencoder. With quantitative evaluation, we show that SinIR has competitive performance on various image manipulation tasks. Moreover, with a much simpler training objective (i.e., reconstruction), SinIR is trained 33.5 times faster than SinGAN (for 500 x 500 images), which solves similar tasks. Our code is publicly available at github.com/YooJiHyeong/SinIR.
[ { "created": "Mon, 14 Jun 2021 02:41:26 GMT", "version": "v1" } ]
2021-06-15
[ [ "Yoo", "Jihyeong", "" ], [ "Chen", "Qifeng", "" ] ]
We propose SinIR, an efficient reconstruction-based framework trained on a single natural image for general image manipulation, including super-resolution, editing, harmonization, paint-to-image, photo-realistic style transfer, and artistic style transfer. We train our model on a single image with cascaded multi-scale learning, where each network at each scale is responsible for image reconstruction. This reconstruction objective greatly reduces the complexity and running time of training, compared to the GAN objective. However, the reconstruction objective also degrades the output quality. Therefore, to solve this problem, we further utilize simple random pixel shuffling, which also gives control over manipulation, inspired by the Denoising Autoencoder. With quantitative evaluation, we show that SinIR has competitive performance on various image manipulation tasks. Moreover, with a much simpler training objective (i.e., reconstruction), SinIR is trained 33.5 times faster than SinGAN (for 500 x 500 images), which solves similar tasks. Our code is publicly available at github.com/YooJiHyeong/SinIR.
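The random pixel shuffling mentioned in this abstract can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name, the per-image shuffle fraction, and the exact permutation scheme are assumptions. The idea, as in a Denoising Autoencoder, is that the network never reconstructs from the exact input, which regularizes training on a single image.

```python
import numpy as np

# Illustrative sketch (not SinIR's implementation) of random pixel shuffling:
# permute a fraction of pixel positions within an (H, W, C) image.

def shuffle_pixels(img, fraction, rng):
    """Return a copy of `img` with `fraction` of its pixels randomly permuted."""
    out = img.copy()
    h, w = img.shape[:2]
    n = int(fraction * h * w)
    idx = rng.choice(h * w, size=n, replace=False)  # positions to shuffle
    perm = rng.permutation(idx)                     # where their values come from
    flat = out.reshape(h * w, -1)
    flat[idx] = flat[perm]                          # RHS is copied before writing
    return out
```

The shuffle only moves values between positions, so the pixel multiset is preserved; a fraction of 0 leaves the image untouched.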
2103.04768
Liya Wang
Liya Wang, Panta Lucic, Keith Campbell, and Craig Wanke
Helicopter Track Identification with Autoencoder
Draft for ICNS conference. arXiv admin note: text overlap with arXiv:2011.01464
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Computing power, big data, and advancement of algorithms have led to a renewed interest in artificial intelligence (AI), especially in deep learning (DL). The success of DL largely lies in data representation, because different representations can indicate to a degree the different explanatory factors of variation behind the data. In the last few years, the most successful story in DL has been supervised learning. However, to apply supervised learning, one challenge is that data labels are expensive to get, noisy, or only partially available. Considering that we human beings learn in an unsupervised way, self-supervised learning methods have garnered a lot of attention recently. A dominant force in self-supervised learning is the autoencoder, which has multiple uses (e.g., data representation, anomaly detection, denoising). This research explored the application of an autoencoder to learn effective data representations of helicopter flight track data, and then to support helicopter track identification. Our testing results are promising. For example, at Phoenix Deer Valley (DVT) airport, where 70% of recorded flight tracks have missing aircraft types, the autoencoder can help to identify twenty-two times more helicopters than otherwise detectable using rule-based methods; for Grand Canyon West Airport (1G4), the autoencoder can identify thirteen times more helicopters than a current rule-based approach. Our approach can also identify mislabeled aircraft types in the flight track data and find true types for records with pseudo aircraft type labels such as HELO. With improved labelling, studies using these data sets can produce more reliable results.
[ { "created": "Wed, 3 Mar 2021 22:32:39 GMT", "version": "v1" } ]
2021-03-09
[ [ "Wang", "Liya", "" ], [ "Lucic", "Panta", "" ], [ "Campbell", "Keith", "" ], [ "Wanke", "Craig", "" ] ]
Computing power, big data, and advancement of algorithms have led to a renewed interest in artificial intelligence (AI), especially in deep learning (DL). The success of DL largely lies in data representation, because different representations can indicate to a degree the different explanatory factors of variation behind the data. In the last few years, the most successful story in DL has been supervised learning. However, to apply supervised learning, one challenge is that data labels are expensive to get, noisy, or only partially available. Considering that we human beings learn in an unsupervised way, self-supervised learning methods have garnered a lot of attention recently. A dominant force in self-supervised learning is the autoencoder, which has multiple uses (e.g., data representation, anomaly detection, denoising). This research explored the application of an autoencoder to learn effective data representations of helicopter flight track data, and then to support helicopter track identification. Our testing results are promising. For example, at Phoenix Deer Valley (DVT) airport, where 70% of recorded flight tracks have missing aircraft types, the autoencoder can help to identify twenty-two times more helicopters than otherwise detectable using rule-based methods; for Grand Canyon West Airport (1G4), the autoencoder can identify thirteen times more helicopters than a current rule-based approach. Our approach can also identify mislabeled aircraft types in the flight track data and find true types for records with pseudo aircraft type labels such as HELO. With improved labelling, studies using these data sets can produce more reliable results.
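The representation-learning role of the autoencoder described in this abstract can be shown with a minimal sketch. This is not the authors' model: a real flight-track autoencoder would use deeper networks and sequence inputs, whereas the toy below is a linear autoencoder trained by gradient descent to compress synthetic 2-D "track features" (lying near a 1-D line) into a 1-D code.

```python
import numpy as np

# Minimal sketch (assumed architecture, not the paper's): a linear autoencoder
# learns a 1-D code for 2-D data concentrated near a line, by minimizing
# mean-squared reconstruction error with plain gradient descent.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]])  # data on a 1-D line
X += 0.01 * rng.normal(size=X.shape)                    # small noise

W_enc = rng.normal(scale=0.1, size=(2, 1))  # encoder: 2 -> 1
W_dec = rng.normal(scale=0.1, size=(1, 2))  # decoder: 1 -> 2

def loss(Xb):
    return float(np.mean((Xb @ W_enc @ W_dec - Xb) ** 2))

lr = 0.05
initial = loss(X)
for _ in range(500):
    Z = X @ W_enc                       # encode
    E = Z @ W_dec - X                   # reconstruction error
    grad_dec = Z.T @ E / len(X)
    grad_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss(X)
```

After training, the 1-D code `X @ W_enc` is the kind of compact representation that can feed a downstream classifier, e.g. for the track-identification task in the abstract.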