Schema (field: type, length range):
id: string, 9–10
submitter: string, 1–64
authors: string, 4–20.7k
title: string, 4–246
comments: string, 1–523
journal-ref: string, 4–404
doi: string, 11–153
report-no: string, 2–254
categories: string, 5–98
license: string, 9 classes
orig_abstract: string, 14–3.35k
versions: list, 1–60
update_date: string, 10–10
authors_parsed: list, 1–1.35k
abstract: string, 11–3.34k
2304.04389
Wei Hu
Jiacheng Huang and Zequn Sun and Qijin Chen and Xiaozhou Xu and Weijun Ren and Wei Hu
Deep Active Alignment of Knowledge Graph Entities and Schemata
Accepted in the ACM SIGMOD/PODS International Conference on Management of Data (SIGMOD 2023)
null
null
null
cs.DB cs.AI
http://creativecommons.org/licenses/by/4.0/
Knowledge graphs (KGs) store rich facts about the real world. In this paper, we study KG alignment, which aims to find alignments not only between entities but also between relations and classes in different KGs. Alignment at the entity level can cross-fertilize alignment at the schema level. We propose a new KG alignment approach, called DAAKG, based on deep learning and active learning. With deep learning, it learns the embeddings of entities, relations and classes, and jointly aligns them in a semi-supervised manner. With active learning, it estimates how likely it is that an entity, relation or class pair can be inferred, and selects the best batch for human labeling. We design two approximation algorithms for efficient batch selection. Our experiments on benchmark datasets show the superior accuracy and generalization of DAAKG and validate the effectiveness of all its modules.
[ { "created": "Mon, 10 Apr 2023 05:31:24 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2023 23:44:46 GMT", "version": "v2" }, { "created": "Sat, 17 Jun 2023 13:17:38 GMT", "version": "v3" } ]
2023-06-21
[ [ "Huang", "Jiacheng", "" ], [ "Sun", "Zequn", "" ], [ "Chen", "Qijin", "" ], [ "Xu", "Xiaozhou", "" ], [ "Ren", "Weijun", "" ], [ "Hu", "Wei", "" ] ]
Knowledge graphs (KGs) store rich facts about the real world. In this paper, we study KG alignment, which aims to find alignments not only between entities but also between relations and classes in different KGs. Alignment at the entity level can cross-fertilize alignment at the schema level. We propose a new KG alignment approach, called DAAKG, based on deep learning and active learning. With deep learning, it learns the embeddings of entities, relations and classes, and jointly aligns them in a semi-supervised manner. With active learning, it estimates how likely it is that an entity, relation or class pair can be inferred, and selects the best batch for human labeling. We design two approximation algorithms for efficient batch selection. Our experiments on benchmark datasets show the superior accuracy and generalization of DAAKG and validate the effectiveness of all its modules.
1306.0686
Pooria Joulani
Pooria Joulani, Andr\'as Gy\"orgy, Csaba Szepesv\'ari
Online Learning under Delayed Feedback
Extended version of a paper accepted to ICML-2013
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online learning with delayed feedback has received increasing attention recently due to its several applications in distributed, web-based learning problems. In this paper we provide a systematic study of the topic, and analyze the effect of delay on the regret of online learning algorithms. Somewhat surprisingly, it turns out that delay increases the regret in a multiplicative way in adversarial problems, and in an additive way in stochastic problems. We give meta-algorithms that transform, in a black-box fashion, algorithms developed for the non-delayed case into ones that can handle the presence of delays in the feedback loop. Modifications of the well-known UCB algorithm are also developed for the bandit problem with delayed feedback, with the advantage over the meta-algorithms that they can be implemented with lower complexity.
[ { "created": "Tue, 4 Jun 2013 07:39:21 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2013 01:01:04 GMT", "version": "v2" } ]
2015-07-02
[ [ "Joulani", "Pooria", "" ], [ "György", "András", "" ], [ "Szepesvári", "Csaba", "" ] ]
Online learning with delayed feedback has received increasing attention recently due to its several applications in distributed, web-based learning problems. In this paper we provide a systematic study of the topic, and analyze the effect of delay on the regret of online learning algorithms. Somewhat surprisingly, it turns out that delay increases the regret in a multiplicative way in adversarial problems, and in an additive way in stochastic problems. We give meta-algorithms that transform, in a black-box fashion, algorithms developed for the non-delayed case into ones that can handle the presence of delays in the feedback loop. Modifications of the well-known UCB algorithm are also developed for the bandit problem with delayed feedback, with the advantage over the meta-algorithms that they can be implemented with lower complexity.
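The black-box reduction described in this abstract can be pictured as a wrapper that queues each prediction's outcome and releases it to the base learner only once its delay has elapsed. The sketch below is an illustration of that idea under a fixed-delay assumption, not the paper's meta-algorithm; the class names and the follow-the-leader base learner are ours.

```python
from collections import deque


class FollowTheLeader:
    """Toy base learner: picks the action with the smallest observed cumulative loss."""

    def __init__(self, n_actions):
        self.cum = [0.0] * n_actions

    def predict(self):
        return min(range(len(self.cum)), key=lambda a: self.cum[a])

    def update(self, action, loss):
        self.cum[action] += loss


class DelayedFeedbackWrapper:
    """Runs a non-delayed base learner when feedback arrives only after
    a fixed delay: outcomes are buffered and delivered once due."""

    def __init__(self, base_learner, delay):
        self.base = base_learner
        self.delay = delay
        self.pending = deque()  # (round_issued, action, loss)

    def play_round(self, t, loss_fn):
        action = self.base.predict()
        self.pending.append((t, action, loss_fn(action)))
        # Deliver every piece of feedback whose delay has elapsed.
        while self.pending and self.pending[0][0] + self.delay <= t:
            _, a, loss = self.pending.popleft()
            self.base.update(a, loss)
        return action


# Action 0 always loses, action 1 never does; with delay 3 the learner
# keeps playing action 0 until the first losses are finally delivered.
wrapper = DelayedFeedbackWrapper(FollowTheLeader(2), delay=3)
plays = [wrapper.play_round(t, lambda a: 1.0 if a == 0 else 0.0)
         for t in range(12)]
```

The delay shows up exactly as the abstract suggests: the learner wastes extra rounds on the bad action while its feedback is still in transit.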
2208.04842
George Chacko
Akhil Jakatdar and Baqiao Liu and Tandy Warnow and George Chacko
AOC: Assembling Overlapping Communities
This version submitted to Quantitative Science Studies
Quantitative Science Studies (2022)
10.1162/qss_a_00227
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Through discovery of meso-scale structures, community detection methods contribute to the understanding of complex networks. Many community finding methods, however, rely on disjoint clustering techniques, in which node membership is restricted to one community or cluster. This strict requirement limits the ability to inclusively describe communities since some nodes may reasonably be assigned to many communities. We have previously reported Iterative K-core Clustering (IKC), a scalable and modular pipeline that discovers disjoint research communities from the scientific literature. We now present Assembling Overlapping Clusters (AOC), a complementary meta-method for overlapping communities as an option that addresses the disjoint clustering problem. We present findings from the use of AOC on a network of over 13 million nodes that captures recent research in the very rapidly growing field of extracellular vesicles in biology.
[ { "created": "Fri, 5 Aug 2022 20:34:45 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2022 23:41:22 GMT", "version": "v2" }, { "created": "Fri, 26 Aug 2022 10:36:33 GMT", "version": "v3" }, { "created": "Tue, 4 Oct 2022 22:52:05 GMT", "version": "v4" } ]
2022-11-23
[ [ "Jakatdar", "Akhil", "" ], [ "Liu", "Baqiao", "" ], [ "Warnow", "Tandy", "" ], [ "Chacko", "George", "" ] ]
Through discovery of meso-scale structures, community detection methods contribute to the understanding of complex networks. Many community finding methods, however, rely on disjoint clustering techniques, in which node membership is restricted to one community or cluster. This strict requirement limits the ability to inclusively describe communities since some nodes may reasonably be assigned to many communities. We have previously reported Iterative K-core Clustering (IKC), a scalable and modular pipeline that discovers disjoint research communities from the scientific literature. We now present Assembling Overlapping Clusters (AOC), a complementary meta-method for overlapping communities as an option that addresses the disjoint clustering problem. We present findings from the use of AOC on a network of over 13 million nodes that captures recent research in the very rapidly growing field of extracellular vesicles in biology.
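The IKC pipeline this abstract builds on is based on k-core peeling. As background, a minimal k-core routine (a textbook algorithm, not the authors' IKC code; the adjacency representation is our choice) looks like:

```python
def k_core(adj, k):
    """Return the node set of the k-core: the maximal subgraph in which
    every node has degree >= k, found by repeatedly peeling nodes whose
    degree drops below k."""
    # adj: dict mapping node -> set of neighbour nodes
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = [v for v, d in deg.items() if d < k]
    removed = set()
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    return set(adj) - removed


# A triangle {1,2,3} with a pendant node 4: the 2-core drops node 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
core = k_core(adj, 2)
```

Each node and edge is processed a constant number of times, which is why core-based pipelines scale to networks of millions of nodes such as the one in the abstract.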
1402.1783
Jason J Corso
Caiming Xiong, David Johnson, Jason J. Corso
Active Clustering with Model-Based Uncertainty Reduction
14 pages, 8 figures, submitted to TPAMI (second version just fixes a missing reference and format)
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semi-supervised clustering seeks to augment traditional clustering methods by incorporating side information provided via human expertise in order to increase the semantic meaningfulness of the resulting clusters. However, most current methods are \emph{passive} in the sense that the side information is provided beforehand and selected randomly. This may require a large number of constraints, some of which could be redundant, unnecessary, or even detrimental to the clustering results. Thus in order to scale such semi-supervised algorithms to larger problems it is desirable to pursue an \emph{active} clustering method---i.e. an algorithm that maximizes the effectiveness of the available human labor by only requesting human input where it will have the greatest impact. Here, we propose a novel online framework for active semi-supervised spectral clustering that selects pairwise constraints as clustering proceeds, based on the principle of uncertainty reduction. Using a first-order Taylor expansion, we decompose the expected uncertainty reduction problem into a gradient and a step-scale, computed via an application of matrix perturbation theory and cluster-assignment entropy, respectively. The resulting model is used to estimate the uncertainty reduction potential of each sample in the dataset. We then present the human user with pairwise queries with respect to only the best candidate sample. We evaluate our method using three different image datasets (faces, leaves and dogs), a set of common UCI machine learning datasets and a gene dataset. The results validate our decomposition formulation and show that our method is consistently superior to existing state-of-the-art techniques, as well as being robust to noise and to unknown numbers of clusters.
[ { "created": "Fri, 7 Feb 2014 22:13:03 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2014 02:53:32 GMT", "version": "v2" } ]
2014-02-17
[ [ "Xiong", "Caiming", "" ], [ "Johnson", "David", "" ], [ "Corso", "Jason J.", "" ] ]
Semi-supervised clustering seeks to augment traditional clustering methods by incorporating side information provided via human expertise in order to increase the semantic meaningfulness of the resulting clusters. However, most current methods are \emph{passive} in the sense that the side information is provided beforehand and selected randomly. This may require a large number of constraints, some of which could be redundant, unnecessary, or even detrimental to the clustering results. Thus in order to scale such semi-supervised algorithms to larger problems it is desirable to pursue an \emph{active} clustering method---i.e. an algorithm that maximizes the effectiveness of the available human labor by only requesting human input where it will have the greatest impact. Here, we propose a novel online framework for active semi-supervised spectral clustering that selects pairwise constraints as clustering proceeds, based on the principle of uncertainty reduction. Using a first-order Taylor expansion, we decompose the expected uncertainty reduction problem into a gradient and a step-scale, computed via an application of matrix perturbation theory and cluster-assignment entropy, respectively. The resulting model is used to estimate the uncertainty reduction potential of each sample in the dataset. We then present the human user with pairwise queries with respect to only the best candidate sample. We evaluate our method using three different image datasets (faces, leaves and dogs), a set of common UCI machine learning datasets and a gene dataset. The results validate our decomposition formulation and show that our method is consistently superior to existing state-of-the-art techniques, as well as being robust to noise and to unknown numbers of clusters.
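The step-scale term in this abstract is computed from cluster-assignment entropy: a sample whose soft assignment is spread across clusters is maximally uncertain, and one assigned confidently contributes nothing. A minimal sketch of that quantity (the function name and soft-assignment representation are ours, not the paper's code):

```python
import math


def assignment_entropy(probs):
    """Shannon entropy (nats) of one sample's soft cluster-assignment
    distribution; zero-probability clusters contribute nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)


# Uniform over two clusters: maximal uncertainty, entropy ln(2).
h_uniform = assignment_entropy([0.5, 0.5])
# Confident assignment: no uncertainty left to reduce.
h_certain = assignment_entropy([1.0, 0.0])
```

Ranking samples by such an uncertainty score is what lets an active method spend pairwise queries only where human input has the greatest impact.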
2302.05116
Nicolas Bonneel
Bastien Doignies and Nicolas Bonneel and David Coeurjolly and Julie Digne and Lo\"is Paulin and Jean-Claude Iehl and Victor Ostromoukhov
Example-Based Sampling with Diffusion Models
null
null
null
null
cs.GR cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Much effort has been put into developing samplers with specific properties, such as producing blue noise, low-discrepancy, lattice or Poisson disk samples. These samplers can be slow if they rely on optimization processes, may depend on a wide range of numerical methods, and are not always differentiable. The success of recent diffusion models for image generation suggests that these models could be appropriate for learning how to generate point sets from examples. However, their convolutional nature makes these methods impractical for dealing with scattered data such as point sets. We propose a generic way to produce 2-d point sets imitating existing samplers from observed point sets using a diffusion model. We address the problem of convolutional layers by leveraging neighborhood information from an optimal transport matching to a uniform grid, which allows us to benefit from fast convolutions on grids and to support the example-based learning of non-uniform sampling patterns. We demonstrate how the differentiability of our approach can be used to optimize point sets to enforce properties.
[ { "created": "Fri, 10 Feb 2023 08:35:17 GMT", "version": "v1" } ]
2023-02-13
[ [ "Doignies", "Bastien", "" ], [ "Bonneel", "Nicolas", "" ], [ "Coeurjolly", "David", "" ], [ "Digne", "Julie", "" ], [ "Paulin", "Loïs", "" ], [ "Iehl", "Jean-Claude", "" ], [ "Ostromoukhov", "Victor", "" ...
Much effort has been put into developing samplers with specific properties, such as producing blue noise, low-discrepancy, lattice or Poisson disk samples. These samplers can be slow if they rely on optimization processes, may depend on a wide range of numerical methods, and are not always differentiable. The success of recent diffusion models for image generation suggests that these models could be appropriate for learning how to generate point sets from examples. However, their convolutional nature makes these methods impractical for dealing with scattered data such as point sets. We propose a generic way to produce 2-d point sets imitating existing samplers from observed point sets using a diffusion model. We address the problem of convolutional layers by leveraging neighborhood information from an optimal transport matching to a uniform grid, which allows us to benefit from fast convolutions on grids and to support the example-based learning of non-uniform sampling patterns. We demonstrate how the differentiability of our approach can be used to optimize point sets to enforce properties.
2210.12952
Farhan Ahmed
Farhan Ahmed, Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
Ares: A System-Oriented Wargame Framework for Adversarial ML
Presented at the DLS Workshop at S&P 2022
null
null
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an eternal war between defenders, who seek to increase the robustness of ML models against adversarial attacks, and adversaries, who seek to develop better attacks capable of weakening or defeating these defenses. This domain, however, has found little buy-in from ML practitioners, who are neither overtly concerned about these attacks affecting their systems in the real world nor are willing to trade off the accuracy of their models in pursuit of robustness against these attacks. In this paper, we motivate the design and implementation of Ares, an evaluation framework for adversarial ML that allows researchers to explore attacks and defenses in a realistic wargame-like environment. Ares frames the conflict between the attacker and defender as two agents in a reinforcement learning environment with opposing objectives. This allows the introduction of system-level evaluation metrics such as time to failure and evaluation of complex strategies such as moving target defenses. We provide the results of our initial exploration involving a white-box attacker against an adversarially trained defender.
[ { "created": "Mon, 24 Oct 2022 04:55:18 GMT", "version": "v1" } ]
2022-10-25
[ [ "Ahmed", "Farhan", "" ], [ "Vaishnavi", "Pratik", "" ], [ "Eykholt", "Kevin", "" ], [ "Rahmati", "Amir", "" ] ]
Since the discovery of adversarial attacks against machine learning models nearly a decade ago, research on adversarial machine learning has rapidly evolved into an eternal war between defenders, who seek to increase the robustness of ML models against adversarial attacks, and adversaries, who seek to develop better attacks capable of weakening or defeating these defenses. This domain, however, has found little buy-in from ML practitioners, who are neither overtly concerned about these attacks affecting their systems in the real world nor are willing to trade off the accuracy of their models in pursuit of robustness against these attacks. In this paper, we motivate the design and implementation of Ares, an evaluation framework for adversarial ML that allows researchers to explore attacks and defenses in a realistic wargame-like environment. Ares frames the conflict between the attacker and defender as two agents in a reinforcement learning environment with opposing objectives. This allows the introduction of system-level evaluation metrics such as time to failure and evaluation of complex strategies such as moving target defenses. We provide the results of our initial exploration involving a white-box attacker against an adversarially trained defender.
2003.00303
Shengyu Zhang
Shengyu Zhang, Tan Jiang, Qinghao Huang, Ziqi Tan, Zhou Zhao, Siliang Tang, Jin Yu, Hongxia Yang, Yi Yang, and Fei Wu
Grounded and Controllable Image Completion by Incorporating Lexical Semantics
9 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an approach, namely Lexical Semantic Image Completion (LSIC), that may have potential applications in art, design, and heritage conservation, among several others. Existing image completion procedures are highly subjective, as they consider only visual context, which may produce unpredictable results that are plausible but not faithful to grounded knowledge. To permit a grounded and controllable completion process, we advocate generating results faithful to both visual and lexical semantic context, i.e., a description of the holes or blank regions in the image (e.g., a hole description). One major challenge for LSIC comes from modeling and aligning the structure of visual-semantic context and translating across different modalities. We term this process structure completion, which is realized by multi-grained reasoning blocks in our model. Another challenge relates to unimodal biases, which occur when the model generates plausible results without using the textual description. This can happen since the annotated captions for an image are often semantically equivalent in existing datasets, and thus there is only one paired text for a masked image in training. We devise an unsupervised unpaired-creation learning path besides the over-explored paired-reconstruction path, as well as a multi-stage training strategy to mitigate the insufficiency of labeled data. We conduct extensive quantitative and qualitative experiments as well as ablation studies, which reveal the efficacy of our proposed LSIC.
[ { "created": "Sat, 29 Feb 2020 16:54:21 GMT", "version": "v1" } ]
2020-03-03
[ [ "Zhang", "Shengyu", "" ], [ "Jiang", "Tan", "" ], [ "Huang", "Qinghao", "" ], [ "Tan", "Ziqi", "" ], [ "Zhao", "Zhou", "" ], [ "Tang", "Siliang", "" ], [ "Yu", "Jin", "" ], [ "Yang", "Hongxia", ...
In this paper, we present an approach, namely Lexical Semantic Image Completion (LSIC), that may have potential applications in art, design, and heritage conservation, among several others. Existing image completion procedures are highly subjective, as they consider only visual context, which may produce unpredictable results that are plausible but not faithful to grounded knowledge. To permit a grounded and controllable completion process, we advocate generating results faithful to both visual and lexical semantic context, i.e., a description of the holes or blank regions in the image (e.g., a hole description). One major challenge for LSIC comes from modeling and aligning the structure of visual-semantic context and translating across different modalities. We term this process structure completion, which is realized by multi-grained reasoning blocks in our model. Another challenge relates to unimodal biases, which occur when the model generates plausible results without using the textual description. This can happen since the annotated captions for an image are often semantically equivalent in existing datasets, and thus there is only one paired text for a masked image in training. We devise an unsupervised unpaired-creation learning path besides the over-explored paired-reconstruction path, as well as a multi-stage training strategy to mitigate the insufficiency of labeled data. We conduct extensive quantitative and qualitative experiments as well as ablation studies, which reveal the efficacy of our proposed LSIC.
2103.07522
Andrea Marino
Andrea Marino and Ana Silva
K\"{o}nigsberg Sightseeing: Eulerian Walks in Temporal Graphs
null
null
null
null
cs.DM cs.DS
http://creativecommons.org/licenses/by/4.0/
An Eulerian walk (or Eulerian trail) is a walk (resp. trail) that visits every edge of a graph $G$ at least (resp. exactly) once. This notion was first discussed by Leonhard Euler while solving the famous Seven Bridges of K\"{o}nigsberg problem in 1736. What if Euler had to take a bus? In a temporal graph $(G,\lambda)$, with $\lambda: E(G)\to 2^{[\tau]}$, an edge $e\in E(G)$ is available only at the times specified by $\lambda(e)\subseteq [\tau]$, in the same way the connections of the public transportation network of a city or of sightseeing tours are available only at scheduled times. In this scenario, even though several translations of Eulerian trails and walks are possible in temporal terms, only a very particular variation has been exploited in the literature, specifically for infinite dynamic networks (Orlin, 1984). In this paper, we deal with temporal walks, local trails, and trails, respectively referring to edge traversal with no constraints, constrained to not repeating the same edge in a single timestamp, and constrained to never repeating the same edge throughout the entire traversal. We show that, if the edges are always available, then deciding whether $(G,\lambda)$ has a temporal walk or trail is polynomial, while deciding whether it has a local trail is NP-complete even if it has lifetime~2. In contrast, in the general case, solving any of these problems is NP-complete, even under very strict hypotheses.
[ { "created": "Fri, 12 Mar 2021 20:51:14 GMT", "version": "v1" } ]
2021-03-16
[ [ "Marino", "Andrea", "" ], [ "Silva", "Ana", "" ] ]
An Eulerian walk (or Eulerian trail) is a walk (resp. trail) that visits every edge of a graph $G$ at least (resp. exactly) once. This notion was first discussed by Leonhard Euler while solving the famous Seven Bridges of K\"{o}nigsberg problem in 1736. What if Euler had to take a bus? In a temporal graph $(G,\lambda)$, with $\lambda: E(G)\to 2^{[\tau]}$, an edge $e\in E(G)$ is available only at the times specified by $\lambda(e)\subseteq [\tau]$, in the same way the connections of the public transportation network of a city or of sightseeing tours are available only at scheduled times. In this scenario, even though several translations of Eulerian trails and walks are possible in temporal terms, only a very particular variation has been exploited in the literature, specifically for infinite dynamic networks (Orlin, 1984). In this paper, we deal with temporal walks, local trails, and trails, respectively referring to edge traversal with no constraints, constrained to not repeating the same edge in a single timestamp, and constrained to never repeating the same edge throughout the entire traversal. We show that, if the edges are always available, then deciding whether $(G,\lambda)$ has a temporal walk or trail is polynomial, while deciding whether it has a local trail is NP-complete even if it has lifetime~2. In contrast, in the general case, solving any of these problems is NP-complete, even under very strict hypotheses.
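The classical, static condition behind the Seven Bridges problem is easy to check: a connected multigraph has a trail using every edge exactly once iff at most two vertices have odd degree. A small sketch of that check (the temporal variants studied in the paper layer availability constraints on top of this; the function assumes a non-empty edge list):

```python
from collections import defaultdict


def has_eulerian_trail(edges):
    """Static Euler condition: connected (over non-isolated vertices)
    and exactly 0 or 2 odd-degree vertices."""
    deg = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj[u].add(v)
        adj[v].add(u)
    # Depth-first search over vertices that touch at least one edge.
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if seen != set(adj):
        return False
    odd = sum(1 for d in deg.values() if d % 2)
    return odd in (0, 2)


# The seven bridges of Königsberg: all four land masses have odd degree.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
no_trail = has_eulerian_trail(bridges)
# A simple path a-b-c has exactly two odd-degree endpoints.
path_ok = has_eulerian_trail([("a", "b"), ("b", "c")])
```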
2010.16061
David Powers
David M. W. Powers
Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation
27 pages, 7 figures. Updated and fixed egregious formatting errors (including a table overlapping text) that were introduced by the publisher. This open access journal appears to have been discontinued. arXiv admin note: text overlap with arXiv:1504.00854
International Journal of Machine Learning Technology 2:1 (2011), pp.37-63
null
null
cs.LG stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without a clear understanding of the biases, and corresponding identification of chance or base case levels of the statistic. Using these measures, a system that performs worse in the objective sense of Informedness can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that prediction is informed versus chance, which we term Informedness, and introduce Markedness as a dual measure for the probability that prediction is marked versus chance. Finally, we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance, as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case.
[ { "created": "Sun, 11 Oct 2020 02:15:11 GMT", "version": "v1" } ]
2020-11-02
[ [ "Powers", "David M. W.", "" ] ]
Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without a clear understanding of the biases, and corresponding identification of chance or base case levels of the statistic. Using these measures, a system that performs worse in the objective sense of Informedness can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that prediction is informed versus chance, which we term Informedness, and introduce Markedness as a dual measure for the probability that prediction is marked versus chance. Finally, we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance, as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case.
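In the binary case, the two chance-corrected measures in this abstract have closed forms: Informedness = Recall + Specificity - 1 (Youden's J) and Markedness = Precision + NPV - 1. A small helper assuming a standard confusion-matrix layout (the function name is ours):

```python
def informedness_markedness(tp, fp, fn, tn):
    """Binary-case Informedness and Markedness from confusion-matrix
    counts; both are 0 for a chance-level predictor and 1 for a
    perfect one."""
    recall = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)       # true negative rate
    precision = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    informedness = recall + specificity - 1
    markedness = precision + npv - 1
    return informedness, markedness


inf_val, mark_val = informedness_markedness(tp=40, fp=10, fn=20, tn=30)
# A predictor at exact chance level scores zero on both measures,
# which is the bias correction the abstract argues for.
chance_inf, chance_mark = informedness_markedness(25, 25, 25, 25)
```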
2308.11474
Xiaojie Sun
Xiaojie Sun, Keping Bi, Jiafeng Guo, Xinyu Ma, Fan Yixing, Hongyu Shan, Qishen Zhang, Zhongyi Liu
Pre-training with Aspect-Content Text Mutual Prediction for Multi-Aspect Dense Retrieval
accepted by cikm2023
null
null
null
cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Grounded on pre-trained language models (PLMs), dense retrieval has been studied extensively on plain text. In contrast, there has been little research on retrieving data with multiple aspects using dense models. In scenarios such as product search, the aspect information plays an essential role in relevance matching, e.g., category: Electronics, Computers, and Pet Supplies. A common way of leveraging aspect information for multi-aspect retrieval is to introduce an auxiliary classification objective, i.e., using item contents to predict the annotated value IDs of item aspects. However, by learning the value embeddings from scratch, this approach may not capture the various semantic similarities between the values sufficiently. To address this limitation, we leverage the aspect information as text strings rather than class IDs during pre-training so that their semantic similarities can be naturally captured in the PLMs. To facilitate effective retrieval with the aspect strings, we propose mutual prediction objectives between the text of the item aspect and content. In this way, our model makes fuller use of aspect information than conducting undifferentiated masked language modeling (MLM) on the concatenated text of aspects and content. Extensive experiments on two real-world datasets (product and mini-program search) show that our approach can outperform competitive baselines both treating aspect values as classes and conducting the same MLM for aspect and content strings. Code and related dataset will be available at the URL \footnote{https://github.com/sunxiaojie99/ATTEMPT}.
[ { "created": "Tue, 22 Aug 2023 14:42:27 GMT", "version": "v1" } ]
2023-08-23
[ [ "Sun", "Xiaojie", "" ], [ "Bi", "Keping", "" ], [ "Guo", "Jiafeng", "" ], [ "Ma", "Xinyu", "" ], [ "Yixing", "Fan", "" ], [ "Shan", "Hongyu", "" ], [ "Zhang", "Qishen", "" ], [ "Liu", "Zhongyi",...
Grounded on pre-trained language models (PLMs), dense retrieval has been studied extensively on plain text. In contrast, there has been little research on retrieving data with multiple aspects using dense models. In scenarios such as product search, the aspect information plays an essential role in relevance matching, e.g., category: Electronics, Computers, and Pet Supplies. A common way of leveraging aspect information for multi-aspect retrieval is to introduce an auxiliary classification objective, i.e., using item contents to predict the annotated value IDs of item aspects. However, by learning the value embeddings from scratch, this approach may not capture the various semantic similarities between the values sufficiently. To address this limitation, we leverage the aspect information as text strings rather than class IDs during pre-training so that their semantic similarities can be naturally captured in the PLMs. To facilitate effective retrieval with the aspect strings, we propose mutual prediction objectives between the text of the item aspect and content. In this way, our model makes fuller use of aspect information than conducting undifferentiated masked language modeling (MLM) on the concatenated text of aspects and content. Extensive experiments on two real-world datasets (product and mini-program search) show that our approach can outperform competitive baselines both treating aspect values as classes and conducting the same MLM for aspect and content strings. Code and related dataset will be available at the URL \footnote{https://github.com/sunxiaojie99/ATTEMPT}.
1402.0060
Xue Luo
Xue Luo, Stephen S.-T. Yau, Mingyi Zhang and Huaiqing Zuo
On Classification of Toric Surface Codes of Low Dimension
18 pages, 4 figures, 8 tables
Finite Fields Appl., Vol. 33, pp. 90-102, 2015
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work is a natural continuation of our previous work \cite{yz}. In this paper, we give a complete classification of toric surface codes of dimension less than or equal to 6, except a special pair, $C_{P_6^{(4)}}$ and $C_{P_6^{(5)}}$ over $\mathbb{F}_8$. Also, we give an example, $C_{P_6^{(5)}}$ and $C_{P_6^{(6)}}$ over $\mathbb{F}_7$, to illustrate that two monomially equivalent toric codes can be constructed from two lattice non-equivalent polygons.
[ { "created": "Sat, 1 Feb 2014 07:39:37 GMT", "version": "v1" }, { "created": "Sun, 14 Sep 2014 02:55:55 GMT", "version": "v2" } ]
2015-08-11
[ [ "Luo", "Xue", "" ], [ "Yau", "Stephen S. -T.", "" ], [ "Zhang", "Mingyi", "" ], [ "Zuo", "Huaiqing", "" ] ]
This work is a natural continuation of our previous work \cite{yz}. In this paper, we give a complete classification of toric surface codes of dimension less than or equal to 6, except a special pair, $C_{P_6^{(4)}}$ and $C_{P_6^{(5)}}$ over $\mathbb{F}_8$. Also, we give an example, $C_{P_6^{(5)}}$ and $C_{P_6^{(6)}}$ over $\mathbb{F}_7$, to illustrate that two monomially equivalent toric codes can be constructed from two lattice non-equivalent polygons.
2103.03741
Subhrajit Bhattacharya
Mohammad Saleh Teymouri and Subhrajit Bhattacharya
Landmark-based Distributed Topological Mapping and Navigation in GPS-denied Urban Environments Using Teams of Low-cost Robots
29 pages, 23 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a swarm of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard sensors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: 1) During the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment. To that end we develop a filtration-based approach for designing strategic placement of landmarks in an environment. 2) We develop a distributed algorithm using which a team of robots, with no a priori knowledge of the environment, can explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a Landmark Complex), instead of constructing a complete metric/pixel map. The representation is built by the robots as well as used by them for navigation through a balance between exploration and exploitation. We use tools from homology theory for identifying "holes" in the coverage/exploration of the unknown environment and hence guiding the robots towards achieving a complete exploration and mapping of the environment.
[ { "created": "Fri, 5 Mar 2021 15:13:39 GMT", "version": "v1" } ]
2021-03-08
[ [ "Teymouri", "Mohammad Saleh", "" ], [ "Bhattacharya", "Subhrajit", "" ] ]
In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a swarm of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard sensors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: 1) During the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment. To that end we develop a filtration-based approach for designing strategic placement of landmarks in an environment. 2) We develop a distributed algorithm using which a team of robots, with no a priori knowledge of the environment, can explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a Landmark Complex), instead of constructing a complete metric/pixel map. The representation is built by the robots as well as used by them for navigation through a balance between exploration and exploitation. We use tools from homology theory for identifying "holes" in the coverage/exploration of the unknown environment and hence guiding the robots towards achieving a complete exploration and mapping of the environment.
1904.04094
David Griffiths Mr
David Griffiths, Jan Boehm
Weighted Point Cloud Augmentation for Neural Network Training Data Class-Imbalance
7 pages, 6 figures, submitted for ISPRS Geospatial Week conference 2019
null
10.5194/isprs-archives-XLII-2-W13-981-2019
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent developments in the field of deep learning for 3D data have demonstrated promising potential for end-to-end learning directly from point clouds. However, many real-world point clouds contain a large class imbalance due to the natural class imbalance observed in nature. For example, a 3D scan of an urban environment will consist mostly of road and facade, whereas other objects such as poles will be under-represented. In this paper we address this issue by employing a weighted augmentation to increase classes that contain fewer points. By mitigating the class imbalance present in the data we demonstrate that a standard PointNet++ deep neural network can achieve higher performance at inference on validation data. This was observed as an increase in F1 score of 19% and 25% on two benchmark test datasets, ScanNet and Semantic3D respectively, where no class imbalance pre-processing had been performed. Our networks performed better on both highly-represented and under-represented classes, which indicates that the network is learning more robust and meaningful features when the loss function is not overly exposed to only a few classes.
[ { "created": "Mon, 8 Apr 2019 14:32:27 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2019 07:31:37 GMT", "version": "v2" } ]
2019-06-19
[ [ "Griffiths", "David", "" ], [ "Boehm", "Jan", "" ] ]
Recent developments in the field of deep learning for 3D data have demonstrated promising potential for end-to-end learning directly from point clouds. However, many real-world point clouds contain a large class imbalance due to the natural class imbalance observed in nature. For example, a 3D scan of an urban environment will consist mostly of road and facade, whereas other objects such as poles will be under-represented. In this paper we address this issue by employing a weighted augmentation to increase classes that contain fewer points. By mitigating the class imbalance present in the data we demonstrate that a standard PointNet++ deep neural network can achieve higher performance at inference on validation data. This was observed as an increase in F1 score of 19% and 25% on two benchmark test datasets, ScanNet and Semantic3D respectively, where no class imbalance pre-processing had been performed. Our networks performed better on both highly-represented and under-represented classes, which indicates that the network is learning more robust and meaningful features when the loss function is not overly exposed to only a few classes.
2311.15189
Nisansala Yatapanage
Nisansala P. Yatapanage and Cliff B. Jones
Using Rely/Guarantee to Pinpoint Assumptions underlying Security Protocols
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The verification of security protocols is essential in order to ensure the absence of potential attacks. However, verification results are only valid with respect to the assumptions under which the verification was performed. These assumptions are often hidden and are difficult to identify, making it unclear whether a given protocol is safe to deploy into a particular environment. Rely/guarantee provides a mechanism for abstractly reasoning about the interference from the environment. Using this approach, the assumptions are made clear and precise. This paper investigates this approach on the Needham-Schroeder Public Key protocol, showing that the technique can effectively uncover the assumptions under which the protocol can withstand attacks from intruders.
[ { "created": "Sun, 26 Nov 2023 04:43:09 GMT", "version": "v1" } ]
2023-11-28
[ [ "Yatapanage", "Nisansala P.", "" ], [ "Jones", "Cliff B.", "" ] ]
The verification of security protocols is essential in order to ensure the absence of potential attacks. However, verification results are only valid with respect to the assumptions under which the verification was performed. These assumptions are often hidden and are difficult to identify, making it unclear whether a given protocol is safe to deploy into a particular environment. Rely/guarantee provides a mechanism for abstractly reasoning about the interference from the environment. Using this approach, the assumptions are made clear and precise. This paper investigates this approach on the Needham-Schroeder Public Key protocol, showing that the technique can effectively uncover the assumptions under which the protocol can withstand attacks from intruders.
1805.08632
Xiang Chen
Xiang Chen
Towards Global Optimization in Display Advertising by Integrating Multimedia Metrics with Real-Time Bidding
In proceedings of ACM Multimedia'17 (Doctoral Symposium), Mountain View, CA, USA
null
10.1145/3123266.3123966
null
cs.GT cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time bidding (RTB) has become a new norm in display advertising, where a publisher uses auction models to sell an online user's page view to advertisers. In RTB, the ad with the highest bid price will be displayed to the user. This ad displaying process is biased towards the publisher. In fact, the benefits of the advertiser and the user have rarely been discussed. Towards global optimization, we argue that all stakeholders' benefits should be considered. To this end, we propose a novel computation framework where multimedia techniques and auction theory are integrated. This doctoral research mainly focuses on 1) figuring out the multimedia metrics that affect the effectiveness of online advertising; 2) integrating the discovered metrics into the RTB framework. We have presented some preliminary results and discussed future directions.
[ { "created": "Mon, 21 May 2018 04:02:19 GMT", "version": "v1" } ]
2018-05-23
[ [ "Chen", "Xiang", "" ] ]
Real-time bidding (RTB) has become a new norm in display advertising, where a publisher uses auction models to sell an online user's page view to advertisers. In RTB, the ad with the highest bid price will be displayed to the user. This ad displaying process is biased towards the publisher. In fact, the benefits of the advertiser and the user have rarely been discussed. Towards global optimization, we argue that all stakeholders' benefits should be considered. To this end, we propose a novel computation framework where multimedia techniques and auction theory are integrated. This doctoral research mainly focuses on 1) figuring out the multimedia metrics that affect the effectiveness of online advertising; 2) integrating the discovered metrics into the RTB framework. We have presented some preliminary results and discussed future directions.
2304.06021
Tong Zhang
Shiwei Zhang, Zhengzheng Wang, Qing Liu, Fei Wang, Wei Ke, Tong Zhang
Crowd Counting with Sparse Annotation
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new annotation method called Sparse Annotation (SA) for crowd counting, which reduces human labeling efforts by sparsely labeling individuals in an image. We argue that sparse labeling can reduce the redundancy of full annotation and capture more diverse information from distant individuals that is not fully captured by Partial Annotation methods. Besides, we propose a point-based Progressive Point Matching network (PPM) to better explore the crowd from the whole image with sparse annotation, which includes a Proposal Matching Network (PMN) and a Performance Restoration Network (PRN). The PMN generates pseudo-point samples using a basic point classifier, while the PRN refines the point classifier with the pseudo points to maximize performance. Our experimental results show that PPM outperforms previous semi-supervised crowd counting methods with the same amount of annotation by a large margin and achieves competitive performance with state-of-the-art fully-supervised methods.
[ { "created": "Wed, 12 Apr 2023 17:57:48 GMT", "version": "v1" } ]
2023-04-13
[ [ "Zhang", "Shiwei", "" ], [ "Wang", "Zhengzheng", "" ], [ "Liu", "Qing", "" ], [ "Wang", "Fei", "" ], [ "Ke", "Wei", "" ], [ "Zhang", "Tong", "" ] ]
This paper presents a new annotation method called Sparse Annotation (SA) for crowd counting, which reduces human labeling efforts by sparsely labeling individuals in an image. We argue that sparse labeling can reduce the redundancy of full annotation and capture more diverse information from distant individuals that is not fully captured by Partial Annotation methods. Besides, we propose a point-based Progressive Point Matching network (PPM) to better explore the crowd from the whole image with sparse annotation, which includes a Proposal Matching Network (PMN) and a Performance Restoration Network (PRN). The PMN generates pseudo-point samples using a basic point classifier, while the PRN refines the point classifier with the pseudo points to maximize performance. Our experimental results show that PPM outperforms previous semi-supervised crowd counting methods with the same amount of annotation by a large margin and achieves competitive performance with state-of-the-art fully-supervised methods.
2211.02274
Anne Kohlbrenner
Anne Kohlbrenner, Ben Kaiser, Kartikeya Kandula, Rebecca Weiss, Jonathan Mayer, Ted Han, Robert Helmer
Rally and WebScience: A Platform and Toolkit for Browser-Based Research on Technology and Society Problems
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empirical technology and society research is in a methodological crisis. Problems increasingly involve closed platforms, targeted content, and context-specific behavior. Prevailing research methods, such as surveys, tasks, and web crawls, pose design and ecological validity limitations. Deploying studies in participant browsers and devices is a promising direction. These vantage points can observe individualized experiences and implement UI interventions in real settings. We survey scholarship that uses these methods, annotating 284 sampled papers. Our analysis demonstrates their potential, but also recurring implementation barriers and shortcomings. We then present Rally and sdkName, a platform and toolkit for browser-based research. These systems lower implementation barriers and advance the science of measuring online behavior. Finally, we evaluate Rally and sdkName against our design goals. We report results from a one-month pilot study on news engagement, analyzing 4,466,200 webpage visits from 1,817 participants. We also present observations from interviews with researchers using these systems.
[ { "created": "Fri, 4 Nov 2022 06:05:06 GMT", "version": "v1" }, { "created": "Wed, 30 Nov 2022 17:20:29 GMT", "version": "v2" } ]
2022-12-01
[ [ "Kohlbrenner", "Anne", "" ], [ "Kaiser", "Ben", "" ], [ "Kandula", "Kartikeya", "" ], [ "Weiss", "Rebecca", "" ], [ "Mayer", "Jonathan", "" ], [ "Han", "Ted", "" ], [ "Helmer", "Robert", "" ] ]
Empirical technology and society research is in a methodological crisis. Problems increasingly involve closed platforms, targeted content, and context-specific behavior. Prevailing research methods, such as surveys, tasks, and web crawls, pose design and ecological validity limitations. Deploying studies in participant browsers and devices is a promising direction. These vantage points can observe individualized experiences and implement UI interventions in real settings. We survey scholarship that uses these methods, annotating 284 sampled papers. Our analysis demonstrates their potential, but also recurring implementation barriers and shortcomings. We then present Rally and sdkName, a platform and toolkit for browser-based research. These systems lower implementation barriers and advance the science of measuring online behavior. Finally, we evaluate Rally and sdkName against our design goals. We report results from a one-month pilot study on news engagement, analyzing 4,466,200 webpage visits from 1,817 participants. We also present observations from interviews with researchers using these systems.
2304.09957
Ekaterina Artemova
Ekaterina Artemova and Barbara Plank
Low-resource Bilingual Dialect Lexicon Induction with Large Language Models
Accepted to NoDaLiDa 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Bilingual word lexicons are crucial tools for multilingual natural language understanding and machine translation tasks, as they facilitate the mapping of words in one language to their synonyms in another language. To achieve this, numerous papers have explored bilingual lexicon induction (BLI) in high-resource scenarios, using a typical pipeline consisting of two unsupervised steps: bitext mining and word alignment, both of which rely on pre-trained large language models~(LLMs). In this paper, we present an analysis of the BLI pipeline for German and two of its dialects, Bavarian and Alemannic. This setup poses several unique challenges, including the scarcity of resources, the relatedness of the languages, and the lack of standardization in the orthography of dialects. To evaluate the BLI outputs, we analyze them with respect to word frequency and pairwise edit distance. Additionally, we release two evaluation datasets comprising 1,500 bilingual sentence pairs and 1,000 bilingual word pairs. They were manually judged for their semantic similarity for each Bavarian-German and Alemannic-German language pair.
[ { "created": "Wed, 19 Apr 2023 20:20:41 GMT", "version": "v1" } ]
2023-04-21
[ [ "Artemova", "Ekaterina", "" ], [ "Plank", "Barbara", "" ] ]
Bilingual word lexicons are crucial tools for multilingual natural language understanding and machine translation tasks, as they facilitate the mapping of words in one language to their synonyms in another language. To achieve this, numerous papers have explored bilingual lexicon induction (BLI) in high-resource scenarios, using a typical pipeline consisting of two unsupervised steps: bitext mining and word alignment, both of which rely on pre-trained large language models~(LLMs). In this paper, we present an analysis of the BLI pipeline for German and two of its dialects, Bavarian and Alemannic. This setup poses several unique challenges, including the scarcity of resources, the relatedness of the languages, and the lack of standardization in the orthography of dialects. To evaluate the BLI outputs, we analyze them with respect to word frequency and pairwise edit distance. Additionally, we release two evaluation datasets comprising 1,500 bilingual sentence pairs and 1,000 bilingual word pairs. They were manually judged for their semantic similarity for each Bavarian-German and Alemannic-German language pair.
1801.05870
Laurent Jacques
Chunlei Xu and Laurent Jacques
Quantized Compressive Sensing with RIP Matrices: The Benefit of Dithering
42 pages, 9 figures. Diff. btw V3 & V2: better paper structure, new concepts (e.g., RIP matrix distribution, connections with Bussgang's theorem), as well as many clarifications and corrections
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there actually exist incompatible combinations of quantization functions and sensing matrices that proscribe arbitrarily low reconstruction error when the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is "compatible" with a simple scalar and uniform quantization if a uniform random vector, or a random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays when the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly for all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices.
[ { "created": "Wed, 17 Jan 2018 21:52:13 GMT", "version": "v1" }, { "created": "Mon, 19 Feb 2018 18:21:03 GMT", "version": "v2" }, { "created": "Tue, 12 Feb 2019 16:05:08 GMT", "version": "v3" } ]
2019-02-13
[ [ "Xu", "Chunlei", "" ], [ "Jacques", "Laurent", "" ] ]
Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there actually exist incompatible combinations of quantization functions and sensing matrices that proscribe arbitrarily low reconstruction error when the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is "compatible" with a simple scalar and uniform quantization if a uniform random vector, or a random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays when the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly for all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices.
2002.08795
Prithviraj Ammanabrolu
Prithviraj Ammanabrolu, Ethan Tien, Zhaochen Luo, Mark O. Riedl
How To Avoid Being Eaten By a Grue: Exploration Strategies for Text-Adventure Agents
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-based games -- in which an agent interacts with the world through textual natural language -- present us with the problem of combinatorially-sized action-spaces. Most current reinforcement learning algorithms are not capable of effectively handling such a large number of possible actions per turn. Poor sample efficiency, consequently, results in agents that are unable to pass bottleneck states, where they are unable to proceed because they do not see the right action sequence to pass the bottleneck enough times to be sufficiently reinforced. Building on prior work using knowledge graphs in reinforcement learning, we introduce two new game state exploration strategies. We compare our exploration strategies against strong baselines on the classic text-adventure game, Zork1, where prior agents have been unable to get past a bottleneck where the agent is eaten by a Grue.
[ { "created": "Wed, 19 Feb 2020 17:18:20 GMT", "version": "v1" } ]
2020-02-21
[ [ "Ammanabrolu", "Prithviraj", "" ], [ "Tien", "Ethan", "" ], [ "Luo", "Zhaochen", "" ], [ "Riedl", "Mark O.", "" ] ]
Text-based games -- in which an agent interacts with the world through textual natural language -- present us with the problem of combinatorially-sized action-spaces. Most current reinforcement learning algorithms are not capable of effectively handling such a large number of possible actions per turn. Poor sample efficiency, consequently, results in agents that are unable to pass bottleneck states, where they are unable to proceed because they do not see the right action sequence to pass the bottleneck enough times to be sufficiently reinforced. Building on prior work using knowledge graphs in reinforcement learning, we introduce two new game state exploration strategies. We compare our exploration strategies against strong baselines on the classic text-adventure game, Zork1, where prior agents have been unable to get past a bottleneck where the agent is eaten by a Grue.
2306.05838
Pascal Welke
Pascal Welke, Maximilian Thiessen, Fabian Jogl, Thomas G\"artner
Expectation-Complete Graph Representations with Homomorphisms
accepted for publication at ICML 2023
null
null
null
cs.LG cs.DS
http://creativecommons.org/licenses/by-sa/4.0/
We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation. Previous graph embeddings have limited expressiveness and either cannot distinguish all graphs or cannot be computed efficiently for every graph. To be able to approximate arbitrary functions on graphs, we are interested in efficient alternatives that become arbitrarily expressive with increasing resources. Our approach is based on Lov\'asz' characterisation of graph isomorphism through an infinite dimensional vector of homomorphism counts. Our empirical evaluation shows competitive results on several benchmark graph learning tasks.
[ { "created": "Fri, 9 Jun 2023 12:12:07 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 06:59:44 GMT", "version": "v2" } ]
2023-08-25
[ [ "Welke", "Pascal", "" ], [ "Thiessen", "Maximilian", "" ], [ "Jogl", "Fabian", "" ], [ "Gärtner", "Thomas", "" ] ]
We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation. Previous graph embeddings have limited expressiveness and either cannot distinguish all graphs or cannot be computed efficiently for every graph. To be able to approximate arbitrary functions on graphs, we are interested in efficient alternatives that become arbitrarily expressive with increasing resources. Our approach is based on Lov\'asz' characterisation of graph isomorphism through an infinite dimensional vector of homomorphism counts. Our empirical evaluation shows competitive results on several benchmark graph learning tasks.
2304.06790
Tao Yu
Tao Yu, Runseng Feng, Ruoyu Feng, Jinming Liu, Xin Jin, Wenjun Zeng, Zhibo Chen
Inpaint Anything: Segment Anything Meets Image Inpainting
Technical report. Code URL: https://github.com/geekyutao/Inpaint-Anything
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern image inpainting systems, despite the significant progress, often struggle with mask selection and hole filling. Based on the Segment-Anything Model (SAM), we make the first attempt at mask-free image inpainting and propose a new paradigm of ``clicking and filling'', which is named Inpaint Anything (IA). The core idea behind IA is to combine the strengths of different models in order to build a very powerful and user-friendly pipeline for solving inpainting-related problems. IA supports three main features: (i) Remove Anything: users could click on an object and IA will remove it and smooth the ``hole'' with the context; (ii) Fill Anything: after certain objects are removed, users could provide text-based prompts to IA, and then it will fill the hole with the corresponding generative content by driving AIGC models like Stable Diffusion; (iii) Replace Anything: with IA, users have another option to retain the click-selected object and replace the remaining background with the newly generated scenes. We are also very willing to help everyone share and promote new projects based on our Inpaint Anything (IA). Our codes are available at https://github.com/geekyutao/Inpaint-Anything.
[ { "created": "Thu, 13 Apr 2023 19:23:52 GMT", "version": "v1" } ]
2023-04-17
[ [ "Yu", "Tao", "" ], [ "Feng", "Runseng", "" ], [ "Feng", "Ruoyu", "" ], [ "Liu", "Jinming", "" ], [ "Jin", "Xin", "" ], [ "Zeng", "Wenjun", "" ], [ "Chen", "Zhibo", "" ] ]
Modern image inpainting systems, despite the significant progress, often struggle with mask selection and hole filling. Based on the Segment-Anything Model (SAM), we make the first attempt at mask-free image inpainting and propose a new paradigm of ``clicking and filling'', which is named Inpaint Anything (IA). The core idea behind IA is to combine the strengths of different models in order to build a very powerful and user-friendly pipeline for solving inpainting-related problems. IA supports three main features: (i) Remove Anything: users could click on an object and IA will remove it and smooth the ``hole'' with the context; (ii) Fill Anything: after certain objects are removed, users could provide text-based prompts to IA, and then it will fill the hole with the corresponding generative content by driving AIGC models like Stable Diffusion; (iii) Replace Anything: with IA, users have another option to retain the click-selected object and replace the remaining background with the newly generated scenes. We are also very willing to help everyone share and promote new projects based on our Inpaint Anything (IA). Our codes are available at https://github.com/geekyutao/Inpaint-Anything.
2406.05784
Seemab Latif
Huma Ameer, Seemab Latif, Iram Tariq Bhatti, Rabia Latif
Optimizing Multi-Stuttered Speech Classification: Leveraging Whisper's Encoder for Efficient Parameter Reduction in Automated Assessment
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
The automated classification of stuttered speech has significant implications for timely assessments providing assistance to speech-language pathologists. Despite notable advancements in the field, the cases in which multiple disfluencies occur in speech require attention. We have taken a progressive approach to fill this gap by classifying multi-stuttered speech more efficiently. The problem has been addressed by firstly curating a dataset of multi-stuttered disfluencies from audio clips of the open-source SEP-28k dataset. Secondly, Whisper, a state-of-the-art speech recognition model, has been leveraged by using its encoder and treating the problem as multi-label classification. Thirdly, using a six-encoder-layer Whisper and experimenting with various layer freezing strategies, a computationally efficient configuration of the model was identified. The proposed configuration achieved micro, macro, and weighted F1-scores of 0.88, 0.85, and 0.87, respectively, on an external test dataset, i.e., Fluency-Bank. In addition, through layer freezing strategies, we were able to achieve the aforementioned results by fine-tuning a single encoder layer, consequently reducing the model's trainable parameters from 20.27 million to 3.29 million. This research study unveils the contribution of the last encoder layer in the identification of disfluencies in stuttered speech. Consequently, it has led to a computationally efficient approach, with 83.7% fewer parameters to train, making the proposed approach more adaptable to various dialects and languages.
[ { "created": "Sun, 9 Jun 2024 13:42:51 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 06:13:36 GMT", "version": "v2" }, { "created": "Sat, 20 Jul 2024 16:00:30 GMT", "version": "v3" } ]
2024-07-23
[ [ "Ameer", "Huma", "" ], [ "Latif", "Seemab", "" ], [ "Bhatti", "Iram Tariq", "" ], [ "Latif", "Rabia", "" ] ]
The automated classification of stuttered speech has significant implications for timely assessments that assist speech-language pathologists. Despite notable advancements in the field, cases in which multiple disfluencies occur in speech require attention. We have taken a progressive approach to fill this gap by classifying multi-stuttered speech more efficiently. The problem has been addressed by first curating a dataset of multi-stuttered disfluencies from audio clips of the open-source SEP-28k dataset. Second, Whisper, a state-of-the-art speech recognition model, has been leveraged by using its encoder and framing the problem as multi-label classification. Third, using a 6-encoder-layer Whisper model and experimenting with various layer-freezing strategies, a computationally efficient configuration of the model was identified. The proposed configuration achieved micro, macro, and weighted F1-scores of 0.88, 0.85, and 0.87, respectively, on an external test dataset, Fluency-Bank. In addition, through layer-freezing strategies, we were able to achieve the aforementioned results by fine-tuning a single encoder layer, consequently reducing the model's trainable parameters from 20.27 million to 3.29 million. This study reveals the contribution of the last encoder layer to the identification of disfluencies in stuttered speech. Consequently, it has led to a computationally efficient approach, with 83.7% fewer parameters to train, making the proposed approach more adaptable to various dialects and languages.
1402.0988
Sascha Kurz
Sascha Kurz
The inverse problem for power distributions in committees
46 pages, 2 tables
null
10.1007/s00355-015-0946-8
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several power indices have been introduced in the literature in order to measure the influence of individual committee members on the aggregated decision. Here we ask the inverse question and aim to design voting rules for a committee such that a given desired power distribution is met as closely as possible. We present an exact algorithm, based on integer linear programming, for a large class of different power indices. With respect to negative approximation results, we generalize the approach of Alon and Edelman, who studied power distributions for the Banzhaf index in which most of the power is concentrated on a few coordinates. It turns out that each Banzhaf vector of an n-member committee that is close to such a desired power distribution also has to be close to the Banzhaf vector of a k-member committee. We show that such Alon-Edelman type results are possible for other power indices, e.g. the Public Good index or the Coleman index to prevent actions, while they are principally impossible for, e.g., the Johnston index.
[ { "created": "Wed, 5 Feb 2014 09:33:10 GMT", "version": "v1" } ]
2016-01-22
[ [ "Kurz", "Sascha", "" ] ]
Several power indices have been introduced in the literature in order to measure the influence of individual committee members on the aggregated decision. Here we ask the inverse question and aim to design voting rules for a committee such that a given desired power distribution is met as closely as possible. We present an exact algorithm, based on integer linear programming, for a large class of different power indices. With respect to negative approximation results, we generalize the approach of Alon and Edelman, who studied power distributions for the Banzhaf index in which most of the power is concentrated on a few coordinates. It turns out that each Banzhaf vector of an n-member committee that is close to such a desired power distribution also has to be close to the Banzhaf vector of a k-member committee. We show that such Alon-Edelman type results are possible for other power indices, e.g. the Public Good index or the Coleman index to prevent actions, while they are principally impossible for, e.g., the Johnston index.
2403.06223
Animesh Chattopadhyay
Animesh Chattopadhyay and Subrat Kar
IDEAS: Information-Driven EV Admission in Charging Station Considering User Impatience to Improve QoS and Station Utilization
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.MA
http://creativecommons.org/licenses/by-sa/4.0/
Our work delves into user behaviour at Electric Vehicle (EV) charging stations during peak times, particularly focusing on how impatience drives balking (not joining queues) and reneging (leaving queues prematurely). We introduce an agent-based simulation framework that incorporates user optimism levels (pessimistic, standard, and optimistic) in the queue dynamics. Unlike previous work, this framework highlights the crucial role of human behaviour in shaping station efficiency under peak demand. The simulation reveals a key issue: balking often occurs due to a lack of queue insight, creating user dilemmas. To address this, we propose real-time sharing of wait-time metrics with arriving EV users at the station. This ensures better Quality of Service (QoS) through user-informed queue joining and demonstrates significant reductions in reneging (up to 94%), improving the charging operation. Further analysis shows that charging speed decreases significantly beyond 80%, but most users prioritize full charges due to range anxiety, leading to longer queues. To address this, we propose a two-mode, two-port charger design with power-sharing options. This allows users to fast-charge to 80% and automatically switch to slow charging, enabling fast charging on the second port and thus increasing fast-charger availability and throughput by up to 5%. As the mobility sector transitions towards intelligent traffic, our modelling framework, which integrates human decision-making within automated planning, provides valuable insights for optimizing charging station efficiency and improving the user experience. This approach is particularly relevant during the introduction phase of new stations, when historical data might be limited.
[ { "created": "Sun, 10 Mar 2024 14:07:46 GMT", "version": "v1" } ]
2024-03-12
[ [ "Chattopadhyay", "Animesh", "" ], [ "Kar", "Subrat", "" ] ]
Our work delves into user behaviour at Electric Vehicle (EV) charging stations during peak times, particularly focusing on how impatience drives balking (not joining queues) and reneging (leaving queues prematurely). We introduce an agent-based simulation framework that incorporates user optimism levels (pessimistic, standard, and optimistic) in the queue dynamics. Unlike previous work, this framework highlights the crucial role of human behaviour in shaping station efficiency under peak demand. The simulation reveals a key issue: balking often occurs due to a lack of queue insight, creating user dilemmas. To address this, we propose real-time sharing of wait-time metrics with arriving EV users at the station. This ensures better Quality of Service (QoS) through user-informed queue joining and demonstrates significant reductions in reneging (up to 94%), improving the charging operation. Further analysis shows that charging speed decreases significantly beyond 80%, but most users prioritize full charges due to range anxiety, leading to longer queues. To address this, we propose a two-mode, two-port charger design with power-sharing options. This allows users to fast-charge to 80% and automatically switch to slow charging, enabling fast charging on the second port and thus increasing fast-charger availability and throughput by up to 5%. As the mobility sector transitions towards intelligent traffic, our modelling framework, which integrates human decision-making within automated planning, provides valuable insights for optimizing charging station efficiency and improving the user experience. This approach is particularly relevant during the introduction phase of new stations, when historical data might be limited.
2104.05115
James Y. Huang
James Y. Huang, Kuan-Hao Huang, Kai-Wei Chang
Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models
NAACL 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained language models have achieved huge success on a wide range of NLP tasks. However, contextual representations from pre-trained models contain entangled semantic and syntactic information, and therefore cannot be directly used to derive useful semantic sentence embeddings for some tasks. Paraphrase pairs offer an effective way of learning the distinction between semantics and syntax, as they naturally share semantics and often vary in syntax. In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models. ParaBART is trained to perform syntax-guided paraphrasing, based on a source sentence that shares semantics with the target paraphrase, and a parse tree that specifies the target syntax. In this way, ParaBART learns disentangled semantic and syntactic representations from their respective inputs with separate encoders. Experiments in English show that ParaBART outperforms state-of-the-art sentence embedding models on unsupervised semantic similarity tasks. Additionally, we show that our approach can effectively remove syntactic information from semantic sentence embeddings, leading to better robustness against syntactic variation on downstream semantic tasks.
[ { "created": "Sun, 11 Apr 2021 21:34:46 GMT", "version": "v1" } ]
2021-04-13
[ [ "Huang", "James Y.", "" ], [ "Huang", "Kuan-Hao", "" ], [ "Chang", "Kai-Wei", "" ] ]
Pre-trained language models have achieved huge success on a wide range of NLP tasks. However, contextual representations from pre-trained models contain entangled semantic and syntactic information, and therefore cannot be directly used to derive useful semantic sentence embeddings for some tasks. Paraphrase pairs offer an effective way of learning the distinction between semantics and syntax, as they naturally share semantics and often vary in syntax. In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models. ParaBART is trained to perform syntax-guided paraphrasing, based on a source sentence that shares semantics with the target paraphrase, and a parse tree that specifies the target syntax. In this way, ParaBART learns disentangled semantic and syntactic representations from their respective inputs with separate encoders. Experiments in English show that ParaBART outperforms state-of-the-art sentence embedding models on unsupervised semantic similarity tasks. Additionally, we show that our approach can effectively remove syntactic information from semantic sentence embeddings, leading to better robustness against syntactic variation on downstream semantic tasks.
2009.05235
Jinghua Wang
Jinghua Wang, Adrian Hilton and Jianmin Jiang
Spectral Analysis Network for Deep Representation Learning and Image Clustering
null
ICME2019
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep representation learning is a crucial procedure in multimedia analysis and attracts increasing attention. Most of the popular techniques rely on convolutional neural networks and require a large amount of labeled data in the training procedure. However, it is time-consuming or even impossible to obtain label information for some tasks due to cost limitations. Thus, it is necessary to develop unsupervised deep representation learning techniques. This paper proposes a new network structure for unsupervised deep representation learning based on spectral analysis, a popular technique with solid theoretical foundations. Compared with existing spectral analysis methods, the proposed network structure has at least three advantages. First, it can identify local similarities among images at the patch level and is thus more robust against occlusion. Second, through multiple consecutive spectral analysis procedures, the proposed network can learn more clustering-friendly representations and is capable of revealing the deep correlations among data samples. Third, it can elegantly integrate different spectral analysis procedures, so that each procedure can exert its individual strengths in dealing with different data sample distributions. Extensive experimental results show the effectiveness of the proposed method on various image clustering tasks.
[ { "created": "Fri, 11 Sep 2020 05:07:15 GMT", "version": "v1" } ]
2020-09-14
[ [ "Wang", "Jinghua", "" ], [ "Hilton", "Adrian", "" ], [ "Jiang", "Jianmin", "" ] ]
Deep representation learning is a crucial procedure in multimedia analysis and attracts increasing attention. Most of the popular techniques rely on convolutional neural networks and require a large amount of labeled data in the training procedure. However, it is time-consuming or even impossible to obtain label information for some tasks due to cost limitations. Thus, it is necessary to develop unsupervised deep representation learning techniques. This paper proposes a new network structure for unsupervised deep representation learning based on spectral analysis, a popular technique with solid theoretical foundations. Compared with existing spectral analysis methods, the proposed network structure has at least three advantages. First, it can identify local similarities among images at the patch level and is thus more robust against occlusion. Second, through multiple consecutive spectral analysis procedures, the proposed network can learn more clustering-friendly representations and is capable of revealing the deep correlations among data samples. Third, it can elegantly integrate different spectral analysis procedures, so that each procedure can exert its individual strengths in dealing with different data sample distributions. Extensive experimental results show the effectiveness of the proposed method on various image clustering tasks.
1812.08313
Dan Guralnik
Dan P. Guralnik and Daniel E. Koditschek
Iterated Belief Revision Under Resource Constraints: Logic as Geometry
Preprint, 58 pages including appendices, 12 figures
null
null
null
cs.AI cs.DM cs.LG math.MG
http://creativecommons.org/publicdomain/zero/1.0/
We propose a variant of iterated belief revision designed for settings with limited computational resources, such as mobile autonomous robots. The proposed memory architecture---called the {\em universal memory architecture} (UMA)---maintains an epistemic state in the form of a system of default rules similar to those studied by Pearl and by Goldszmidt and Pearl (systems $Z$ and $Z^+$). A duality between the category of UMA representations and the category of the corresponding model spaces, extending the Sageev-Roller duality between discrete poc sets and discrete median algebras, provides a two-way dictionary from inference to geometry, leading to immense savings in computation, at a cost in the quality of representation that can be quantified in terms of topological invariants. Moreover, the same framework naturally enables comparisons between different model spaces, making it possible to analyze the deficiencies of one model space in comparison to others. This paper develops the formalism underlying UMA, analyzes the complexity of maintenance and inference operations in UMA, and presents some learning guarantees for different UMA-based learners. Finally, we present simulation results to illustrate the viability of the approach, and close with a discussion of the strengths, weaknesses, and potential development of UMA-based learners.
[ { "created": "Thu, 20 Dec 2018 01:58:04 GMT", "version": "v1" } ]
2018-12-21
[ [ "Guralnik", "Dan P.", "" ], [ "Koditschek", "Daniel E.", "" ] ]
We propose a variant of iterated belief revision designed for settings with limited computational resources, such as mobile autonomous robots. The proposed memory architecture---called the {\em universal memory architecture} (UMA)---maintains an epistemic state in the form of a system of default rules similar to those studied by Pearl and by Goldszmidt and Pearl (systems $Z$ and $Z^+$). A duality between the category of UMA representations and the category of the corresponding model spaces, extending the Sageev-Roller duality between discrete poc sets and discrete median algebras, provides a two-way dictionary from inference to geometry, leading to immense savings in computation, at a cost in the quality of representation that can be quantified in terms of topological invariants. Moreover, the same framework naturally enables comparisons between different model spaces, making it possible to analyze the deficiencies of one model space in comparison to others. This paper develops the formalism underlying UMA, analyzes the complexity of maintenance and inference operations in UMA, and presents some learning guarantees for different UMA-based learners. Finally, we present simulation results to illustrate the viability of the approach, and close with a discussion of the strengths, weaknesses, and potential development of UMA-based learners.
2210.04623
Chao Wu
Chao Wu and Cheng Ji and Geng Yuan and Riwei Pan and Weichao Guo and Chao Yu and Zongwei Zhu and Yanzhi Wang
DeltaFS: Pursuing Zero Update Overhead via Metadata-Enabled Delta Compression for Log-structured File System on Mobile Devices
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Data compression has been widely adopted to relieve mobile devices from intensive write pressure. Delta compression is particularly promising for its high compression efficacy over conventional compression methods. However, it suffers from non-trivial system overheads incurred by delta maintenance and read penalties, which prevents its applicability on mobile devices. To this end, this paper proposes DeltaFS, a metadata-enabled delta compression scheme on a log-structured file system for mobile devices, which achieves utmost compression efficiency at zero hardware cost. DeltaFS smartly exploits the out-of-place updating ability of the Log-structured File System (LFS) to alleviate write amplification, the key bottleneck for delta compression implementations. Specifically, DeltaFS utilizes the inline area in file inodes for delta maintenance at zero hardware cost, and integrates an inline-area management strategy to improve the utilization of the constrained inline area. Moreover, a complementary delta maintenance strategy is incorporated, which selectively maintains delta chunks in the main data area to break through the limitation of the constrained inline area. Experimental results show that DeltaFS substantially reduces write traffic by up to 64.8\%, and improves I/O performance by up to 37.3\%.
[ { "created": "Thu, 6 Oct 2022 17:19:03 GMT", "version": "v1" } ]
2022-10-11
[ [ "Wu", "Chao", "" ], [ "Ji", "Cheng", "" ], [ "Yuan", "Geng", "" ], [ "Pan", "Riwei", "" ], [ "Guo", "Weichao", "" ], [ "Yu", "Chao", "" ], [ "Zhu", "Zongwei", "" ], [ "Wang", "Yanzhi", "" ...
Data compression has been widely adopted to relieve mobile devices from intensive write pressure. Delta compression is particularly promising for its high compression efficacy over conventional compression methods. However, it suffers from non-trivial system overheads incurred by delta maintenance and read penalties, which prevents its applicability on mobile devices. To this end, this paper proposes DeltaFS, a metadata-enabled delta compression scheme on a log-structured file system for mobile devices, which achieves utmost compression efficiency at zero hardware cost. DeltaFS smartly exploits the out-of-place updating ability of the Log-structured File System (LFS) to alleviate write amplification, the key bottleneck for delta compression implementations. Specifically, DeltaFS utilizes the inline area in file inodes for delta maintenance at zero hardware cost, and integrates an inline-area management strategy to improve the utilization of the constrained inline area. Moreover, a complementary delta maintenance strategy is incorporated, which selectively maintains delta chunks in the main data area to break through the limitation of the constrained inline area. Experimental results show that DeltaFS substantially reduces write traffic by up to 64.8\%, and improves I/O performance by up to 37.3\%.
1512.08689
Marc Brockschmidt
Marc Brockschmidt, Byron Cook, Samin Ishtiaq, Heidy Khlaaf, Nir Piterman
T2: Temporal Property Verification
Full version of TACAS'16 tool paper
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the open-source tool T2, the first public release from the TERMINATOR project. T2 has been extended over the past decade to support automatic temporal-logic proving techniques and to handle a general class of user-provided liveness and safety properties. Input can be provided in a native format and in C, via the support of the LLVM compiler framework. We briefly discuss T2's architecture, its underlying techniques, and conclude with an experimental illustration of its competitiveness and directions for future extensions.
[ { "created": "Tue, 29 Dec 2015 14:20:30 GMT", "version": "v1" }, { "created": "Wed, 6 Jan 2016 13:33:40 GMT", "version": "v2" } ]
2016-01-07
[ [ "Brockschmidt", "Marc", "" ], [ "Cook", "Byron", "" ], [ "Ishtiaq", "Samin", "" ], [ "Khlaaf", "Heidy", "" ], [ "Piterman", "Nir", "" ] ]
We present the open-source tool T2, the first public release from the TERMINATOR project. T2 has been extended over the past decade to support automatic temporal-logic proving techniques and to handle a general class of user-provided liveness and safety properties. Input can be provided in a native format and in C, via the support of the LLVM compiler framework. We briefly discuss T2's architecture, its underlying techniques, and conclude with an experimental illustration of its competitiveness and directions for future extensions.
2007.03948
Neil Yorke-Smith
Kaan Yilmaz and Neil Yorke-Smith
A Study of Learning Search Approximation in Mixed Integer Branch and Bound: Node Selection in SCIP
Authors' version, not publisher's final version which is available at DOI
AI, volume 2, number 2, pages 150-178, 2021
10.3390/ai2020010
null
cs.NE math.OC
http://creativecommons.org/licenses/by/4.0/
In line with the growing trend of using machine learning to help solve combinatorial optimisation problems, one promising idea is to improve node selection within a mixed integer programming (MIP) branch-and-bound tree by using a learned policy. Previous work using imitation learning indicates the feasibility of acquiring a node selection policy, by learning an adaptive node searching order. In contrast, our imitation learning policy is focused solely on learning which of a node's children to select. We present an offline method to learn such a policy in two settings: one that comprises a heuristic by committing to pruning of nodes; one that is exact and backtracks from a leaf to guarantee finding the optimal integer solution. The former setting corresponds to a child selector during plunging, while the latter is akin to a diving heuristic. We apply the policy within the popular open-source solver SCIP, in both heuristic and exact settings. Empirical results on five MIP datasets indicate that our node selection policy leads to solutions significantly more quickly than the state-of-the-art precedent in the literature. While we do not beat the highly-optimised SCIP state-of-practice baseline node selector in terms of solving time on exact solutions, our heuristic policies have a consistently better optimality gap than all baselines, if the accuracy of the predictive model is sufficient. Further, the results also indicate that, when a time limit is applied, our heuristic method finds better solutions than all baselines in the majority of problems tested. We explain the results by showing that the learned policies have imitated the SCIP baseline, but without the latter's early plunge abort. Our recommendation is that, despite the clear improvements over the literature, this kind of MIP child selector is better seen in a broader approach using learning in MIP branch-and-bound tree decisions.
[ { "created": "Wed, 8 Jul 2020 08:12:44 GMT", "version": "v1" }, { "created": "Mon, 3 Jan 2022 21:04:30 GMT", "version": "v2" } ]
2022-01-05
[ [ "Yilmaz", "Kaan", "" ], [ "Yorke-Smith", "Neil", "" ] ]
In line with the growing trend of using machine learning to help solve combinatorial optimisation problems, one promising idea is to improve node selection within a mixed integer programming (MIP) branch-and-bound tree by using a learned policy. Previous work using imitation learning indicates the feasibility of acquiring a node selection policy, by learning an adaptive node searching order. In contrast, our imitation learning policy is focused solely on learning which of a node's children to select. We present an offline method to learn such a policy in two settings: one that comprises a heuristic by committing to pruning of nodes; one that is exact and backtracks from a leaf to guarantee finding the optimal integer solution. The former setting corresponds to a child selector during plunging, while the latter is akin to a diving heuristic. We apply the policy within the popular open-source solver SCIP, in both heuristic and exact settings. Empirical results on five MIP datasets indicate that our node selection policy leads to solutions significantly more quickly than the state-of-the-art precedent in the literature. While we do not beat the highly-optimised SCIP state-of-practice baseline node selector in terms of solving time on exact solutions, our heuristic policies have a consistently better optimality gap than all baselines, if the accuracy of the predictive model is sufficient. Further, the results also indicate that, when a time limit is applied, our heuristic method finds better solutions than all baselines in the majority of problems tested. We explain the results by showing that the learned policies have imitated the SCIP baseline, but without the latter's early plunge abort. Our recommendation is that, despite the clear improvements over the literature, this kind of MIP child selector is better seen in a broader approach using learning in MIP branch-and-bound tree decisions.
2404.18553
Gareth Davies
Gareth Davies
Evaluating the effectiveness of predicting covariates in LSTM Networks for Time Series Forecasting
9 content pages (22 total pages), 11 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Autoregressive Recurrent Neural Networks are widely employed in time-series forecasting tasks, demonstrating effectiveness in univariate and certain multivariate scenarios. However, their inherent structure does not readily accommodate the integration of future, time-dependent covariates. A proposed solution, outlined by Salinas et al. (2019), suggests forecasting both covariates and the target variable in a multivariate framework. In this study, we conducted comprehensive tests on publicly available time-series datasets, artificially introducing highly correlated covariates to future time-step values. Our evaluation aimed to assess the performance of an LSTM network when considering these covariates and compare it against a univariate baseline. As part of this study, we introduce a novel approach using seasonal time segments in combination with an RNN architecture, which is both simple and extremely effective over long forecast horizons, with performance comparable to many state-of-the-art architectures. Our findings from the results of more than 120 models reveal that, under certain conditions, jointly training covariates with target variables can improve the overall performance of the model, but there often exists a significant performance disparity between multivariate and univariate predictions. Surprisingly, even when provided with covariates informing the network about future target values, multivariate predictions exhibited inferior performance. In essence, compelling the network to predict multiple values can prove detrimental to model performance, even in the presence of informative covariates. These results suggest that LSTM architectures may not be suitable for forecasting tasks where predicting covariates would typically be expected to enhance model accuracy.
[ { "created": "Mon, 29 Apr 2024 09:51:25 GMT", "version": "v1" } ]
2024-04-30
[ [ "Davies", "Gareth", "" ] ]
Autoregressive Recurrent Neural Networks are widely employed in time-series forecasting tasks, demonstrating effectiveness in univariate and certain multivariate scenarios. However, their inherent structure does not readily accommodate the integration of future, time-dependent covariates. A proposed solution, outlined by Salinas et al. (2019), suggests forecasting both covariates and the target variable in a multivariate framework. In this study, we conducted comprehensive tests on publicly available time-series datasets, artificially introducing highly correlated covariates to future time-step values. Our evaluation aimed to assess the performance of an LSTM network when considering these covariates and compare it against a univariate baseline. As part of this study, we introduce a novel approach using seasonal time segments in combination with an RNN architecture, which is both simple and extremely effective over long forecast horizons, with performance comparable to many state-of-the-art architectures. Our findings from the results of more than 120 models reveal that, under certain conditions, jointly training covariates with target variables can improve the overall performance of the model, but there often exists a significant performance disparity between multivariate and univariate predictions. Surprisingly, even when provided with covariates informing the network about future target values, multivariate predictions exhibited inferior performance. In essence, compelling the network to predict multiple values can prove detrimental to model performance, even in the presence of informative covariates. These results suggest that LSTM architectures may not be suitable for forecasting tasks where predicting covariates would typically be expected to enhance model accuracy.
2102.07571
Alberto Mart\'in-Mart\'in
Alberto Mart\'in-Mart\'in, Emilio Delgado L\'opez-C\'ozar
Large coverage fluctuations in Google Scholar: a case study
null
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
Unlike other academic bibliographic databases, Google Scholar intentionally operates in a way that does not maintain coverage stability: documents that stop being available to Google Scholar's crawlers are removed from the system. This can also affect Google Scholar's citation graph (citation counts can decrease). Furthermore, because Google Scholar is not transparent about its coverage, the only way to directly observe coverage loss is through regular monitoring of Google Scholar data. Because of this, few studies have empirically documented this phenomenon. This study analyses a large decrease in coverage of documents in the field of Astronomy and Astrophysics that took place in 2019 and its subsequent recovery, using longitudinal data from previous analyses and a new dataset extracted in 2020. Documents from most of the larger publishers in the field disappeared from Google Scholar despite continuing to be available on the Web, which suggests an error on Google Scholar's side. Disappeared documents did not reappear until the following index-wide update, many months after the problem was discovered. The slowness with which Google Scholar is currently able to resolve indexing errors is a clear limitation of the platform both for literature search and bibliometric use cases.
[ { "created": "Mon, 15 Feb 2021 14:17:44 GMT", "version": "v1" } ]
2021-02-16
[ [ "Martín-Martín", "Alberto", "" ], [ "López-Cózar", "Emilio Delgado", "" ] ]
Unlike other academic bibliographic databases, Google Scholar intentionally operates in a way that does not maintain coverage stability: documents that stop being available to Google Scholar's crawlers are removed from the system. This can also affect Google Scholar's citation graph (citation counts can decrease). Furthermore, because Google Scholar is not transparent about its coverage, the only way to directly observe coverage loss is through regular monitoring of Google Scholar data. Because of this, few studies have empirically documented this phenomenon. This study analyses a large decrease in coverage of documents in the field of Astronomy and Astrophysics that took place in 2019 and its subsequent recovery, using longitudinal data from previous analyses and a new dataset extracted in 2020. Documents from most of the larger publishers in the field disappeared from Google Scholar despite continuing to be available on the Web, which suggests an error on Google Scholar's side. Disappeared documents did not reappear until the following index-wide update, many months after the problem was discovered. The slowness with which Google Scholar is currently able to resolve indexing errors is a clear limitation of the platform both for literature search and bibliometric use cases.
2010.07884
Grigorii Trofimiuk
Grigorii Trofimiuk and Peter Trifonov
Window Processing of Binary Polarization Kernels
Final version to appear in IEEE Transactions on Communications. The source code is available at https://github.com/gtrofimiuk/SCLKernelDecoder
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
A decoding algorithm for polar (sub)codes with binary $2^t\times 2^t$ polarization kernels is presented. It is based on the window processing (WP) method, which exploits the linear relationship of the polarization kernels and the Arikan matrix. This relationship enables one to compute the kernel input symbols probabilities by computing the probabilities of several paths in Arikan successive cancellation (SC) decoder. In this paper we propose an improved version of WP, which has significantly lower arithmetic complexity and operates in log-likelihood ratios (LLRs) domain. The algorithm identifies and reuses common subexpressions arising in computation of Arikan SC path scores. The proposed algorithm is applied to kernels of size 16 and 32 with improved polarization properties. It enables polar (sub)codes with the considered kernels to simultaneously provide better performance and lower decoding complexity compared with polar (sub)codes with Arikan kernel.
[ { "created": "Thu, 15 Oct 2020 17:04:45 GMT", "version": "v1" }, { "created": "Tue, 29 Dec 2020 23:05:21 GMT", "version": "v2" }, { "created": "Tue, 6 Apr 2021 15:12:29 GMT", "version": "v3" } ]
2021-04-07
[ [ "Trofimiuk", "Grigorii", "" ], [ "Trifonov", "Peter", "" ] ]
A decoding algorithm for polar (sub)codes with binary $2^t\times 2^t$ polarization kernels is presented. It is based on the window processing (WP) method, which exploits the linear relationship of the polarization kernels and the Arikan matrix. This relationship enables one to compute the kernel input symbols probabilities by computing the probabilities of several paths in Arikan successive cancellation (SC) decoder. In this paper we propose an improved version of WP, which has significantly lower arithmetic complexity and operates in log-likelihood ratios (LLRs) domain. The algorithm identifies and reuses common subexpressions arising in computation of Arikan SC path scores. The proposed algorithm is applied to kernels of size 16 and 32 with improved polarization properties. It enables polar (sub)codes with the considered kernels to simultaneously provide better performance and lower decoding complexity compared with polar (sub)codes with Arikan kernel.
2312.14091
Andranik Sargsyan
Hayk Manukyan, Andranik Sargsyan, Barsegh Atanyan, Zhangyang Wang, Shant Navasardyan, Humphrey Shi
HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results. However, there is still significant potential for improvement in current text-to-image inpainting models, particularly in better aligning the inpainted area with user prompts and performing high-resolution inpainting. Therefore, we introduce HD-Painter, a training free approach that accurately follows prompts and coherently scales to high resolution image inpainting. To this end, we design the Prompt-Aware Introverted Attention (PAIntA) layer enhancing self-attention scores by prompt information resulting in better text aligned generations. To further improve the prompt coherence we introduce the Reweighting Attention Score Guidance (RASG) mechanism seamlessly integrating a post-hoc sampling strategy into the general form of DDIM to prevent out-of-distribution latent shifts. Moreover, HD-Painter allows extension to larger scales by introducing a specialized super-resolution technique customized for inpainting, enabling the completion of missing regions in images of up to 2K resolution. Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches quantitatively and qualitatively across multiple metrics and a user study. Code is publicly available at: https://github.com/Picsart-AI-Research/HD-Painter
[ { "created": "Thu, 21 Dec 2023 18:09:30 GMT", "version": "v1" }, { "created": "Mon, 25 Dec 2023 20:04:02 GMT", "version": "v2" }, { "created": "Mon, 18 Mar 2024 16:48:13 GMT", "version": "v3" } ]
2024-03-19
[ [ "Manukyan", "Hayk", "" ], [ "Sargsyan", "Andranik", "" ], [ "Atanyan", "Barsegh", "" ], [ "Wang", "Zhangyang", "" ], [ "Navasardyan", "Shant", "" ], [ "Shi", "Humphrey", "" ] ]
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results. However, there is still significant potential for improvement in current text-to-image inpainting models, particularly in better aligning the inpainted area with user prompts and performing high-resolution inpainting. Therefore, we introduce HD-Painter, a training free approach that accurately follows prompts and coherently scales to high resolution image inpainting. To this end, we design the Prompt-Aware Introverted Attention (PAIntA) layer enhancing self-attention scores by prompt information resulting in better text aligned generations. To further improve the prompt coherence we introduce the Reweighting Attention Score Guidance (RASG) mechanism seamlessly integrating a post-hoc sampling strategy into the general form of DDIM to prevent out-of-distribution latent shifts. Moreover, HD-Painter allows extension to larger scales by introducing a specialized super-resolution technique customized for inpainting, enabling the completion of missing regions in images of up to 2K resolution. Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches quantitatively and qualitatively across multiple metrics and a user study. Code is publicly available at: https://github.com/Picsart-AI-Research/HD-Painter
1512.00519
Bryan Knowles
Bryan A. Knowles and Mustafa Atici
Proposed Approximate Dynamic Programming for Pathfinding under Visible Uncertainty
6 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuing our preliminary work \cite{knowles14}, we define the safest-with-sight pathfinding problem and explore its solution using techniques borrowed from measure-theoretic probability theory. We find a simple recursive definition for the probability that an ideal pathfinder will select an edge in a given scenario of an uncertain network where edges have probabilities of failure and vertices provide "vision" of edges via lines-of-sight. We propose an approximate solution based on our theoretical findings that would borrow techniques from approximate dynamic programming.
[ { "created": "Mon, 30 Nov 2015 17:29:43 GMT", "version": "v1" } ]
2015-12-03
[ [ "Knowles", "Bryan A.", "" ], [ "Atici", "Mustafa", "" ] ]
Continuing our preliminary work \cite{knowles14}, we define the safest-with-sight pathfinding problem and explore its solution using techniques borrowed from measure-theoretic probability theory. We find a simple recursive definition for the probability that an ideal pathfinder will select an edge in a given scenario of an uncertain network where edges have probabilities of failure and vertices provide "vision" of edges via lines-of-sight. We propose an approximate solution based on our theoretical findings that would borrow techniques from approximate dynamic programming.
1803.04035
Richard Nock
Richard Nock and Stephen Hardy and Wilko Henecka and Hamish Ivey-Law and Giorgio Patrini and Guillaume Smith and Brian Thorne
Entity Resolution and Federated Learning get a Federated Resolution
arXiv admin note: text overlap with arXiv:1711.10677
null
null
null
cs.DB cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, just like learning, is mistake-prone in the real world. Despite the importance of the problem, there has been no formal assessment of how errors in entity resolution impact learning. In this paper, we provide a thorough answer to this question, showing how optimal classifiers, empirical losses, margins and generalisation abilities are affected. While our answer spans a wide set of losses --- going beyond proper, convex, or classification calibrated ---, it brings simple practical arguments to upgrade entity resolution as a preprocessing step to learning. One of these suggests that entity resolution should be aimed at controlling or minimizing the number of matching errors between examples of distinct classes. In our experiments, we modify a simple token-based entity resolution algorithm so that it indeed aims at avoiding matching rows belonging to different classes, and perform experiments in the setting where entity resolution relies on noisy data, which is very relevant to real world domains. Notably, our approach covers the case where one peer \textit{does not} have classes, or a noisy record of classes. Experiments display that using the class information during entity resolution can buy significant uplift for learning at little expense from the complexity standpoint.
[ { "created": "Sun, 11 Mar 2018 20:53:18 GMT", "version": "v1" }, { "created": "Tue, 20 Mar 2018 21:46:12 GMT", "version": "v2" } ]
2018-03-22
[ [ "Nock", "Richard", "" ], [ "Hardy", "Stephen", "" ], [ "Henecka", "Wilko", "" ], [ "Ivey-Law", "Hamish", "" ], [ "Patrini", "Giorgio", "" ], [ "Smith", "Guillaume", "" ], [ "Thorne", "Brian", "" ] ]
Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, just like learning, is mistake-prone in the real world. Despite the importance of the problem, there has been no formal assessment of how errors in entity resolution impact learning. In this paper, we provide a thorough answer to this question, showing how optimal classifiers, empirical losses, margins and generalisation abilities are affected. While our answer spans a wide set of losses --- going beyond proper, convex, or classification calibrated ---, it brings simple practical arguments to upgrade entity resolution as a preprocessing step to learning. One of these suggests that entity resolution should be aimed at controlling or minimizing the number of matching errors between examples of distinct classes. In our experiments, we modify a simple token-based entity resolution algorithm so that it indeed aims at avoiding matching rows belonging to different classes, and perform experiments in the setting where entity resolution relies on noisy data, which is very relevant to real world domains. Notably, our approach covers the case where one peer \textit{does not} have classes, or a noisy record of classes. Experiments display that using the class information during entity resolution can buy significant uplift for learning at little expense from the complexity standpoint.
1812.04840
Amir Aly
Amir Aly and Tadahiro Taniguchi
Towards Understanding Language through Perception in Situated Human-Robot Interaction: From Word Grounding to Grammar Induction
Proceedings of the International Conference on Social Cognition in Humans and Robots (socSMCs), Germany, 2018
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots are widely collaborating with human users in different tasks that require high-level cognitive functions to make them able to discover the surrounding environment. A difficult challenge that we briefly highlight in this short paper is inferring the latent grammatical structure of language, which includes grounding parts of speech (e.g., verbs, nouns, adjectives, and prepositions) through visual perception, and induction of Combinatory Categorial Grammar (CCG) for phrases. This paves the way towards grounding phrases so as to make a robot able to understand human instructions appropriately during interaction.
[ { "created": "Wed, 12 Dec 2018 08:06:30 GMT", "version": "v1" }, { "created": "Thu, 6 Feb 2020 14:58:27 GMT", "version": "v2" }, { "created": "Fri, 13 Mar 2020 08:22:51 GMT", "version": "v3" } ]
2020-03-16
[ [ "Aly", "Amir", "" ], [ "Taniguchi", "Tadahiro", "" ] ]
Robots are widely collaborating with human users in different tasks that require high-level cognitive functions to make them able to discover the surrounding environment. A difficult challenge that we briefly highlight in this short paper is inferring the latent grammatical structure of language, which includes grounding parts of speech (e.g., verbs, nouns, adjectives, and prepositions) through visual perception, and induction of Combinatory Categorial Grammar (CCG) for phrases. This paves the way towards grounding phrases so as to make a robot able to understand human instructions appropriately during interaction.
2406.11757
Laura Weidinger
Laura Weidinger, John Mellor, Bernat Guillen Pegueroles, Nahema Marchal, Ravin Kumar, Kristian Lum, Canfer Akbulut, Mark Diaz, Stevie Bergman, Mikel Rodriguez, Verena Rieser, William Isaac
STAR: SocioTechnical Approach to Red Teaming Language Models
8 pages, 5 figures, 5 pages appendix. * denotes equal contribution
null
null
null
cs.AI cs.CL cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming safety of large language models. STAR makes two key contributions: first, it enhances steerability by generating parameterised instructions for human red teamers, leading to improved coverage of the risk surface. Parameterised instructions also provide more detailed insights into model failures at no increased cost. Second, STAR improves signal quality by matching demographics to assess harms for specific groups, resulting in more sensitive annotations. STAR further employs a novel step of arbitration to leverage diverse viewpoints and improve label reliability, treating disagreement not as noise but as a valuable contribution to signal quality.
[ { "created": "Mon, 17 Jun 2024 17:16:45 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2024 13:53:11 GMT", "version": "v2" }, { "created": "Tue, 6 Aug 2024 09:17:59 GMT", "version": "v3" } ]
2024-08-07
[ [ "Weidinger", "Laura", "" ], [ "Mellor", "John", "" ], [ "Pegueroles", "Bernat Guillen", "" ], [ "Marchal", "Nahema", "" ], [ "Kumar", "Ravin", "" ], [ "Lum", "Kristian", "" ], [ "Akbulut", "Canfer", "" ],...
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming safety of large language models. STAR makes two key contributions: first, it enhances steerability by generating parameterised instructions for human red teamers, leading to improved coverage of the risk surface. Parameterised instructions also provide more detailed insights into model failures at no increased cost. Second, STAR improves signal quality by matching demographics to assess harms for specific groups, resulting in more sensitive annotations. STAR further employs a novel step of arbitration to leverage diverse viewpoints and improve label reliability, treating disagreement not as noise but as a valuable contribution to signal quality.
1908.05033
Ruihao Gong
Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, Junjie Yan
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
IEEE ICCV 2019
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hardware-friendly network quantization (e.g., binary/uniform quantization) can efficiently accelerate the inference and meanwhile reduce memory consumption of the deep neural networks, which is crucial for model deployment on resource-limited devices like mobile phones. However, due to the discreteness of low-bit quantization, existing quantization methods often face the unstable training process and severe performance degradation. To address this problem, in this paper we propose Differentiable Soft Quantization (DSQ) to bridge the gap between the full-precision and low-bit networks. DSQ can automatically evolve during training to gradually approximate the standard quantization. Owing to its differentiable property, DSQ can help pursue the accurate gradients in backward propagation, and reduce the quantization loss in forward process with an appropriate clipping range. Extensive experiments over several popular network structures show that training low-bit neural networks with DSQ can consistently outperform state-of-the-art quantization methods. Besides, our first efficient implementation for deploying 2 to 4-bit DSQ on devices with ARM architecture achieves up to 1.7$\times$ speed up, compared with the open-source 8-bit high-performance inference framework NCNN. [31]
[ { "created": "Wed, 14 Aug 2019 09:22:41 GMT", "version": "v1" } ]
2019-08-15
[ [ "Gong", "Ruihao", "" ], [ "Liu", "Xianglong", "" ], [ "Jiang", "Shenghu", "" ], [ "Li", "Tianxiang", "" ], [ "Hu", "Peng", "" ], [ "Lin", "Jiazhen", "" ], [ "Yu", "Fengwei", "" ], [ "Yan", "Junj...
Hardware-friendly network quantization (e.g., binary/uniform quantization) can efficiently accelerate the inference and meanwhile reduce memory consumption of the deep neural networks, which is crucial for model deployment on resource-limited devices like mobile phones. However, due to the discreteness of low-bit quantization, existing quantization methods often face the unstable training process and severe performance degradation. To address this problem, in this paper we propose Differentiable Soft Quantization (DSQ) to bridge the gap between the full-precision and low-bit networks. DSQ can automatically evolve during training to gradually approximate the standard quantization. Owing to its differentiable property, DSQ can help pursue the accurate gradients in backward propagation, and reduce the quantization loss in forward process with an appropriate clipping range. Extensive experiments over several popular network structures show that training low-bit neural networks with DSQ can consistently outperform state-of-the-art quantization methods. Besides, our first efficient implementation for deploying 2 to 4-bit DSQ on devices with ARM architecture achieves up to 1.7$\times$ speed up, compared with the open-source 8-bit high-performance inference framework NCNN. [31]
1512.08133
Akshay Balsubramani
Akshay Balsubramani
The Utility of Abstaining in Binary Classification
Short survey
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the problem of binary classification in machine learning, with a twist - the classifier is allowed to abstain on any datum, professing ignorance about the true class label without committing to any prediction. This is directly motivated by applications like medical diagnosis and fraud risk assessment, in which incorrect predictions have potentially calamitous consequences. We focus on a recent spate of theoretically driven work in this area that characterizes how allowing abstentions can lead to fewer errors in very general settings. Two areas are highlighted: the surprising possibility of zero-error learning, and the fundamental tradeoff between predicting sufficiently often and avoiding incorrect predictions. We review efficient algorithms with provable guarantees for each of these areas. We also discuss connections to other scenarios, notably active learning, as they suggest promising directions of further inquiry in this emerging field.
[ { "created": "Sat, 26 Dec 2015 19:02:00 GMT", "version": "v1" } ]
2015-12-29
[ [ "Balsubramani", "Akshay", "" ] ]
We explore the problem of binary classification in machine learning, with a twist - the classifier is allowed to abstain on any datum, professing ignorance about the true class label without committing to any prediction. This is directly motivated by applications like medical diagnosis and fraud risk assessment, in which incorrect predictions have potentially calamitous consequences. We focus on a recent spate of theoretically driven work in this area that characterizes how allowing abstentions can lead to fewer errors in very general settings. Two areas are highlighted: the surprising possibility of zero-error learning, and the fundamental tradeoff between predicting sufficiently often and avoiding incorrect predictions. We review efficient algorithms with provable guarantees for each of these areas. We also discuss connections to other scenarios, notably active learning, as they suggest promising directions of further inquiry in this emerging field.
1107.2059
Gloria Serrano Sotelo
J.A. Dom\'inguez P\'erez, J.M. Mu\~noz Porras and G. Serrano Sotelo
One dimensional Convolutional Goppa Codes over the projective line
null
null
null
null
cs.IT math.AG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.
[ { "created": "Mon, 11 Jul 2011 15:36:34 GMT", "version": "v1" } ]
2011-07-12
[ [ "Pérez", "J. A. Domínguez", "" ], [ "Porras", "J. M. Muñoz", "" ], [ "Sotelo", "G. Serrano", "" ] ]
We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.
2309.07998
Sarah Finch
Sarah E. Finch, James D. Finch, Jinho D. Choi
Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human evaluation has been widely accepted as the standard for evaluating chat-oriented dialogue systems. However, there is a significant variation in previous work regarding who gets recruited as evaluators. Evaluator groups such as domain experts, university students, and professional annotators have been used to assess and compare dialogue systems, although it is unclear to what extent the choice of an evaluator group can affect results. This paper analyzes the evaluator group impact on dialogue system evaluation by testing 4 state-of-the-art dialogue systems using 4 distinct evaluator groups. Our analysis reveals a robustness towards evaluator groups for Likert evaluations that is not seen for Pairwise, with only minor differences observed when changing evaluator groups. Furthermore, two notable limitations to this robustness are observed, which reveal discrepancies between evaluators with different levels of chatbot expertise and indicate that evaluator objectivity is beneficial for certain dialogue metrics.
[ { "created": "Thu, 14 Sep 2023 19:19:50 GMT", "version": "v1" } ]
2023-09-18
[ [ "Finch", "Sarah E.", "" ], [ "Finch", "James D.", "" ], [ "Choi", "Jinho D.", "" ] ]
Human evaluation has been widely accepted as the standard for evaluating chat-oriented dialogue systems. However, there is a significant variation in previous work regarding who gets recruited as evaluators. Evaluator groups such as domain experts, university students, and professional annotators have been used to assess and compare dialogue systems, although it is unclear to what extent the choice of an evaluator group can affect results. This paper analyzes the evaluator group impact on dialogue system evaluation by testing 4 state-of-the-art dialogue systems using 4 distinct evaluator groups. Our analysis reveals a robustness towards evaluator groups for Likert evaluations that is not seen for Pairwise, with only minor differences observed when changing evaluator groups. Furthermore, two notable limitations to this robustness are observed, which reveal discrepancies between evaluators with different levels of chatbot expertise and indicate that evaluator objectivity is beneficial for certain dialogue metrics.
2009.10680
Wei Zhu
Wei Zhu, Xipeng Qiu, Yuan Ni and Guotong Xie
AutoRC: Improving BERT Based Relation Classification Models via Architecture Search
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although BERT based relation classification (RC) models have achieved significant improvements over the traditional deep learning models, it seems that no consensus can be reached on what is the optimal architecture. First, there are multiple alternatives for entity span identification. Second, there are a collection of pooling operations to aggregate the representations of entities and contexts into fixed length vectors. Third, it is difficult to manually decide which feature vectors, including their interactions, are beneficial for classifying the relation types. In this work, we design a comprehensive search space for BERT based RC models and employ neural architecture search (NAS) method to automatically discover the design choices mentioned above. Experiments on seven benchmark RC tasks show that our method is efficient and effective in finding better architectures than the baseline BERT based RC model. Ablation study demonstrates the necessity of our search space design and the effectiveness of our search method.
[ { "created": "Tue, 22 Sep 2020 16:55:49 GMT", "version": "v1" }, { "created": "Sun, 27 Sep 2020 02:37:03 GMT", "version": "v2" } ]
2020-09-29
[ [ "Zhu", "Wei", "" ], [ "Qiu", "Xipeng", "" ], [ "Ni", "Yuan", "" ], [ "Xie", "Guotong", "" ] ]
Although BERT based relation classification (RC) models have achieved significant improvements over the traditional deep learning models, it seems that no consensus can be reached on what is the optimal architecture. First, there are multiple alternatives for entity span identification. Second, there are a collection of pooling operations to aggregate the representations of entities and contexts into fixed length vectors. Third, it is difficult to manually decide which feature vectors, including their interactions, are beneficial for classifying the relation types. In this work, we design a comprehensive search space for BERT based RC models and employ neural architecture search (NAS) method to automatically discover the design choices mentioned above. Experiments on seven benchmark RC tasks show that our method is efficient and effective in finding better architectures than the baseline BERT based RC model. Ablation study demonstrates the necessity of our search space design and the effectiveness of our search method.
1805.11351
Qiuchi Li
Qiuchi Li, Sagar Uprety, Benyou Wang, Dawei Song
Quantum-inspired Complex Word Embedding
This paper has been accepted by the 3rd Workshop on Representation Learning for NLP (RepL4NLP)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing approaches in word embeddings will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but fail to capture the fact that they occur in an opposite sense - Penguins do not fly. We hypothesize that humans do not associate a single polarity or sentiment to each word. The word contributes to the overall polarity of a combination of words depending upon which other words it is combined with. This is analogous to the behavior of microscopic particles which exist in all possible states at the same time and interfere with each other to give rise to new states depending upon their relative phases. We make use of the Hilbert Space representation of such particles in Quantum Mechanics where we ascribe a relative phase to each word, which is a complex number, and investigate two such quantum inspired models to derive the meaning of a combination of words. The proposed models achieve better performances than state-of-the-art non-quantum models on the binary sentence classification task.
[ { "created": "Tue, 29 May 2018 10:46:30 GMT", "version": "v1" } ]
2018-05-30
[ [ "Li", "Qiuchi", "" ], [ "Uprety", "Sagar", "" ], [ "Wang", "Benyou", "" ], [ "Song", "Dawei", "" ] ]
A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing approaches in word embeddings will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but fail to capture the fact that they occur in an opposite sense - Penguins do not fly. We hypothesize that humans do not associate a single polarity or sentiment to each word. The word contributes to the overall polarity of a combination of words depending upon which other words it is combined with. This is analogous to the behavior of microscopic particles which exist in all possible states at the same time and interfere with each other to give rise to new states depending upon their relative phases. We make use of the Hilbert Space representation of such particles in Quantum Mechanics where we ascribe a relative phase to each word, which is a complex number, and investigate two such quantum inspired models to derive the meaning of a combination of words. The proposed models achieve better performances than state-of-the-art non-quantum models on the binary sentence classification task.
2208.13064
Mayukh Bagchi
Mayukh Bagchi
A Diversity-Aware Domain Development Methodology
41st International Conference on Conceptual Modeling (ER 2022), Online (Virtual)
null
null
null
cs.AI cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
The development of domain ontological models, though being a mature research arena backed by well-established methodologies, still suffers from two key shortcomings. Firstly, there are issues concerning the semantic persistency of ontology concepts and their flexible reuse in domain development when employing existing approaches. Secondly, due to the difficulty of understanding and reusing top-level concepts in existing foundational ontologies, the semantic nature of domain representations becomes obfuscated. The paper grounds the aforementioned shortcomings in representation diversity and proposes a three-fold solution - (i) a pipeline for rendering concepts reuse-ready, (ii) a first characterization of a minimalistic foundational knowledge model, named foundational teleology, semantically explicating foundational distinctions enforcing the static as well as dynamic nature of domain representations, and (iii) a flexible, reuse-native methodology for diversity-aware domain development exploiting solutions (i) and (ii). The preliminary work reported validates the potentiality of the solution components.
[ { "created": "Sat, 27 Aug 2022 17:58:47 GMT", "version": "v1" } ]
2022-08-30
[ [ "Bagchi", "Mayukh", "" ] ]
The development of domain ontological models, though a mature research arena backed by well-established methodologies, still suffers from two key shortcomings. Firstly, existing approaches fall short concerning the semantic persistency of ontology concepts and their flexible reuse in domain development. Secondly, the difficulty in understanding and reusing top-level concepts in existing foundational ontologies obfuscates the semantic nature of domain representations. The paper grounds the aforementioned shortcomings in representation diversity and proposes a three-fold solution - (i) a pipeline for rendering concepts reuse-ready, (ii) a first characterization of a minimalistic foundational knowledge model, named foundational teleology, semantically explicating foundational distinctions enforcing the static as well as dynamic nature of domain representations, and (iii) a flexible, reuse-native methodology for diversity-aware domain development exploiting solutions (i) and (ii). The preliminary work reported validates the potentiality of the solution components.
2406.09489
Anh Nguyen
An Dinh Vuong, Minh Nhat Vu, Baoru Huang, Nghia Nguyen, Hieu Le, Thieu Vo, Anh Nguyen
Language-driven Grasp Detection
19 pages. Accepted to CVPR24
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Grasp detection is a persistent and intricate challenge with various industrial applications. Recently, many methods and datasets have been proposed to tackle the grasp detection problem. However, most of them do not consider using natural language as a condition to detect the grasp poses. In this paper, we introduce Grasp-Anything++, a new language-driven grasp detection dataset featuring 1M samples, over 3M objects, and upwards of 10M grasping instructions. We utilize foundation models to create a large-scale scene corpus with corresponding images and grasp prompts. We approach the language-driven grasp detection task as a conditional generation problem. Drawing on the success of diffusion models in generative tasks and given that language plays a vital role in this task, we propose a new language-driven grasp detection method based on diffusion models. Our key contribution is the contrastive training objective, which explicitly contributes to the denoising process to detect the grasp pose given the language instructions. We illustrate that our approach is theoretically supported. The intensive experiments show that our method outperforms state-of-the-art approaches and allows real-world robotic grasping. Finally, we demonstrate our large-scale dataset enables zero-shot grasp detection and is a challenging benchmark for future work. Project website: https://airvlab.github.io/grasp-anything/
[ { "created": "Thu, 13 Jun 2024 16:06:59 GMT", "version": "v1" } ]
2024-06-17
[ [ "Vuong", "An Dinh", "" ], [ "Vu", "Minh Nhat", "" ], [ "Huang", "Baoru", "" ], [ "Nguyen", "Nghia", "" ], [ "Le", "Hieu", "" ], [ "Vo", "Thieu", "" ], [ "Nguyen", "Anh", "" ] ]
Grasp detection is a persistent and intricate challenge with various industrial applications. Recently, many methods and datasets have been proposed to tackle the grasp detection problem. However, most of them do not consider using natural language as a condition to detect the grasp poses. In this paper, we introduce Grasp-Anything++, a new language-driven grasp detection dataset featuring 1M samples, over 3M objects, and upwards of 10M grasping instructions. We utilize foundation models to create a large-scale scene corpus with corresponding images and grasp prompts. We approach the language-driven grasp detection task as a conditional generation problem. Drawing on the success of diffusion models in generative tasks and given that language plays a vital role in this task, we propose a new language-driven grasp detection method based on diffusion models. Our key contribution is the contrastive training objective, which explicitly contributes to the denoising process to detect the grasp pose given the language instructions. We illustrate that our approach is theoretically supported. The intensive experiments show that our method outperforms state-of-the-art approaches and allows real-world robotic grasping. Finally, we demonstrate our large-scale dataset enables zero-shot grasp detection and is a challenging benchmark for future work. Project website: https://airvlab.github.io/grasp-anything/
2401.13722
Mohammad Asif
Mohammad Asif, Sudhakar Mishra, Ankush Sonker, Sanidhya Gupta, Somesh Kumar Maurya and Uma Shanker Tiwary
Proactive Emotion Tracker: AI-Driven Continuous Mood and Emotion Monitoring
null
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
This research project aims to tackle the growing mental health challenges in today's digital age. It employs a modified pre-trained BERT model to detect depressive text within social media and users' web browsing data, achieving an impressive 93% test accuracy. Simultaneously, the project aims to incorporate physiological signals from wearable devices, such as smartwatches and EEG sensors, to provide long-term tracking and prognosis of mood disorders and emotional states. This comprehensive approach holds promise for enhancing early detection of depression and advancing overall mental health outcomes.
[ { "created": "Wed, 24 Jan 2024 15:05:11 GMT", "version": "v1" } ]
2024-01-26
[ [ "Asif", "Mohammad", "" ], [ "Mishra", "Sudhakar", "" ], [ "Sonker", "Ankush", "" ], [ "Gupta", "Sanidhya", "" ], [ "Maurya", "Somesh Kumar", "" ], [ "Tiwary", "Uma Shanker", "" ] ]
This research project aims to tackle the growing mental health challenges in today's digital age. It employs a modified pre-trained BERT model to detect depressive text within social media and users' web browsing data, achieving an impressive 93% test accuracy. Simultaneously, the project aims to incorporate physiological signals from wearable devices, such as smartwatches and EEG sensors, to provide long-term tracking and prognosis of mood disorders and emotional states. This comprehensive approach holds promise for enhancing early detection of depression and advancing overall mental health outcomes.
2009.07118
Timo Schick
Timo Schick, Hinrich Sch\"utze
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
Accepted at NAACL2021
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much "greener" in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models.
[ { "created": "Tue, 15 Sep 2020 14:18:53 GMT", "version": "v1" }, { "created": "Mon, 12 Apr 2021 08:16:59 GMT", "version": "v2" } ]
2021-04-13
[ [ "Schick", "Timo", "" ], [ "Schütze", "Hinrich", "" ] ]
When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much "greener" in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models.
2303.14501
Bastian Wittmann
Bastian Wittmann, Johannes C. Paetzold, Chinmay Prabhakar, Daniel Rueckert, Bjoern Menze
Link Prediction for Flow-Driven Spatial Networks
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Link prediction algorithms aim to infer the existence of connections (or links) between nodes in network-structured data and are typically applied to refine the connectivity among nodes. In this work, we focus on link prediction for flow-driven spatial networks, which are embedded in a Euclidean space and relate to physical exchange and transportation processes (e.g., blood flow in vessels or traffic flow in road networks). To this end, we propose the Graph Attentive Vectors (GAV) link prediction framework. GAV models simplified dynamics of physical flow in spatial networks via an attentive, neighborhood-aware message-passing paradigm, updating vector embeddings in a constrained manner. We evaluate GAV on eight flow-driven spatial networks given by whole-brain vessel graphs and road networks. GAV demonstrates superior performances across all datasets and metrics and outperformed the state-of-the-art on the ogbl-vessel benchmark at the time of submission by 12% (98.38 vs. 87.98 AUC). All code is publicly available on GitHub.
[ { "created": "Sat, 25 Mar 2023 15:42:27 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2024 20:26:45 GMT", "version": "v2" } ]
2024-01-22
[ [ "Wittmann", "Bastian", "" ], [ "Paetzold", "Johannes C.", "" ], [ "Prabhakar", "Chinmay", "" ], [ "Rueckert", "Daniel", "" ], [ "Menze", "Bjoern", "" ] ]
Link prediction algorithms aim to infer the existence of connections (or links) between nodes in network-structured data and are typically applied to refine the connectivity among nodes. In this work, we focus on link prediction for flow-driven spatial networks, which are embedded in a Euclidean space and relate to physical exchange and transportation processes (e.g., blood flow in vessels or traffic flow in road networks). To this end, we propose the Graph Attentive Vectors (GAV) link prediction framework. GAV models simplified dynamics of physical flow in spatial networks via an attentive, neighborhood-aware message-passing paradigm, updating vector embeddings in a constrained manner. We evaluate GAV on eight flow-driven spatial networks given by whole-brain vessel graphs and road networks. GAV demonstrates superior performances across all datasets and metrics and outperformed the state-of-the-art on the ogbl-vessel benchmark at the time of submission by 12% (98.38 vs. 87.98 AUC). All code is publicly available on GitHub.
2403.08502
Christos Papadimitriou
Christos Papadimitriou, Giorgos Filandrianos, Maria Lymperaiou, Giorgos Stamou
Masked Generative Story Transformer with Character Guidance and Caption Augmentation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Story Visualization (SV) is a challenging generative vision task that requires both visual quality and consistency between different frames in generated image sequences. Previous approaches either employ some kind of memory mechanism to maintain context throughout an auto-regressive generation of the image sequence, or model the generation of the characters and their background separately, to improve the rendering of characters. On the contrary, we embrace a completely parallel transformer-based approach, exclusively relying on Cross-Attention with past and future captions to achieve consistency. Additionally, we propose a Character Guidance technique to focus on the generation of characters in an implicit manner, by forming a combination of text-conditional and character-conditional logits in the logit space. We also employ a caption-augmentation technique, carried out by a Large Language Model (LLM), to enhance the robustness of our approach. The combination of these methods culminates in state-of-the-art (SOTA) results over various metrics in the most prominent SV benchmark (Pororo-SV), attained with constrained resources while achieving superior computational complexity compared to prior art. The validity of our quantitative results is supported by a human survey.
[ { "created": "Wed, 13 Mar 2024 13:10:20 GMT", "version": "v1" } ]
2024-03-14
[ [ "Papadimitriou", "Christos", "" ], [ "Filandrianos", "Giorgos", "" ], [ "Lymperaiou", "Maria", "" ], [ "Stamou", "Giorgos", "" ] ]
Story Visualization (SV) is a challenging generative vision task that requires both visual quality and consistency between different frames in generated image sequences. Previous approaches either employ some kind of memory mechanism to maintain context throughout an auto-regressive generation of the image sequence, or model the generation of the characters and their background separately, to improve the rendering of characters. On the contrary, we embrace a completely parallel transformer-based approach, exclusively relying on Cross-Attention with past and future captions to achieve consistency. Additionally, we propose a Character Guidance technique to focus on the generation of characters in an implicit manner, by forming a combination of text-conditional and character-conditional logits in the logit space. We also employ a caption-augmentation technique, carried out by a Large Language Model (LLM), to enhance the robustness of our approach. The combination of these methods culminates in state-of-the-art (SOTA) results over various metrics in the most prominent SV benchmark (Pororo-SV), attained with constrained resources while achieving superior computational complexity compared to prior art. The validity of our quantitative results is supported by a human survey.
2401.10189
Qingyun Wang
Qingyun Wang, Zixuan Zhang, Hongxiang Li, Xuan Liu, Jiawei Han, Huimin Zhao, Heng Ji
Chem-FINESE: Validating Fine-Grained Few-shot Entity Extraction through Text Reconstruction
16 pages. Accepted by Findings of the Association for Computational Linguistics: EACL 2024. Code and resources are available at https://github.com/EagleW/Chem-FINESE
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-grained few-shot entity extraction in the chemical domain faces two unique challenges. First, compared with entity extraction tasks in the general domain, sentences from chemical papers usually contain more entities. Moreover, entity extraction models usually have difficulty extracting entities of long-tailed types. In this paper, we propose Chem-FINESE, a novel sequence-to-sequence (seq2seq) based few-shot entity extraction approach, to address these two challenges. Our Chem-FINESE has two components: a seq2seq entity extractor to extract named entities from the input sentence and a seq2seq self-validation module to reconstruct the original input sentence from extracted entities. Inspired by the fact that a good entity extraction system needs to extract entities faithfully, our new self-validation module leverages entity extraction results to reconstruct the original input sentence. Besides, we design a new contrastive loss to reduce excessive copying during the extraction process. Finally, we release ChemNER+, a new fine-grained chemical entity extraction dataset that is annotated by domain experts with the ChemNER schema. Experiments in few-shot settings with both ChemNER+ and CHEMET datasets show that our newly proposed framework has contributed up to 8.26% and 6.84% absolute F1-score gains respectively.
[ { "created": "Thu, 18 Jan 2024 18:20:15 GMT", "version": "v1" }, { "created": "Sun, 21 Jan 2024 03:37:41 GMT", "version": "v2" }, { "created": "Thu, 25 Jan 2024 22:55:42 GMT", "version": "v3" }, { "created": "Wed, 29 May 2024 18:24:15 GMT", "version": "v4" } ]
2024-05-31
[ [ "Wang", "Qingyun", "" ], [ "Zhang", "Zixuan", "" ], [ "Li", "Hongxiang", "" ], [ "Liu", "Xuan", "" ], [ "Han", "Jiawei", "" ], [ "Zhao", "Huimin", "" ], [ "Ji", "Heng", "" ] ]
Fine-grained few-shot entity extraction in the chemical domain faces two unique challenges. First, compared with entity extraction tasks in the general domain, sentences from chemical papers usually contain more entities. Moreover, entity extraction models usually have difficulty extracting entities of long-tailed types. In this paper, we propose Chem-FINESE, a novel sequence-to-sequence (seq2seq) based few-shot entity extraction approach, to address these two challenges. Our Chem-FINESE has two components: a seq2seq entity extractor to extract named entities from the input sentence and a seq2seq self-validation module to reconstruct the original input sentence from extracted entities. Inspired by the fact that a good entity extraction system needs to extract entities faithfully, our new self-validation module leverages entity extraction results to reconstruct the original input sentence. Besides, we design a new contrastive loss to reduce excessive copying during the extraction process. Finally, we release ChemNER+, a new fine-grained chemical entity extraction dataset that is annotated by domain experts with the ChemNER schema. Experiments in few-shot settings with both ChemNER+ and CHEMET datasets show that our newly proposed framework has contributed up to 8.26% and 6.84% absolute F1-score gains respectively.
2109.06014
Aditi Chaudhary
Aditi Chaudhary, Kayo Yin, Antonios Anastasopoulos, Graham Neubig
When is Wall a Pared and when a Muro? -- Extracting Rules Governing Lexical Selection
Accepted at EMNLP 2021
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun "wall" has different lexical manifestations in Spanish -- "pared" refers to an indoor wall while "muro" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human- and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here (https://github.com/Aditi138/LexSelection)
[ { "created": "Mon, 13 Sep 2021 14:49:00 GMT", "version": "v1" } ]
2021-09-14
[ [ "Chaudhary", "Aditi", "" ], [ "Yin", "Kayo", "" ], [ "Anastasopoulos", "Antonios", "" ], [ "Neubig", "Graham", "" ] ]
Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun "wall" has different lexical manifestations in Spanish -- "pared" refers to an indoor wall while "muro" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human- and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here (https://github.com/Aditi138/LexSelection)
1708.04100
Ioannis Kokkinis
Ioannis Kokkinis
The Complexity of Probabilistic Justification Logic
presented to the 11th Panhellenic Logic Symposium (http://pls11.cs.ntua.gr/)
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic justification logic is a modal logic with two kinds of modalities: probability measures and explicit justification terms. We present a tableau procedure that can be used to decide the satisfiability problem for this logic in polynomial space. We show that this upper complexity bound is tight.
[ { "created": "Mon, 14 Aug 2017 12:42:45 GMT", "version": "v1" } ]
2017-08-15
[ [ "Kokkinis", "Ioannis", "" ] ]
Probabilistic justification logic is a modal logic with two kinds of modalities: probability measures and explicit justification terms. We present a tableau procedure that can be used to decide the satisfiability problem for this logic in polynomial space. We show that this upper complexity bound is tight.
2406.00409
Ahmed Heakl Mr
Mazen Balat, Youssef Mohamed, Ahmed Heakl, Ahmed Zaky
Arabic Handwritten Text for Person Biometric Identification: A Deep Learning Approach
6 pages, 11 figures, 4 tables, International IEEE Conference on the Intelligent Methods, Systems, and Applications (IMSA)
null
null
null
cs.CV cs.AI cs.LG cs.MM cs.NE
http://creativecommons.org/licenses/by/4.0/
This study thoroughly investigates how well deep learning models can recognize Arabic handwritten text for person biometric identification. It compares three advanced architectures -- ResNet50, MobileNetV2, and EfficientNetB7 -- using three widely recognized datasets: AHAWP, Khatt, and LAMIS-MSHD. Results show that EfficientNetB7 outperforms the others, achieving test accuracies of 98.57\%, 99.15\%, and 99.79\% on AHAWP, Khatt, and LAMIS-MSHD datasets, respectively. EfficientNetB7's exceptional performance is credited to its innovative techniques, including compound scaling, depth-wise separable convolutions, and squeeze-and-excitation blocks. These features allow the model to extract more abstract and distinctive features from handwritten text images. The study's findings hold significant implications for enhancing identity verification and authentication systems, highlighting the potential of deep learning in Arabic handwritten text recognition for person biometric identification.
[ { "created": "Sat, 1 Jun 2024 11:43:00 GMT", "version": "v1" } ]
2024-06-04
[ [ "Balat", "Mazen", "" ], [ "Mohamed", "Youssef", "" ], [ "Heakl", "Ahmed", "" ], [ "Zaky", "Ahmed", "" ] ]
This study thoroughly investigates how well deep learning models can recognize Arabic handwritten text for person biometric identification. It compares three advanced architectures -- ResNet50, MobileNetV2, and EfficientNetB7 -- using three widely recognized datasets: AHAWP, Khatt, and LAMIS-MSHD. Results show that EfficientNetB7 outperforms the others, achieving test accuracies of 98.57\%, 99.15\%, and 99.79\% on AHAWP, Khatt, and LAMIS-MSHD datasets, respectively. EfficientNetB7's exceptional performance is credited to its innovative techniques, including compound scaling, depth-wise separable convolutions, and squeeze-and-excitation blocks. These features allow the model to extract more abstract and distinctive features from handwritten text images. The study's findings hold significant implications for enhancing identity verification and authentication systems, highlighting the potential of deep learning in Arabic handwritten text recognition for person biometric identification.
2406.01937
Yiqiu Wang
Yiqiu Wang, Meixia Tao, and Shu Sun
Cram\'er-Rao Bound Analysis and Beamforming Design for Integrated Sensing and Communication with Extended Targets
Submitted to IEEE Transactions on Wireless Communications. arXiv admin note: text overlap with arXiv:2312.10641
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies an integrated sensing and communication (ISAC) system, where a multi-antenna base station transmits beamformed signals for joint downlink multi-user communication and radar sensing of an extended target (ET). By considering echo signals as reflections from valid elements on the ET contour, a set of novel Cram\'er-Rao bounds (CRBs) is derived for parameter estimation of the ET, including central range, direction, and orientation. The ISAC transmit beamforming design is then formulated as an optimization problem, aiming to minimize the CRB associated with radar sensing, while satisfying a minimum signal-to-interference-plus-noise ratio requirement for each communication user, along with a 3-dB beam coverage constraint tailored for the ET. To solve this non-convex problem, we utilize semidefinite relaxation (SDR) and propose a rank-one solution extraction scheme for non-tight relaxation circumstances. To reduce the computational complexity, we further employ an efficient zero-forcing (ZF) based beamforming design, where the sensing task is performed in the null space of communication channels. Numerical results validate the effectiveness of the obtained CRB, revealing the diverse features of CRB for differently shaped ETs. The proposed SDR beamforming design outperforms benchmark designs with lower estimation error and CRB, while the ZF beamforming design greatly improves computation efficiency with minor sensing performance loss.
[ { "created": "Tue, 4 Jun 2024 03:42:59 GMT", "version": "v1" } ]
2024-06-05
[ [ "Wang", "Yiqiu", "" ], [ "Tao", "Meixia", "" ], [ "Sun", "Shu", "" ] ]
This paper studies an integrated sensing and communication (ISAC) system, where a multi-antenna base station transmits beamformed signals for joint downlink multi-user communication and radar sensing of an extended target (ET). By considering echo signals as reflections from valid elements on the ET contour, a set of novel Cram\'er-Rao bounds (CRBs) is derived for parameter estimation of the ET, including central range, direction, and orientation. The ISAC transmit beamforming design is then formulated as an optimization problem, aiming to minimize the CRB associated with radar sensing, while satisfying a minimum signal-to-interference-plus-noise ratio requirement for each communication user, along with a 3-dB beam coverage constraint tailored for the ET. To solve this non-convex problem, we utilize semidefinite relaxation (SDR) and propose a rank-one solution extraction scheme for non-tight relaxation circumstances. To reduce the computational complexity, we further employ an efficient zero-forcing (ZF) based beamforming design, where the sensing task is performed in the null space of communication channels. Numerical results validate the effectiveness of the obtained CRB, revealing the diverse features of CRB for differently shaped ETs. The proposed SDR beamforming design outperforms benchmark designs with lower estimation error and CRB, while the ZF beamforming design greatly improves computation efficiency with minor sensing performance loss.
1411.4037
Vladimir Salnikov
Vladimir Salnikov, Sophie Lemaitre, Daniel Cho\"i, Philippe Karamian-Surville
Measure of combined effects of morphological parameters of inclusions within composite materials via stochastic homogenization to determine effective mechanical properties
23 pages, updated figures, version accepted to Composite Structures 2015
null
10.1016/j.compstruct.2015.03.076
null
cs.CE cond-mat.mtrl-sci
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In our previous papers we have described efficient and reliable methods of generation of representative volume elements (RVE) perfectly suitable for analysis of composite materials via stochastic homogenization. In this paper we profit from these methods to analyze the influence of the morphology on the effective mechanical properties of the samples. More precisely, we study the dependence of main mechanical characteristics of a composite medium on various parameters of the mixture of inclusions composed of spheres and cylinders. On top of that we introduce various imperfections to inclusions and observe the evolution of effective properties related to that. The main computational approach used throughout the work is the FFT-based homogenization technique, validated however by comparison with the direct finite element method. We give details on the features of the method and the validation campaign as well. Keywords: Composite materials, Cylindrical and spherical reinforcements, Mechanical properties, Stochastic homogenization.
[ { "created": "Fri, 14 Nov 2014 20:44:35 GMT", "version": "v1" }, { "created": "Thu, 9 Apr 2015 13:06:43 GMT", "version": "v2" } ]
2015-04-10
[ [ "Salnikov", "Vladimir", "" ], [ "Lemaitre", "Sophie", "" ], [ "Choï", "Daniel", "" ], [ "Karamian-Surville", "Philippe", "" ] ]
In our previous papers we have described efficient and reliable methods of generation of representative volume elements (RVE) perfectly suitable for analysis of composite materials via stochastic homogenization. In this paper we profit from these methods to analyze the influence of the morphology on the effective mechanical properties of the samples. More precisely, we study the dependence of main mechanical characteristics of a composite medium on various parameters of the mixture of inclusions composed of spheres and cylinders. On top of that we introduce various imperfections to inclusions and observe the evolution of effective properties related to that. The main computational approach used throughout the work is the FFT-based homogenization technique, validated however by comparison with the direct finite element method. We give details on the features of the method and the validation campaign as well. Keywords: Composite materials, Cylindrical and spherical reinforcements, Mechanical properties, Stochastic homogenization.
2310.10295
Stefano Zacchiroli
Roberto Di Cosmo (UPCit\'e), Stefano Zacchiroli (IP Paris, LTCI)
The Software Heritage Open Science Ecosystem
null
Software Ecosystems, Springer International Publishing, pp.33-61, 2023
10.1007/978-3-031-36060-2_2
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software Heritage is the largest public archive of software source code and associated development history, as captured by modern version control systems. As of July 2023, it has archived more than 16 billion unique source code files coming from more than 250 million collaborative development projects. In this chapter, we describe the Software Heritage ecosystem, focusing on research and open science use cases. On the one hand, Software Heritage supports empirical research on software by materializing in a single Merkle directed acyclic graph the development history of public code. This giant graph of source code artifacts (files, directories, and commits) can be used -- and has been used -- to study repository forks, open source contributors, vulnerability propagation, software provenance tracking, source code indexing, and more. On the other hand, Software Heritage ensures availability and guarantees integrity of the source code of software artifacts used in any field that relies on software to conduct experiments, contributing to making research reproducible. The source code used in scientific experiments can be archived -- e.g., via integration with open-access repositories -- referenced using persistent identifiers that allow downstream integrity checks, and linked to/from other scholarly digital artifacts.
[ { "created": "Mon, 16 Oct 2023 11:32:03 GMT", "version": "v1" } ]
2023-10-17
[ [ "Di Cosmo", "Roberto", "", "UPCité" ], [ "Zacchiroli", "Stefano", "", "IP Paris, LTCI" ] ]
Software Heritage is the largest public archive of software source code and associated development history, as captured by modern version control systems. As of July 2023, it has archived more than 16 billion unique source code files coming from more than 250 million collaborative development projects. In this chapter, we describe the Software Heritage ecosystem, focusing on research and open science use cases. On the one hand, Software Heritage supports empirical research on software by materializing in a single Merkle directed acyclic graph the development history of public code. This giant graph of source code artifacts (files, directories, and commits) can be used -- and has been used -- to study repository forks, open source contributors, vulnerability propagation, software provenance tracking, source code indexing, and more. On the other hand, Software Heritage ensures availability and guarantees integrity of the source code of software artifacts used in any field that relies on software to conduct experiments, contributing to making research reproducible. The source code used in scientific experiments can be archived -- e.g., via integration with open-access repositories -- referenced using persistent identifiers that allow downstream integrity checks, and linked to/from other scholarly digital artifacts.
2012.13695
Sagar Gubbi Venkatesh
Sagar Gubbi Venkatesh and Raviteja Upadrashta and Bharadwaj Amrutur
Translating Natural Language Instructions to Computer Programs for Robot Manipulation
Submitted to IROS 2021
null
null
null
cs.RO cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is highly desirable for robots that work alongside humans to be able to understand instructions in natural language. Existing language-conditioned imitation learning models directly predict actuator commands from the image observation and the instruction text. Rather than directly predicting actuator commands, we propose translating the natural language instruction to a Python function that queries the scene by accessing the output of the object detector and controls the robot to perform the specified task. This enables the use of non-differentiable modules such as a constraint solver when computing commands to the robot. Moreover, the labels in this setup are computer programs that capture the intent of the expert, and are significantly more informative than teleoperated demonstrations. We show that the proposed method performs better than training a neural network to directly predict the robot actions.
[ { "created": "Sat, 26 Dec 2020 07:57:55 GMT", "version": "v1" }, { "created": "Sat, 20 Mar 2021 07:33:27 GMT", "version": "v2" } ]
2021-03-23
[ [ "Venkatesh", "Sagar Gubbi", "" ], [ "Upadrashta", "Raviteja", "" ], [ "Amrutur", "Bharadwaj", "" ] ]
It is highly desirable for robots that work alongside humans to be able to understand instructions in natural language. Existing language-conditioned imitation learning models directly predict actuator commands from the image observation and the instruction text. Rather than directly predicting actuator commands, we propose translating the natural language instruction to a Python function that queries the scene by accessing the output of the object detector and controls the robot to perform the specified task. This enables the use of non-differentiable modules such as a constraint solver when computing commands to the robot. Moreover, the labels in this setup are computer programs that capture the intent of the expert, and are significantly more informative than teleoperated demonstrations. We show that the proposed method performs better than training a neural network to directly predict the robot actions.
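The kind of translated program this abstract describes can be illustrated with a self-contained toy sketch. The `detect_objects` stub, the `Robot` class, and the instruction itself are hypothetical stand-ins, not artifacts from the paper; the point is only the pattern of querying the object detector's output and then issuing robot commands:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    position: tuple

def detect_objects(scene_image):
    """Stub standing in for the object detector's output."""
    return [Detection("red block", (0.2, 0.3)), Detection("tray", (0.6, 0.1))]

class Robot:
    """Stub robot that records the commands it receives."""
    def __init__(self):
        self.log = []
    def pick(self, pos):
        self.log.append(("pick", pos))
    def place(self, pos):
        self.log.append(("place", pos))

def put_red_block_on_tray(scene_image, robot):
    """Hypothetical translated program for the instruction
    'put the red block on the tray': query the scene, then act."""
    detections = detect_objects(scene_image)
    block = next(d for d in detections if d.label == "red block")
    tray = next(d for d in detections if d.label == "tray")
    robot.pick(block.position)
    robot.place(tray.position)
```

Because the label is a program rather than a trajectory, non-differentiable steps (such as a constraint solver between `detect_objects` and `pick`) fit naturally into this pipeline.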
1408.6736
Haya Shajaiah
Haya Shajaiah, Ahmed Abdelhadi, and Charles Clancy
Impact of Radar and Communication Coexistence on Radar's Detectable Target Parameters
Submitted to IEEE
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present our spectrum sharing algorithm between a multi-input multi-output (MIMO) radar and a Long Term Evolution (LTE) cellular system with multiple base stations (BSs). We analyze the performance of MIMO radars in detecting the angle of arrival, propagation delay and Doppler angular frequency by projecting orthogonal waveforms onto the null space of the interference channel matrix. We compare and analyze the radar's detectable target parameters in the case of the original radar waveform and the case of the null-projected radar waveform. Our proposed spectrum sharing algorithm causes minimal loss in radar performance by selecting the best interference channel, such that the radar signal does not cause interference to the i-th LTE base station. We show through our analytical and simulation results that the loss in radar performance in detecting the target parameters is minimal when our proposed spectrum sharing algorithm is used to select the best channel onto which radar signals are projected.
[ { "created": "Thu, 28 Aug 2014 14:40:02 GMT", "version": "v1" } ]
2014-08-29
[ [ "Shajaiah", "Haya", "" ], [ "Abdelhadi", "Ahmed", "" ], [ "Clancy", "Charles", "" ] ]
In this paper, we present our spectrum sharing algorithm between a multi-input multi-output (MIMO) radar and a Long Term Evolution (LTE) cellular system with multiple base stations (BSs). We analyze the performance of MIMO radars in detecting the angle of arrival, propagation delay and Doppler angular frequency by projecting orthogonal waveforms onto the null space of the interference channel matrix. We compare and analyze the radar's detectable target parameters in the case of the original radar waveform and the case of the null-projected radar waveform. Our proposed spectrum sharing algorithm causes minimal loss in radar performance by selecting the best interference channel, such that the radar signal does not cause interference to the i-th LTE base station. We show through our analytical and simulation results that the loss in radar performance in detecting the target parameters is minimal when our proposed spectrum sharing algorithm is used to select the best channel onto which radar signals are projected.
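The null-space projection this abstract relies on can be sketched with generic linear algebra (this is not the paper's full waveform design, only the core operation): if H is the interference channel matrix, projecting the radar waveform onto the null space of H guarantees the projected signal causes no interference at the corresponding receiver.

```python
import numpy as np

def null_space_projector(H):
    """Orthogonal projector onto the null space of the interference
    channel matrix H: for any waveform x, H @ (P @ x) = 0, so the
    null-projected radar signal causes no interference at the receiver."""
    n = H.shape[1]
    return np.eye(n) - np.linalg.pinv(H) @ H
```

Projecting onto this subspace sacrifices some radar degrees of freedom, which is exactly the target-parameter detection loss the paper quantifies and minimizes by choosing the best channel to project onto.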
2108.13873
Qiongkai Xu
Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs
COLING 2022 (oral)
null
null
null
cs.CR cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine-learning-as-a-service (MLaaS) has attracted millions of users to its splendid large-scale models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works have demonstrated that attackers manage to steal or extract the victim models. Nonetheless, none of the previously stolen models could outperform the original black-box APIs. In this work, we combine unsupervised domain adaptation and multi-victim ensembling to show that attackers could potentially surpass victims, which goes beyond the previous understanding of model extraction. Extensive experiments on both benchmark datasets and real-world APIs validate that imitators can succeed in outperforming the original black-box models on transferred domains. We consider our work a milestone in the research on imitation attacks, especially on NLP APIs, as the superior performance could influence the defense or even the publishing strategy of API providers.
[ { "created": "Sun, 29 Aug 2021 10:52:04 GMT", "version": "v1" }, { "created": "Sun, 4 Sep 2022 12:42:05 GMT", "version": "v2" } ]
2022-09-07
[ [ "Xu", "Qiongkai", "" ], [ "He", "Xuanli", "" ], [ "Lyu", "Lingjuan", "" ], [ "Qu", "Lizhen", "" ], [ "Haffari", "Gholamreza", "" ] ]
Machine-learning-as-a-service (MLaaS) has attracted millions of users to its splendid large-scale models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works have demonstrated that attackers manage to steal or extract the victim models. Nonetheless, none of the previously stolen models could outperform the original black-box APIs. In this work, we combine unsupervised domain adaptation and multi-victim ensembling to show that attackers could potentially surpass victims, which goes beyond the previous understanding of model extraction. Extensive experiments on both benchmark datasets and real-world APIs validate that imitators can succeed in outperforming the original black-box models on transferred domains. We consider our work a milestone in the research on imitation attacks, especially on NLP APIs, as the superior performance could influence the defense or even the publishing strategy of API providers.
2111.11072
Ashutosh Shankar
Siddharth Bhandari, Prahladh Harsha, Mrinal Kumar, Ashutosh Shankar
Algorithmizing the Multiplicity Schwartz-Zippel Lemma
null
In Proc. 34th SODA, pages 2816-2835, 2023
10.1137/1.9781611977554.ch106
null
cs.CC cs.DM cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
The multiplicity Schwartz-Zippel lemma asserts that over a field, a low-degree polynomial cannot vanish with high multiplicity very often on a sufficiently large product set. Since its discovery in a work of Dvir, Kopparty, Saraf and Sudan [SIAM J. Comput., 2013], the lemma has found numerous applications in both math and computer science; in particular, in the definition and properties of multiplicity codes by Kopparty, Saraf and Yekhanin [J. ACM, 2014]. In this work, we show how to algorithmize the multiplicity Schwartz-Zippel lemma for arbitrary product sets over any field. In other words, we give an efficient algorithm for unique decoding of multivariate multiplicity codes from half their minimum distance on arbitrary product sets over all fields. Previously, such an algorithm was known either when the underlying product set had a nice algebraic structure (for instance, when it was a subfield; Kopparty [ToC, 2015]), or when the underlying field had large (or zero) characteristic, the multiplicity parameter was sufficiently large and the multiplicity code had distance bounded away from $1$ (Bhandari, Harsha, Kumar and Sudan [STOC 2021]). In particular, even unique decoding of bivariate multiplicity codes with multiplicity two from half their minimum distance was not known over arbitrary product sets over any field. Our algorithm builds upon a result of Kim and Kopparty [ToC, 2017], who gave an algorithmic version of the Schwartz-Zippel lemma (without multiplicities) or, equivalently, an efficient algorithm for unique decoding of Reed-Muller codes over arbitrary product sets. We introduce a refined notion of distance based on the multiplicity Schwartz-Zippel lemma and design a unique decoding algorithm for this distance measure. Along the way, we give an alternate analysis of Forney's classical generalized minimum distance decoder that might be of independent interest.
[ { "created": "Mon, 22 Nov 2021 09:35:38 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2022 05:33:21 GMT", "version": "v2" } ]
2023-12-27
[ [ "Bhandari", "Siddharth", "" ], [ "Harsha", "Prahladh", "" ], [ "Kumar", "Mrinal", "" ], [ "Shankar", "Ashutosh", "" ] ]
The multiplicity Schwartz-Zippel lemma asserts that over a field, a low-degree polynomial cannot vanish with high multiplicity very often on a sufficiently large product set. Since its discovery in a work of Dvir, Kopparty, Saraf and Sudan [SIAM J. Comput., 2013], the lemma has found numerous applications in both math and computer science; in particular, in the definition and properties of multiplicity codes by Kopparty, Saraf and Yekhanin [J. ACM, 2014]. In this work, we show how to algorithmize the multiplicity Schwartz-Zippel lemma for arbitrary product sets over any field. In other words, we give an efficient algorithm for unique decoding of multivariate multiplicity codes from half their minimum distance on arbitrary product sets over all fields. Previously, such an algorithm was known either when the underlying product set had a nice algebraic structure (for instance, when it was a subfield; Kopparty [ToC, 2015]), or when the underlying field had large (or zero) characteristic, the multiplicity parameter was sufficiently large and the multiplicity code had distance bounded away from $1$ (Bhandari, Harsha, Kumar and Sudan [STOC 2021]). In particular, even unique decoding of bivariate multiplicity codes with multiplicity two from half their minimum distance was not known over arbitrary product sets over any field. Our algorithm builds upon a result of Kim and Kopparty [ToC, 2017], who gave an algorithmic version of the Schwartz-Zippel lemma (without multiplicities) or, equivalently, an efficient algorithm for unique decoding of Reed-Muller codes over arbitrary product sets. We introduce a refined notion of distance based on the multiplicity Schwartz-Zippel lemma and design a unique decoding algorithm for this distance measure. Along the way, we give an alternate analysis of Forney's classical generalized minimum distance decoder that might be of independent interest.
1308.6118
Sorin Alupoaie
Sorin Alupoaie, P\'adraig Cunningham
Using tf-idf as an edge weighting scheme in user-object bipartite networks
null
null
null
null
cs.SI cs.IR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bipartite user-object networks are becoming increasingly popular for representing user interaction data in web and e-commerce environments. They have certain characteristics and challenges that differentiate them from other bipartite networks. This paper analyzes the properties of five real-world user-object networks. In all cases we found a heavy-tailed object degree distribution, with popular objects connecting together a large part of the users and causing significant edge inflation in the projected user network. We propose a novel edge weighting strategy based on tf-idf and show that the new scheme improves both the density and the quality of the community structure in the projections. The improvement also holds when comparing to partially random networks.
[ { "created": "Wed, 28 Aug 2013 10:25:02 GMT", "version": "v1" } ]
2013-08-29
[ [ "Alupoaie", "Sorin", "" ], [ "Cunningham", "Pádraig", "" ] ]
Bipartite user-object networks are becoming increasingly popular for representing user interaction data in web and e-commerce environments. They have certain characteristics and challenges that differentiate them from other bipartite networks. This paper analyzes the properties of five real-world user-object networks. In all cases we found a heavy-tailed object degree distribution, with popular objects connecting together a large part of the users and causing significant edge inflation in the projected user network. We propose a novel edge weighting strategy based on tf-idf and show that the new scheme improves both the density and the quality of the community structure in the projections. The improvement also holds when comparing to partially random networks.
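The abstract does not spell out how tf-idf is adapted to edges; one natural reading (an illustrative sketch, not necessarily the paper's exact scheme) treats each user as a document and each object as a term, so that edges to objects connected to almost every user are down-weighted:

```python
import math
from collections import defaultdict

def tfidf_edge_weights(interactions):
    """Weight user-object edges by tf-idf.

    interactions: dict mapping user -> {object: interaction count}.
    Each user plays the role of a document and each object the role of
    a term, so edges to very popular objects receive low weight.
    """
    n_users = len(interactions)
    # "document frequency": number of users connected to each object
    df = defaultdict(int)
    for objs in interactions.values():
        for obj in objs:
            df[obj] += 1

    weights = {}
    for user, objs in interactions.items():
        total = sum(objs.values())
        for obj, count in objs.items():
            tf = count / total
            idf = math.log(n_users / df[obj])
            weights[(user, obj)] = tf * idf
    return weights
```

An object connected to every user gets idf log(1) = 0, so its edges vanish from the weighted projection, which is exactly the edge-inflation effect the abstract attributes to popular objects.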
1901.09888
Muhammad Ammad-Ud-Din Ph.D.
Muhammad Ammad-ud-din, Elena Ivannikova, Suleiman A. Khan, Were Oyomno, Qiang Fu, Kuan Eeik Tan and Adrian Flanagan
Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System
12 pages, 2 figures, 2 tables, submitted to a conference
null
null
null
cs.IR cs.AI cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
The increasing interest in user privacy is leading to new privacy-preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, and the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, which is then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a collaborative filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms that a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance.
[ { "created": "Tue, 29 Jan 2019 14:18:38 GMT", "version": "v1" } ]
2019-01-30
[ [ "Ammad-ud-din", "Muhammad", "" ], [ "Ivannikova", "Elena", "" ], [ "Khan", "Suleiman A.", "" ], [ "Oyomno", "Were", "" ], [ "Fu", "Qiang", "" ], [ "Tan", "Kuan Eeik", "" ], [ "Flanagan", "Adrian", "" ] ]
The increasing interest in user privacy is leading to new privacy-preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, and the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, which is then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a collaborative filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms that a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance.
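The federated stochastic-gradient pattern the abstract outlines can be sketched for a squared-error matrix-factorization objective. The split of responsibilities (user factors and raw feedback stay on the device; only item-factor gradients travel to the server) follows the description above, while the particular loss, learning rate, and regularization here are illustrative assumptions:

```python
import numpy as np

def client_update(user_vec, item_mat, ratings, lr=0.05, reg=0.01):
    """One local step on a single client.

    user_vec: this user's latent factors (kept on the device).
    item_mat: current master item-factor matrix (k x n_items).
    ratings: dict item_id -> implicit feedback value.
    Returns the updated user vector and the item-factor gradient,
    which is the only thing sent back to the server.
    """
    item_grad = np.zeros_like(item_mat)
    for item, r in ratings.items():
        err = r - user_vec @ item_mat[:, item]
        # local gradient step on the private user factors
        user_vec = user_vec + lr * (err * item_mat[:, item] - reg * user_vec)
        # gradient contribution for the shared item factors
        item_grad[:, item] += err * user_vec - reg * item_mat[:, item]
    return user_vec, item_grad

def server_aggregate(item_mat, grads, lr=0.05):
    """Server averages the clients' gradients and updates the master model."""
    return item_mat + lr * sum(grads) / len(grads)
```

Each round redistributes the updated `item_mat` to the clients, so the raw feedback never leaves the device, matching the privacy argument in the abstract.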
1911.06226
Olivier Spanjaard
Hugo Gilbert, Tom Portoleau, Olivier Spanjaard
Beyond Pairwise Comparisons in Social Choice: A Setwise Kemeny Aggregation Problem
36 pages, extends a work published at AAAI 2020. Compared to the previous version on arXiv, some notations have been changed, and section 5 has been added
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we advocate the use of setwise contests for aggregating a set of input rankings into an output ranking. We propose a generalization of the Kemeny rule in which one minimizes the number of k-wise disagreements instead of pairwise disagreements (one counts one disagreement each time the top choice in a subset of alternatives of cardinality at most k differs between an input ranking and the output ranking). After an algorithmic study of this k-wise Kemeny aggregation problem, we introduce a k-wise counterpart of the majority graph. This graph proves useful for dividing the aggregation problem into several sub-problems, which speeds up the exact computation of a consensus ranking. By introducing a k-wise counterpart of the Spearman distance, we also provide a 2-approximation algorithm for the k-wise Kemeny aggregation problem. We conclude with numerical tests.
[ { "created": "Thu, 14 Nov 2019 16:37:00 GMT", "version": "v1" }, { "created": "Wed, 9 Feb 2022 15:18:48 GMT", "version": "v2" } ]
2022-02-10
[ [ "Gilbert", "Hugo", "" ], [ "Portoleau", "Tom", "" ], [ "Spanjaard", "Olivier", "" ] ]
In this paper, we advocate the use of setwise contests for aggregating a set of input rankings into an output ranking. We propose a generalization of the Kemeny rule in which one minimizes the number of k-wise disagreements instead of pairwise disagreements (one counts one disagreement each time the top choice in a subset of alternatives of cardinality at most k differs between an input ranking and the output ranking). After an algorithmic study of this k-wise Kemeny aggregation problem, we introduce a k-wise counterpart of the majority graph. This graph proves useful for dividing the aggregation problem into several sub-problems, which speeds up the exact computation of a consensus ranking. By introducing a k-wise counterpart of the Spearman distance, we also provide a 2-approximation algorithm for the k-wise Kemeny aggregation problem. We conclude with numerical tests.
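The k-wise disagreement count defined above (one disagreement per subset of cardinality at most k whose top choice differs between two rankings) can be written down directly as a brute-force sketch:

```python
from itertools import combinations

def kwise_disagreements(r1, r2, k):
    """Count subsets S of alternatives, 2 <= |S| <= k, whose top-ranked
    element differs between rankings r1 and r2.

    A ranking is a list of alternatives from best to worst; for k = 2
    this reduces to the usual pairwise (Kendall tau) disagreement count.
    """
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    count = 0
    for size in range(2, k + 1):
        for subset in combinations(r1, size):
            top1 = min(subset, key=pos1.get)  # best element of S under r1
            top2 = min(subset, key=pos2.get)  # best element of S under r2
            if top1 != top2:
                count += 1
    return count
```

The enumeration visits O(n^k) subsets, which is why the algorithmics of the problem (and the decomposition via the k-wise majority graph) matter.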
2001.05586
Xiaoshen Song
Xiaoshen Song, Giuseppe Caire
Queue-Aware Beam Scheduling for Half-Duplex mmWave Relay Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Millimeter wave (mmWave) bands are considered a powerful key enabler for next-generation (5G) mobile networks, providing multi-Gbps data rates. However, their severe pathloss and sensitivity to blockage present challenges for practical implementation. One effective way to mitigate these effects and to increase the communication range is beamforming in combination with relaying. In this paper, we focus on two typical mmWave relay networks and, for each network, propose three beam scheduling methods to approach the network's information-theoretic capacity. The proposed beam scheduling methods include the deterministic horizontal continuous edge coloring (HC-EC) scheduler, the adaptive back pressure (BP) scheduler, and the adaptive low-delay new back pressure (newBP) scheduler. With the aid of computer simulations, we show that, within the network capacity range, the proposed schedulers provide good guarantees of network stability while achieving very low end-to-end packet delay.
[ { "created": "Wed, 15 Jan 2020 22:46:55 GMT", "version": "v1" } ]
2020-01-17
[ [ "Song", "Xiaoshen", "" ], [ "Caire", "Giuseppe", "" ] ]
Millimeter wave (mmWave) bands are considered a powerful key enabler for next-generation (5G) mobile networks, providing multi-Gbps data rates. However, their severe pathloss and sensitivity to blockage present challenges for practical implementation. One effective way to mitigate these effects and to increase the communication range is beamforming in combination with relaying. In this paper, we focus on two typical mmWave relay networks and, for each network, propose three beam scheduling methods to approach the network's information-theoretic capacity. The proposed beam scheduling methods include the deterministic horizontal continuous edge coloring (HC-EC) scheduler, the adaptive back pressure (BP) scheduler, and the adaptive low-delay new back pressure (newBP) scheduler. With the aid of computer simulations, we show that, within the network capacity range, the proposed schedulers provide good guarantees of network stability while achieving very low end-to-end packet delay.
2102.08192
Federico Chiariotti
Federico Chiariotti
A Survey on 360-Degree Video: Coding, Quality of Experience and Streaming
Submitted to Elsevier Computer Communications
null
null
null
cs.MM cs.NI eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The commercialization of Virtual Reality (VR) headsets has made immersive and 360-degree video streaming the subject of intense interest in the industry and research communities. While the basic principles of video streaming are the same, immersive video presents a set of specific challenges that need to be addressed. In this survey, we present the latest developments in the relevant literature on four of the most important ones: (i) omnidirectional video coding and compression, (ii) subjective and objective Quality of Experience (QoE) and the factors that can affect it, (iii) saliency measurement and Field of View (FoV) prediction, and (iv) the adaptive streaming of immersive 360-degree videos. The final objective of the survey is to provide an overview of the research on all the elements of an immersive video streaming system, giving the reader an understanding of their interplay and performance.
[ { "created": "Tue, 16 Feb 2021 14:39:59 GMT", "version": "v1" } ]
2021-02-17
[ [ "Chiariotti", "Federico", "" ] ]
The commercialization of Virtual Reality (VR) headsets has made immersive and 360-degree video streaming the subject of intense interest in the industry and research communities. While the basic principles of video streaming are the same, immersive video presents a set of specific challenges that need to be addressed. In this survey, we present the latest developments in the relevant literature on four of the most important ones: (i) omnidirectional video coding and compression, (ii) subjective and objective Quality of Experience (QoE) and the factors that can affect it, (iii) saliency measurement and Field of View (FoV) prediction, and (iv) the adaptive streaming of immersive 360-degree videos. The final objective of the survey is to provide an overview of the research on all the elements of an immersive video streaming system, giving the reader an understanding of their interplay and performance.
1904.08847
Ezio Bartocci
Ezio Bartocci, Luca Bortolussi, Michele Loreti, Laura Nenzi
Monitoring Mobile and Spatially Distributed Cyber-Physical Systems
null
MEMOCODE 2017, ACM, pp 146--155, 2017
10.1145/3127041.3127050
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyber-Physical Systems (CPS) consist of collaborative, networked and tightly intertwined computational (logical) and physical components, each operating at different spatial and temporal scales. Hence, spatial and temporal requirements play an essential role in their correct and safe execution. Furthermore, the local interactions among the system components result in global spatio-temporal emergent behaviors that are often impossible to predict at design time. In this work, we pursue a complementary approach by introducing STREL, a novel spatio-temporal logic that enables the specification of spatio-temporal requirements and their monitoring over the execution of mobile and spatially distributed CPS. Our logic extends Signal Temporal Logic with two novel spatial operators, reach and escape, from which it is possible to derive other spatial modalities such as everywhere, somewhere and surround. These operators enable a monitoring procedure where the satisfaction of the property at each location depends only on the satisfaction at its neighbours, opening the way to future distributed online monitoring algorithms. We propose both a qualitative and a quantitative semantics based on constraint semirings, an algebraic structure suitable for constraint satisfaction and optimisation. We prove that, for a subclass of models, all the spatial properties expressed with reach and escape, using the Euclidean distance, are preserved under the model transformations of rotation, reflection and translation. Finally, we provide an offline monitoring algorithm for STREL and, to demonstrate the feasibility of our approach, we show its application to the monitoring of a simulated mobile ad-hoc sensor network as a running example.
[ { "created": "Mon, 15 Apr 2019 22:02:33 GMT", "version": "v1" } ]
2019-04-19
[ [ "Bartocci", "Ezio", "" ], [ "Bortolussi", "Luca", "" ], [ "Loreti", "Michele", "" ], [ "Nenzi", "Laura", "" ] ]
Cyber-Physical Systems (CPS) consist of collaborative, networked and tightly intertwined computational (logical) and physical components, each operating at different spatial and temporal scales. Hence, spatial and temporal requirements play an essential role in their correct and safe execution. Furthermore, the local interactions among the system components result in global spatio-temporal emergent behaviors that are often impossible to predict at design time. In this work, we pursue a complementary approach by introducing STREL, a novel spatio-temporal logic that enables the specification of spatio-temporal requirements and their monitoring over the execution of mobile and spatially distributed CPS. Our logic extends Signal Temporal Logic with two novel spatial operators, reach and escape, from which it is possible to derive other spatial modalities such as everywhere, somewhere and surround. These operators enable a monitoring procedure where the satisfaction of the property at each location depends only on the satisfaction at its neighbours, opening the way to future distributed online monitoring algorithms. We propose both a qualitative and a quantitative semantics based on constraint semirings, an algebraic structure suitable for constraint satisfaction and optimisation. We prove that, for a subclass of models, all the spatial properties expressed with reach and escape, using the Euclidean distance, are preserved under the model transformations of rotation, reflection and translation. Finally, we provide an offline monitoring algorithm for STREL and, to demonstrate the feasibility of our approach, we show its application to the monitoring of a simulated mobile ad-hoc sensor network as a running example.
2112.00250
Zhonghao Chen
Hongmin Gao, Zhonghao Chen, and Chenming Li
Shallow Network Based on Depthwise Over-Parameterized Convolution for Hyperspectral Image Classification
null
null
10.1109/LGRS.2021.3133598
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Recently, convolutional neural network (CNN) techniques have gained popularity as a tool for hyperspectral image classification (HSIC). To improve the feature extraction efficiency of HSIC under the condition of limited samples, current methods generally use deep models with many layers. However, deep network models are prone to overfitting and gradient vanishing problems when samples are limited. In addition, the spatial resolution decreases severely as depth increases, which is very detrimental to spatial edge feature extraction. Therefore, this letter proposes a shallow model for HSIC, called the depthwise over-parameterized convolutional neural network (DOCNN). To ensure effective feature extraction by the shallow model, the depthwise over-parameterized convolution (DO-Conv) kernel is introduced to extract discriminative features. The DO-Conv kernel is composed of a standard convolution kernel and a depthwise convolution kernel, which can extract the spatial features of the different channels individually and fuse the spatial features of all channels simultaneously. Moreover, to further reduce the loss of spatial edge features due to the convolution operation, a dense residual connection (DRC) structure is proposed and applied to the feature extraction part of the whole network. Experimental results obtained from three benchmark data sets show that the proposed method outperforms other state-of-the-art methods in terms of classification accuracy and computational efficiency.
[ { "created": "Wed, 1 Dec 2021 03:10:02 GMT", "version": "v1" } ]
2023-04-06
[ [ "Gao", "Hongmin", "" ], [ "Chen", "Zhonghao", "" ], [ "Li", "Chenming", "" ] ]
Recently, convolutional neural network (CNN) techniques have gained popularity as a tool for hyperspectral image classification (HSIC). To improve the feature extraction efficiency of HSIC under the condition of limited samples, current methods generally use deep models with many layers. However, deep network models are prone to overfitting and gradient vanishing problems when samples are limited. In addition, the spatial resolution decreases severely as depth increases, which is very detrimental to spatial edge feature extraction. Therefore, this letter proposes a shallow model for HSIC, called the depthwise over-parameterized convolutional neural network (DOCNN). To ensure effective feature extraction by the shallow model, the depthwise over-parameterized convolution (DO-Conv) kernel is introduced to extract discriminative features. The DO-Conv kernel is composed of a standard convolution kernel and a depthwise convolution kernel, which can extract the spatial features of the different channels individually and fuse the spatial features of all channels simultaneously. Moreover, to further reduce the loss of spatial edge features due to the convolution operation, a dense residual connection (DRC) structure is proposed and applied to the feature extraction part of the whole network. Experimental results obtained from three benchmark data sets show that the proposed method outperforms other state-of-the-art methods in terms of classification accuracy and computational efficiency.
2012.08729
Yixuan Wang
Ranjan Pal, Junhui Li, Yixuan Wang, Mingyan Liu, Swades De, and Jon Crowcroft
Data Trading with a Monopoly Social Network: Outcomes are Mostly Privacy Welfare Damaging
incrementally updated version to version in IEEE Networking Letters; This work is based upon results in NBER w26296
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
This paper argues that data of strategic individuals with heterogeneous privacy valuations in a distributed online social network (e.g., Facebook) will be under-priced if traded in a monopoly buyer setting, and will lead to diminishing utilitarian welfare. This result, for a certain family of online community data trading problems, is in stark contrast to a popular information economics intuition that increased amounts of end-user data signals in a data market improve its efficiency. Our proposed theory paves the way for a future (counter-intuitive) analysis of data trading oligopoly markets for online social networks (OSNs).
[ { "created": "Wed, 16 Dec 2020 04:00:39 GMT", "version": "v1" }, { "created": "Wed, 24 Nov 2021 05:04:15 GMT", "version": "v2" } ]
2021-11-25
[ [ "Pal", "Ranjan", "" ], [ "Li", "Junhui", "" ], [ "Wang", "Yixuan", "" ], [ "Liu", "Mingyan", "" ], [ "De", "Swades", "" ], [ "Crowcroft", "Jon", "" ] ]
This paper argues that data of strategic individuals with heterogeneous privacy valuations in a distributed online social network (e.g., Facebook) will be under-priced, if traded in a monopoly buyer setting, and will lead to diminishing utilitarian welfare. This result, for a certain family of online community data trading problems, is in stark contrast to a popular information economics intuition that increased amounts of end-user data signals in a data market improves its efficiency. Our proposed theory paves the way for a future (counter-intuitive) analysis of data trading oligopoly markets for online social networks (OSNs).
1803.01580
Andrew Krizhanovsky A
Andrew Krizhanovsky, Alexander Kirillov
Calculated attributes of synonym sets
6 pages, 2 tables, 2 figures, preprint
null
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
The goal of formalization, proposed in this paper, is to bring together, as near as possible, the theoretic linguistic problem of synonym conception and the computer linguistic methods based generally on empirical intuitive unjustified factors. Using the word vector representation we have proposed the geometric approach to mathematical modeling of synonym set (synset). The word embedding is based on the neural networks (Skip-gram, CBOW), developed and realized as word2vec program by T. Mikolov. The standard cosine similarity is used as the distance between word-vectors. Several geometric characteristics of the synset words are introduced: the interior of synset, the synset word rank and centrality. These notions are intended to select the most significant synset words, i.e. the words which senses are the nearest to the sense of a synset. Some experiments with proposed notions, based on RusVectores resources, are represented. A brief description of this work can be viewed in slides https://goo.gl/K82Fei
[ { "created": "Mon, 5 Mar 2018 10:09:32 GMT", "version": "v1" } ]
2018-03-06
[ [ "Krizhanovsky", "Andrew", "" ], [ "Kirillov", "Alexander", "" ] ]
The goal of formalization, proposed in this paper, is to bring together, as near as possible, the theoretic linguistic problem of synonym conception and the computer linguistic methods based generally on empirical intuitive unjustified factors. Using the word vector representation we have proposed the geometric approach to mathematical modeling of synonym set (synset). The word embedding is based on the neural networks (Skip-gram, CBOW), developed and realized as word2vec program by T. Mikolov. The standard cosine similarity is used as the distance between word-vectors. Several geometric characteristics of the synset words are introduced: the interior of synset, the synset word rank and centrality. These notions are intended to select the most significant synset words, i.e. the words which senses are the nearest to the sense of a synset. Some experiments with proposed notions, based on RusVectores resources, are represented. A brief description of this work can be viewed in slides https://goo.gl/K82Fei
2310.06556
Fiona Draxler
Fiona Draxler, Daniel Buschek, Mikke Tavast, Perttu H\"am\"al\"ainen, Albrecht Schmidt, Juhi Kulshrestha, Robin Welsch
Gender, Age, and Technology Education Influence the Adoption and Appropriation of LLMs
null
null
null
null
cs.CY cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large Language Models (LLMs) such as ChatGPT have become increasingly integrated into critical activities of daily life, raising concerns about equitable access and utilization across diverse demographics. This study investigates the usage of LLMs among 1,500 representative US citizens. Remarkably, 42% of participants reported utilizing an LLM. Our findings reveal a gender gap in LLM technology adoption (more male users than female users) with complex interaction patterns regarding age. Technology-related education eliminates the gender gap in our sample. Moreover, expert users are more likely than novices to list professional tasks as typical application scenarios, suggesting discrepancies in effective usage at the workplace. These results underscore the importance of providing education in artificial intelligence in our technology-driven society to promote equitable access to and benefits from LLMs. We urge for both international replication beyond the US and longitudinal observation of adoption.
[ { "created": "Tue, 10 Oct 2023 12:11:39 GMT", "version": "v1" } ]
2023-10-11
[ [ "Draxler", "Fiona", "" ], [ "Buschek", "Daniel", "" ], [ "Tavast", "Mikke", "" ], [ "Hämäläinen", "Perttu", "" ], [ "Schmidt", "Albrecht", "" ], [ "Kulshrestha", "Juhi", "" ], [ "Welsch", "Robin", "" ] ]
Large Language Models (LLMs) such as ChatGPT have become increasingly integrated into critical activities of daily life, raising concerns about equitable access and utilization across diverse demographics. This study investigates the usage of LLMs among 1,500 representative US citizens. Remarkably, 42% of participants reported utilizing an LLM. Our findings reveal a gender gap in LLM technology adoption (more male users than female users) with complex interaction patterns regarding age. Technology-related education eliminates the gender gap in our sample. Moreover, expert users are more likely than novices to list professional tasks as typical application scenarios, suggesting discrepancies in effective usage at the workplace. These results underscore the importance of providing education in artificial intelligence in our technology-driven society to promote equitable access to and benefits from LLMs. We urge for both international replication beyond the US and longitudinal observation of adoption.
1512.05814
Cem Sahin
Cem S. Sahin, Robert Lychev and Neal Wagner
General Framework for Evaluating Password Complexity and Strength
11 pages and 4 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although it is common for users to select bad passwords that can be easily cracked by attackers, password-based authentication remains the most widely-used method. To encourage users to select good passwords, enterprises often enforce policies. Such policies have been proven to be ineffectual in practice. Accurate assessment of a password's resistance to cracking attacks is still an unsolved problem, and our work addresses this challenge. Although the best way to determine how difficult it may be to crack a user-selected password is to check its resistance to cracking attacks employed by attackers in the wild, implementing such a strategy at an enterprise would be infeasible in practice. We first formalize the concepts of password complexity and strength with concrete definitions emphasizing their differences. Our framework captures human biases and many known techniques attackers use to recover stolen credentials in real life, such as brute-force attacks. Building on our definitions, we develop a general framework for calculating password complexity and strength that could be used in practice. Our approach is based on the key insight that an attacker's success at cracking a password must be defined by its available computational resources, time, function used to store that password, as well as the topology that bounds that attacker's search space based on that attacker's available inputs, transformations it can use to tweak and explore its inputs, and the path of exploration which can be based on the attacker's perceived probability of success. We also provide a general framework for assessing the accuracy of password complexity and strength estimators that can be used to compare other tools available in the wild.
[ { "created": "Thu, 17 Dec 2015 22:19:50 GMT", "version": "v1" } ]
2015-12-21
[ [ "Sahin", "Cem S.", "" ], [ "Lychev", "Robert", "" ], [ "Wagner", "Neal", "" ] ]
Although it is common for users to select bad passwords that can be easily cracked by attackers, password-based authentication remains the most widely-used method. To encourage users to select good passwords, enterprises often enforce policies. Such policies have been proven to be ineffectual in practice. Accurate assessment of a password's resistance to cracking attacks is still an unsolved problem, and our work addresses this challenge. Although the best way to determine how difficult it may be to crack a user-selected password is to check its resistance to cracking attacks employed by attackers in the wild, implementing such a strategy at an enterprise would be infeasible in practice. We first formalize the concepts of password complexity and strength with concrete definitions emphasizing their differences. Our framework captures human biases and many known techniques attackers use to recover stolen credentials in real life, such as brute-force attacks. Building on our definitions, we develop a general framework for calculating password complexity and strength that could be used in practice. Our approach is based on the key insight that an attacker's success at cracking a password must be defined by its available computational resources, time, function used to store that password, as well as the topology that bounds that attacker's search space based on that attacker's available inputs, transformations it can use to tweak and explore its inputs, and the path of exploration which can be based on the attacker's perceived probability of success. We also provide a general framework for assessing the accuracy of password complexity and strength estimators that can be used to compare other tools available in the wild.
1911.12008
Anthony Bagnall Dr
Anthony Bagnall, James Large and Matthew Middlehurst
A tale of two toolkits, report the second: bake off redux. Chapter 1. dictionary based classifiers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time series classification (TSC) is the problem of learning labels from time dependent data. One class of algorithms is derived from a bag of words approach. A window is run along a series, the subseries is shortened and discretised to form a word, then features are formed from the histogram of frequency of occurrence of words. We call this type of approach to TSC dictionary based classification. We compare four dictionary based algorithms in the context of a wider project to update the great time series classification bakeoff, a comparative study published in 2017. We experimentally characterise the algorithms in terms of predictive performance, time complexity and space complexity. We find that we can improve on the previous best in terms of accuracy, but this comes at the cost of time and space. Alternatively, the same performance can be achieved with far less cost. We review the relative merits of the four algorithms before suggesting a path to possible improvement.
[ { "created": "Wed, 27 Nov 2019 08:05:48 GMT", "version": "v1" } ]
2019-11-28
[ [ "Bagnall", "Anthony", "" ], [ "Large", "James", "" ], [ "Middlehurst", "Matthew", "" ] ]
Time series classification (TSC) is the problem of learning labels from time dependent data. One class of algorithms is derived from a bag of words approach. A window is run along a series, the subseries is shortened and discretised to form a word, then features are formed from the histogram of frequency of occurrence of words. We call this type of approach to TSC dictionary based classification. We compare four dictionary based algorithms in the context of a wider project to update the great time series classification bakeoff, a comparative study published in 2017. We experimentally characterise the algorithms in terms of predictive performance, time complexity and space complexity. We find that we can improve on the previous best in terms of accuracy, but this comes at the cost of time and space. Alternatively, the same performance can be achieved with far less cost. We review the relative merits of the four algorithms before suggesting a path to possible improvement.
1910.02612
Yuanpeng Li
Yuanpeng Li, Liang Zhao, Jianyu Wang, Joel Hestness
Compositional Generalization for Primitive Substitutions
EMNLP 2019
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compositional generalization is a basic mechanism in human language learning, but current neural networks lack such ability. In this paper, we conduct fundamental research for encoding compositionality in neural networks. Conventional methods use a single representation for the input sentence, making it hard to apply prior knowledge of compositionality. In contrast, our approach leverages such knowledge with two representations, one generating attention maps, and the other mapping attended input words to output symbols. We reduce the entropy in each representation to improve generalization. Our experiments demonstrate significant improvements over the conventional methods in five NLP tasks including instruction learning and machine translation. In the SCAN domain, it boosts accuracies from 14.0% to 98.8% in Jump task, and from 92.0% to 99.7% in TurnLeft task. It also beats human performance on a few-shot learning task. We hope the proposed approach can help ease future research towards human-level compositional language learning.
[ { "created": "Mon, 7 Oct 2019 05:27:27 GMT", "version": "v1" } ]
2019-10-08
[ [ "Li", "Yuanpeng", "" ], [ "Zhao", "Liang", "" ], [ "Wang", "Jianyu", "" ], [ "Hestness", "Joel", "" ] ]
Compositional generalization is a basic mechanism in human language learning, but current neural networks lack such ability. In this paper, we conduct fundamental research for encoding compositionality in neural networks. Conventional methods use a single representation for the input sentence, making it hard to apply prior knowledge of compositionality. In contrast, our approach leverages such knowledge with two representations, one generating attention maps, and the other mapping attended input words to output symbols. We reduce the entropy in each representation to improve generalization. Our experiments demonstrate significant improvements over the conventional methods in five NLP tasks including instruction learning and machine translation. In the SCAN domain, it boosts accuracies from 14.0% to 98.8% in Jump task, and from 92.0% to 99.7% in TurnLeft task. It also beats human performance on a few-shot learning task. We hope the proposed approach can help ease future research towards human-level compositional language learning.
2307.14064
Yinghui Ye
Rui Xu, Liqin Shi, Yinghui Ye, Haijian Sun, and Gan Zheng
Relay-Enabled Backscatter Communications: Linear Mapping and Resource Allocation
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relay-enabled backscatter communication (BC) is an intriguing paradigm to alleviate energy shortage and improve throughput of Internet-of-Things (IoT) devices. Most of the existing works focus on the resource allocation that considered the unequal and continuous time allocation for both source-relay and relay-destination links. However, the continuous time allocation may be infeasible since in practice, the time allocation shall be carried out in integral multiple of the subframe duration unit. In this article, we study a discrete time scheme from the perspective of frame structure, where one transmission block is divided into two phases and the linear mapping is employed as a re-encoding method to determine the number of subframes for both phases and the power allocation for each subframe in a relay-enabled BC system. Based on this, we derive an accurate system-throughput expression and formulate a mixed-integral non-convex optimization problem to maximize the system throughput by jointly optimizing the power reflection coefficient (PRC) of the IoT node, the power allocation of the hybrid access point (HAP) and the linear mapping matrix, and solve it via a three-step approach. Accordingly, we propose a low complexity iterative algorithm to obtain the throughput maximization-based resource allocation solution. Numerical results analyze the performance of our proposed algorithm, verify the superiority of our proposed scheme, and evaluate the impacts of network parameters on the system throughput.
[ { "created": "Wed, 26 Jul 2023 09:31:30 GMT", "version": "v1" } ]
2023-07-27
[ [ "Xu", "Rui", "" ], [ "Shi", "Liqin", "" ], [ "Ye", "Yinghui", "" ], [ "Sun", "Haijian", "" ], [ "Zheng", "Gan", "" ] ]
Relay-enabled backscatter communication (BC) is an intriguing paradigm to alleviate energy shortage and improve throughput of Internet-of-Things (IoT) devices. Most of the existing works focus on the resource allocation that considered the unequal and continuous time allocation for both source-relay and relay-destination links. However, the continuous time allocation may be infeasible since in practice, the time allocation shall be carried out in integral multiple of the subframe duration unit. In this article, we study a discrete time scheme from the perspective of frame structure, where one transmission block is divided into two phases and the linear mapping is employed as a re-encoding method to determine the number of subframes for both phases and the power allocation for each subframe in a relay-enabled BC system. Based on this, we derive an accurate system-throughput expression and formulate a mixed-integral non-convex optimization problem to maximize the system throughput by jointly optimizing the power reflection coefficient (PRC) of the IoT node, the power allocation of the hybrid access point (HAP) and the linear mapping matrix, and solve it via a three-step approach. Accordingly, we propose a low complexity iterative algorithm to obtain the throughput maximization-based resource allocation solution. Numerical results analyze the performance of our proposed algorithm, verify the superiority of our proposed scheme, and evaluate the impacts of network parameters on the system throughput.
1908.08031
Christoforos Mavrogiannis
Siddhartha S. Srinivasa, Patrick Lancaster, Johan Michalove, Matt Schmittle, Colin Summers, Matthew Rockett, Rosario Scalise, Joshua R. Smith, Sanjiban Choudhury, Christoforos Mavrogiannis, Fereshteh Sadeghi
MuSHR: A Low-Cost, Open-Source Robotic Racecar for Education and Research
Added Rosario Scalise to the author list
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present MuSHR, the Multi-agent System for non-Holonomic Racing. MuSHR is a low-cost, open-source robotic racecar platform for education and research, developed by the Personal Robotics Lab in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. MuSHR aspires to contribute towards democratizing the field of robotics as a low-cost platform that can be built and deployed by following detailed, open documentation and do-it-yourself tutorials. A set of demos and lab assignments developed for the Mobile Robots course at the University of Washington provide guided, hands-on experience with the platform, and milestones for further development. MuSHR is a valuable asset for academic research labs, robotics instructors, and robotics enthusiasts.
[ { "created": "Wed, 21 Aug 2019 17:54:15 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2019 00:42:41 GMT", "version": "v2" }, { "created": "Sun, 24 Dec 2023 23:06:12 GMT", "version": "v3" } ]
2023-12-27
[ [ "Srinivasa", "Siddhartha S.", "" ], [ "Lancaster", "Patrick", "" ], [ "Michalove", "Johan", "" ], [ "Schmittle", "Matt", "" ], [ "Summers", "Colin", "" ], [ "Rockett", "Matthew", "" ], [ "Scalise", "Rosario", ...
We present MuSHR, the Multi-agent System for non-Holonomic Racing. MuSHR is a low-cost, open-source robotic racecar platform for education and research, developed by the Personal Robotics Lab in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. MuSHR aspires to contribute towards democratizing the field of robotics as a low-cost platform that can be built and deployed by following detailed, open documentation and do-it-yourself tutorials. A set of demos and lab assignments developed for the Mobile Robots course at the University of Washington provide guided, hands-on experience with the platform, and milestones for further development. MuSHR is a valuable asset for academic research labs, robotics instructors, and robotics enthusiasts.
2310.19804
Pablo Samuel Castro
Pablo Samuel Castro, Tyler Kastner, Prakash Panangaden, Mark Rowland
A Kernel Perspective on Behavioural Metrics for Markov Decision Processes
Published in TMLR
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Behavioural metrics have been shown to be an effective mechanism for constructing representations in reinforcement learning. We present a novel perspective on behavioural metrics for Markov decision processes via the use of positive definite kernels. We leverage this new perspective to define a new metric that is provably equivalent to the recently introduced MICo distance (Castro et al., 2021). The kernel perspective further enables us to provide new theoretical results, which has so far eluded prior work. These include bounding value function differences by means of our metric, and the demonstration that our metric can be provably embedded into a finite-dimensional Euclidean space with low distortion error. These are two crucial properties when using behavioural metrics for reinforcement learning representations. We complement our theory with strong empirical results that demonstrate the effectiveness of these methods in practice.
[ { "created": "Thu, 5 Oct 2023 20:44:57 GMT", "version": "v1" } ]
2023-11-01
[ [ "Castro", "Pablo Samuel", "" ], [ "Kastner", "Tyler", "" ], [ "Panangaden", "Prakash", "" ], [ "Rowland", "Mark", "" ] ]
Behavioural metrics have been shown to be an effective mechanism for constructing representations in reinforcement learning. We present a novel perspective on behavioural metrics for Markov decision processes via the use of positive definite kernels. We leverage this new perspective to define a new metric that is provably equivalent to the recently introduced MICo distance (Castro et al., 2021). The kernel perspective further enables us to provide new theoretical results, which has so far eluded prior work. These include bounding value function differences by means of our metric, and the demonstration that our metric can be provably embedded into a finite-dimensional Euclidean space with low distortion error. These are two crucial properties when using behavioural metrics for reinforcement learning representations. We complement our theory with strong empirical results that demonstrate the effectiveness of these methods in practice.
2009.13829
Ting-Yun Chang
Ting-Yun Chang and Chi-Jen Lu
TinyGAN: Distilling BigGAN for Conditional Image Generation
accepted by ACCV 2020
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work of BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
[ { "created": "Tue, 29 Sep 2020 07:33:49 GMT", "version": "v1" } ]
2020-09-30
[ [ "Chang", "Ting-Yun", "" ], [ "Lu", "Chi-Jen", "" ] ]
Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work of BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
2106.13954
Mahardhika Pratama Dr
Mao Fubing, Weng Weiwei, Mahardhika Pratama, Edward Yapp Kien Yee
Continual Learning via Inter-Task Synaptic Mapping
This paper has been published in Knowledge-based Systems
Knowledge-based Systems, 2021
10.1016/j.knosys.2021.106947
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Learning from streaming tasks leads a model to catastrophically erase unique experiences it absorbs from previous episodes. While regularization techniques such as LWF, SI, EWC have proven themselves as an effective avenue to overcome this issue by constraining important parameters of old tasks from changing when accepting new concepts, these approaches do not exploit common information of each task which can be shared to existing neurons. As a result, they do not scale well to large-scale problems since the parameter importance variables quickly explode. An Inter-Task Synaptic Mapping (ISYANA) is proposed here to underpin knowledge retention for continual learning. ISYANA combines task-to-neuron relationship as well as concept-to-concept relationship such that it prevents a neuron to embrace distinct concepts while merely accepting relevant concept. Numerical study in the benchmark continual learning problems has been carried out followed by comparison against prominent continual learning algorithms. ISYANA exhibits competitive performance compared to state of the arts. Codes of ISYANA is made available in \url{https://github.com/ContinualAL/ISYANAKBS}.
[ { "created": "Sat, 26 Jun 2021 06:30:43 GMT", "version": "v1" } ]
2021-06-29
[ [ "Fubing", "Mao", "" ], [ "Weiwei", "Weng", "" ], [ "Pratama", "Mahardhika", "" ], [ "Yee", "Edward Yapp Kien", "" ] ]
Learning from streaming tasks leads a model to catastrophically erase unique experiences it absorbs from previous episodes. While regularization techniques such as LWF, SI, EWC have proven themselves as an effective avenue to overcome this issue by constraining important parameters of old tasks from changing when accepting new concepts, these approaches do not exploit common information of each task which can be shared to existing neurons. As a result, they do not scale well to large-scale problems since the parameter importance variables quickly explode. An Inter-Task Synaptic Mapping (ISYANA) is proposed here to underpin knowledge retention for continual learning. ISYANA combines task-to-neuron relationship as well as concept-to-concept relationship such that it prevents a neuron to embrace distinct concepts while merely accepting relevant concept. Numerical study in the benchmark continual learning problems has been carried out followed by comparison against prominent continual learning algorithms. ISYANA exhibits competitive performance compared to state of the arts. Codes of ISYANA is made available in \url{https://github.com/ContinualAL/ISYANAKBS}.
1504.00136
Guangming Lang
Guangming Lang
Knowledge reduction of dynamic covering decision information systems with immigration of more objects
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In practical situations, it is of interest to investigate computing approximations of sets as an important step of knowledge reduction of dynamic covering decision information systems. In this paper, we present incremental approaches to computing the type-1 and type-2 characteristic matrices of dynamic coverings whose cardinalities increase with immigration of more objects. We also present the incremental algorithms of computing the second and sixth lower and upper approximations of sets in dynamic covering approximation spaces.
[ { "created": "Wed, 1 Apr 2015 08:12:01 GMT", "version": "v1" } ]
2015-04-02
[ [ "Lang", "Guangming", "" ] ]
In practical situations, it is of interest to investigate computing approximations of sets as an important step of knowledge reduction of dynamic covering decision information systems. In this paper, we present incremental approaches to computing the type-1 and type-2 characteristic matrices of dynamic coverings whose cardinalities increase with immigration of more objects. We also present the incremental algorithms of computing the second and sixth lower and upper approximations of sets in dynamic covering approximation spaces.
1912.03613
Tae Soo Kim
Tae Soo Kim, Jonathan D. Jones, Michael Peven, Zihao Xiao, Jin Bai, Yi Zhang, Weichao Qiu, Alan Yuille, Gregory D. Hager
DASZL: Dynamic Action Signatures for Zero-shot Learning
10 pages, 4 figures, 3 tables, AAAI2021 submission
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large. This makes end-to-end supervised training of a recognition system impractical as no training set is practically able to encompass the entire label set. In this paper, we present an approach to fine-grained recognition that models activities as compositions of dynamic action signatures. This compositional approach allows us to reframe fine-grained recognition as zero-shot activity recognition, where a detector is composed "on the fly" from simple first-principles state machines supported by deep-learned components. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also extend this method to form a unique framework for zero-shot joint segmentation and classification of activities in video and demonstrate the first results in zero-shot decoding of complex action sequences on a widely-used surgical dataset. Lastly, we show that we can use off-the-shelf object detectors to recognize activities in completely de-novo settings with no additional training.
[ { "created": "Sun, 8 Dec 2019 04:30:59 GMT", "version": "v1" }, { "created": "Tue, 10 Mar 2020 18:19:04 GMT", "version": "v2" }, { "created": "Wed, 18 Nov 2020 03:53:54 GMT", "version": "v3" } ]
2020-11-19
[ [ "Kim", "Tae Soo", "" ], [ "Jones", "Jonathan D.", "" ], [ "Peven", "Michael", "" ], [ "Xiao", "Zihao", "" ], [ "Bai", "Jin", "" ], [ "Zhang", "Yi", "" ], [ "Qiu", "Weichao", "" ], [ "Yuille", "A...
There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large. This makes end-to-end supervised training of a recognition system impractical as no training set is practically able to encompass the entire label set. In this paper, we present an approach to fine-grained recognition that models activities as compositions of dynamic action signatures. This compositional approach allows us to reframe fine-grained recognition as zero-shot activity recognition, where a detector is composed "on the fly" from simple first-principles state machines supported by deep-learned components. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also extend this method to form a unique framework for zero-shot joint segmentation and classification of activities in video and demonstrate the first results in zero-shot decoding of complex action sequences on a widely-used surgical dataset. Lastly, we show that we can use off-the-shelf object detectors to recognize activities in completely de-novo settings with no additional training.
1906.10535
\v{S}t\v{e}p\'an Holub
\v{S}t\v{e}p\'an Holub
Pseudo-solutions of word equations
small corrections
Theoretical Computer Science 814 (2020) 13-18
10.1016/j.tcs.2019.12.035
null
cs.FL math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a framework which allows a uniform approach to the recently introduced concept of pseudo-repetitions on words in the morphic case. This framework is at the same time more general and simpler. We introduce the concepts of a pseudo-solution and a pseudo-rank of an equation. In particular, this allows us to prove that if a classical equation forces periodicity then it also forces pseudo-periodicity. Consequently, there is no need to investigate generalizations of important equations one by one.
[ { "created": "Tue, 25 Jun 2019 13:56:56 GMT", "version": "v1" }, { "created": "Fri, 13 Sep 2019 10:19:31 GMT", "version": "v2" } ]
2020-04-03
[ [ "Holub", "Štěpán", "" ] ]
We present a framework which allows a uniform approach to the recently introduced concept of pseudo-repetitions on words in the morphic case. This framework is at the same time more general and simpler. We introduce the concepts of a pseudo-solution and a pseudo-rank of an equation. In particular, this allows us to prove that if a classical equation forces periodicity then it also forces pseudo-periodicity. Consequently, there is no need to investigate generalizations of important equations one by one.
1710.04312
Kyle Hundman
Kyle Hundman, Chris A. Mattmann
Measurement Context Extraction from Text: Discovering Opportunities and Gaps in Earth Science
null
23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Data-Driven Discovery Workshop, Halifax, Canada, August 2017
null
null
cs.IR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Marve, a system for extracting measurement values, units, and related words from natural language text. Marve uses conditional random fields (CRF) to identify measurement values and units, followed by a rule-based system to find related entities, descriptors and modifiers within a sentence. Sentence tokens are represented by an undirected graphical model, and rules are based on part-of-speech and word dependency patterns connecting values and units to contextual words. Marve is unique in its focus on measurement context, and early experimentation demonstrates Marve's ability to generate high-precision extractions with strong recall. We also discuss Marve's role in refining measurement requirements for NASA's proposed HyspIRI mission, a hyperspectral infrared imaging satellite that will study the world's ecosystems. In general, our work with HyspIRI demonstrates the value of semantic measurement extractions in characterizing quantitative discussion contained in large corpora of natural language text. These extractions accelerate broad, cross-cutting research and expose scientists to new algorithmic approaches and experimental nuances. They also facilitate identification of scientific opportunities enabled by HyspIRI, leading to more efficient scientific investment and research.
[ { "created": "Wed, 11 Oct 2017 21:37:07 GMT", "version": "v1" } ]
2017-10-13
[ [ "Hundman", "Kyle", "" ], [ "Mattmann", "Chris A.", "" ] ]
We propose Marve, a system for extracting measurement values, units, and related words from natural language text. Marve uses conditional random fields (CRF) to identify measurement values and units, followed by a rule-based system to find related entities, descriptors and modifiers within a sentence. Sentence tokens are represented by an undirected graphical model, and rules are based on part-of-speech and word dependency patterns connecting values and units to contextual words. Marve is unique in its focus on measurement context, and early experimentation demonstrates Marve's ability to generate high-precision extractions with strong recall. We also discuss Marve's role in refining measurement requirements for NASA's proposed HyspIRI mission, a hyperspectral infrared imaging satellite that will study the world's ecosystems. In general, our work with HyspIRI demonstrates the value of semantic measurement extractions in characterizing quantitative discussion contained in large corpora of natural language text. These extractions accelerate broad, cross-cutting research and expose scientists to new algorithmic approaches and experimental nuances. They also facilitate identification of scientific opportunities enabled by HyspIRI, leading to more efficient scientific investment and research.
2108.00954
Sijie Mai
Shuangjia Zheng, Sijie Mai, Ya Sun, Haifeng Hu, Yuedong Yang
Subgraph-aware Few-Shot Inductive Link Prediction via Meta-Learning
under review
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Link prediction for knowledge graphs aims to predict missing connections between entities. Prevailing methods are limited to a transductive setting and struggle to process unseen entities. Recently proposed subgraph-based models provide alternatives that predict links from the subgraph structure surrounding a candidate triplet. However, these methods require abundant known facts about training triplets and perform poorly on relationships that have only a few triplets. In this paper, we propose Meta-iKG, a novel subgraph-based meta-learner for few-shot inductive relation reasoning. Meta-iKG utilizes local subgraphs to transfer subgraph-specific information and learn transferable patterns faster via meta gradients. In this way, we find the model can quickly adapt to few-shot relationships using only a handful of known facts in inductive settings. Moreover, we introduce a large-shot relation update procedure into traditional meta-learning to ensure that our model can generalize well on both few-shot and large-shot relations. We evaluate Meta-iKG on inductive benchmarks sampled from NELL and Freebase, and the results show that Meta-iKG outperforms the current state-of-the-art methods both in few-shot scenarios and in standard inductive settings.
[ { "created": "Mon, 26 Jul 2021 11:56:18 GMT", "version": "v1" } ]
2021-08-03
[ [ "Zheng", "Shuangjia", "" ], [ "Mai", "Sijie", "" ], [ "Sun", "Ya", "" ], [ "Hu", "Haifeng", "" ], [ "Yang", "Yuedong", "" ] ]
Link prediction for knowledge graphs aims to predict missing connections between entities. Prevailing methods are limited to a transductive setting and struggle to process unseen entities. Recently proposed subgraph-based models provide alternatives that predict links from the subgraph structure surrounding a candidate triplet. However, these methods require abundant known facts about training triplets and perform poorly on relationships that have only a few triplets. In this paper, we propose Meta-iKG, a novel subgraph-based meta-learner for few-shot inductive relation reasoning. Meta-iKG utilizes local subgraphs to transfer subgraph-specific information and learn transferable patterns faster via meta gradients. In this way, we find the model can quickly adapt to few-shot relationships using only a handful of known facts in inductive settings. Moreover, we introduce a large-shot relation update procedure into traditional meta-learning to ensure that our model can generalize well on both few-shot and large-shot relations. We evaluate Meta-iKG on inductive benchmarks sampled from NELL and Freebase, and the results show that Meta-iKG outperforms the current state-of-the-art methods both in few-shot scenarios and in standard inductive settings.
1307.3346
Andriy Olenko
Andriy Olenko, Tibor K. Pog\'any
Universal truncation error upper bounds in sampling restoration
18 pages, 2 figures. This is an Author's Accepted Manuscript of an article published in the Georgian Mathematical Journal. Vol.17, No. 4. (2010), 765-786. The final publication is available at De Gruyter. DOI: 10.1515/gmj.2010.033
Georgian Mathematical Journal. Vol.17, No. 4. (2010), 765--786
10.1515/gmj.2010.033
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Universal (pointwise uniform and time shifted) truncation error upper bounds are presented for the Whittaker--Kotel'nikov--Shannon (WKS) sampling restoration sum for Bernstein function classes $B_{\pi,d}^q,\, q>1,\, d\in \mathbb N$, when the decay rate of the sampled functions is unknown. The case of regular sampling is discussed. Extremal properties of related series of sinc functions are investigated.
[ { "created": "Fri, 12 Jul 2013 06:47:23 GMT", "version": "v1" } ]
2013-07-15
[ [ "Olenko", "Andriy", "" ], [ "Pogány", "Tibor K.", "" ] ]
Universal (pointwise uniform and time shifted) truncation error upper bounds are presented for the Whittaker--Kotel'nikov--Shannon (WKS) sampling restoration sum for Bernstein function classes $B_{\pi,d}^q,\, q>1,\, d\in \mathbb N$, when the decay rate of the sampled functions is unknown. The case of regular sampling is discussed. Extremal properties of related series of sinc functions are investigated.
0910.5040
Li Chen
Li Chen, Yong Liu, Feng Luo
A Note on Gradually Varied Functions and Harmonic Functions
7 pages and 2 figures
null
null
null
cs.DM math.CA
http://creativecommons.org/licenses/by-nc-sa/3.0/
Any constructive continuous function must have a gradually varied approximation in a compact space. However, the refinement of the domain for a $\sigma$-net might be very small. Keeping the original discretization (square or triangulation), can we get some interesting properties related to gradual variation? In this note, we try to prove that many harmonic functions are gradually varied or near gradually varied; this means that the value of the center point differs from that of its neighbor by at most 2. It is obvious that most of the gradually varied functions are not harmonic. This note discusses some of the basic harmonic functions in relation to gradually varied functions.
[ { "created": "Tue, 27 Oct 2009 04:11:04 GMT", "version": "v1" } ]
2009-10-28
[ [ "Chen", "Li", "" ], [ "Liu", "Yong", "" ], [ "Luo", "Feng", "" ] ]
Any constructive continuous function must have a gradually varied approximation in a compact space. However, the refinement of the domain for a $\sigma$-net might be very small. Keeping the original discretization (square or triangulation), can we get some interesting properties related to gradual variation? In this note, we try to prove that many harmonic functions are gradually varied or near gradually varied; this means that the value of the center point differs from that of its neighbor by at most 2. It is obvious that most of the gradually varied functions are not harmonic. This note discusses some of the basic harmonic functions in relation to gradually varied functions.
2104.01477
Hosein Mohebbi
Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar
Exploring the Role of BERT Token Representations to Explain Sentence Probing Results
Accepted to EMNLP 2021 (main conference)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several studies have been carried out on revealing the linguistic features captured by BERT. This is usually achieved by training a diagnostic classifier on the representations obtained from different layers of BERT. The subsequent classification accuracy is then interpreted as the ability of the model to encode the corresponding linguistic property. Despite providing insights, these studies have left out the potential role of token representations. In this paper, we provide a more in-depth analysis of the representation space of BERT in search of distinct and meaningful subspaces that can explain the reasons behind these probing results. Based on a set of probing tasks and with the help of attribution methods, we show that BERT tends to encode meaningful knowledge in specific token representations (which are often ignored in standard classification setups), allowing the model to detect syntactic and semantic abnormalities and to distinctively separate grammatical number and tense subspaces.
[ { "created": "Sat, 3 Apr 2021 20:40:42 GMT", "version": "v1" }, { "created": "Sat, 11 Sep 2021 11:49:20 GMT", "version": "v2" } ]
2021-09-14
[ [ "Mohebbi", "Hosein", "" ], [ "Modarressi", "Ali", "" ], [ "Pilehvar", "Mohammad Taher", "" ] ]
Several studies have been carried out on revealing the linguistic features captured by BERT. This is usually achieved by training a diagnostic classifier on the representations obtained from different layers of BERT. The subsequent classification accuracy is then interpreted as the ability of the model to encode the corresponding linguistic property. Despite providing insights, these studies have left out the potential role of token representations. In this paper, we provide a more in-depth analysis of the representation space of BERT in search of distinct and meaningful subspaces that can explain the reasons behind these probing results. Based on a set of probing tasks and with the help of attribution methods, we show that BERT tends to encode meaningful knowledge in specific token representations (which are often ignored in standard classification setups), allowing the model to detect syntactic and semantic abnormalities and to distinctively separate grammatical number and tense subspaces.
2002.05417
G\"okhan G\"uler
G\"okhan G\"uler, A. C\"uneyd Tantu\u{g}
Comparison of Turkish Word Representations Trained on Different Morphological Forms
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The increased popularity of different text representations has also brought many improvements in Natural Language Processing (NLP) tasks. Without the need for supervised data, embeddings trained on large corpora provide us meaningful relations to be used on different NLP tasks. Even though training these vectors is relatively easy with recent methods, the information gained from the data heavily depends on the structure of the corpus language. Since the most popularly researched languages have a similar morphological structure, problems occurring for morphologically rich languages are mainly disregarded in studies. For morphologically rich languages, context-free word vectors ignore the morphological structure of the language. In this study, we prepared texts in morphologically different forms in a morphologically rich language, Turkish, and compared the results on different intrinsic and extrinsic tasks. To see the effect of morphological structure, we trained word2vec models on texts in which lemmas and suffixes are treated differently. We also trained the subword model fastText and compared the embeddings on word analogy, text classification, sentiment analysis, and language model tasks.
[ { "created": "Thu, 13 Feb 2020 10:09:31 GMT", "version": "v1" } ]
2020-02-14
[ [ "Güler", "Gökhan", "" ], [ "Tantuğ", "A. Cüneyd", "" ] ]
The increased popularity of different text representations has also brought many improvements in Natural Language Processing (NLP) tasks. Without the need for supervised data, embeddings trained on large corpora provide us meaningful relations to be used on different NLP tasks. Even though training these vectors is relatively easy with recent methods, the information gained from the data heavily depends on the structure of the corpus language. Since the most popularly researched languages have a similar morphological structure, problems occurring for morphologically rich languages are mainly disregarded in studies. For morphologically rich languages, context-free word vectors ignore the morphological structure of the language. In this study, we prepared texts in morphologically different forms in a morphologically rich language, Turkish, and compared the results on different intrinsic and extrinsic tasks. To see the effect of morphological structure, we trained word2vec models on texts in which lemmas and suffixes are treated differently. We also trained the subword model fastText and compared the embeddings on word analogy, text classification, sentiment analysis, and language model tasks.
2109.08580
Aleksandr Timofeev
Aleksandr Timofeev, Grigorios G. Chrysos, Volkan Cevher
Self-Supervised Neural Architecture Search for Imbalanced Datasets
Published in ICML 2021 Workshop: Self-Supervised Learning for Reasoning and Perception. Code: https://github.com/TimofeevAlex/ssnas_imbalanced
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-sa/4.0/
Neural Architecture Search (NAS) provides state-of-the-art results when trained on well-curated datasets with annotated labels. However, annotating data or even having a balanced number of samples can be a luxury for practitioners from different scientific fields, e.g., in the medical domain. To that end, we propose a NAS-based framework that makes three contributions: (a) we focus on the self-supervised scenario, where no labels are required to determine the architecture; (b) we assume the datasets are imbalanced; and (c) we design each component to be able to run on a resource-constrained setup, i.e., on a single GPU (e.g. Google Colab). Our components build on top of recent developments in self-supervised learning~\citep{zbontar2021barlow} and self-supervised NAS~\citep{kaplan2020self} and extend them to the case of imbalanced datasets. We conduct experiments on an (artificially) imbalanced version of CIFAR-10 and demonstrate that our proposed method outperforms standard neural networks while using $27\times$ fewer parameters. To validate our assumption on a naturally imbalanced dataset, we also conduct experiments on ChestMNIST and COVID-19 X-ray. The results demonstrate how the proposed method can be used on imbalanced datasets, while it can be fully run on a single GPU. Code is available \href{https://github.com/TimofeevAlex/ssnas_imbalanced}{here}.
[ { "created": "Fri, 17 Sep 2021 14:56:36 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2021 16:16:05 GMT", "version": "v2" } ]
2021-09-21
[ [ "Timofeev", "Aleksandr", "" ], [ "Chrysos", "Grigorios G.", "" ], [ "Cevher", "Volkan", "" ] ]
Neural Architecture Search (NAS) provides state-of-the-art results when trained on well-curated datasets with annotated labels. However, annotating data or even having a balanced number of samples can be a luxury for practitioners from different scientific fields, e.g., in the medical domain. To that end, we propose a NAS-based framework that makes three contributions: (a) we focus on the self-supervised scenario, where no labels are required to determine the architecture; (b) we assume the datasets are imbalanced; and (c) we design each component to be able to run on a resource-constrained setup, i.e., on a single GPU (e.g. Google Colab). Our components build on top of recent developments in self-supervised learning~\citep{zbontar2021barlow} and self-supervised NAS~\citep{kaplan2020self} and extend them to the case of imbalanced datasets. We conduct experiments on an (artificially) imbalanced version of CIFAR-10 and demonstrate that our proposed method outperforms standard neural networks while using $27\times$ fewer parameters. To validate our assumption on a naturally imbalanced dataset, we also conduct experiments on ChestMNIST and COVID-19 X-ray. The results demonstrate how the proposed method can be used on imbalanced datasets, while it can be fully run on a single GPU. Code is available \href{https://github.com/TimofeevAlex/ssnas_imbalanced}{here}.
2105.08741
Dominik Dold
Josep Soler Garrido, Dominik Dold, Johannes Frank
Machine learning on knowledge graphs for context-aware security monitoring
Accepted for publication at IEEE-CSR 2021. Data is available on https://github.com/dodo47/cyberML
null
10.1109/CSR51186.2021.9527927
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning techniques are gaining attention in the context of intrusion detection due to the increasing amounts of data generated by monitoring tools, as well as the sophistication displayed by attackers in hiding their activity. However, existing methods often exhibit important limitations in terms of the quantity and relevance of the generated alerts. Recently, knowledge graphs are finding application in the cybersecurity domain, showing the potential to alleviate some of these drawbacks thanks to their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies. We discuss the application of machine learning on knowledge graphs for intrusion detection and experimentally evaluate a link-prediction method for scoring anomalous activity in industrial systems. After initial unsupervised training, the proposed method is shown to produce intuitively well-calibrated and interpretable alerts in a diverse range of scenarios, hinting at the potential benefits of relational machine learning on knowledge graphs for intrusion detection purposes.
[ { "created": "Tue, 18 May 2021 18:00:19 GMT", "version": "v1" } ]
2023-08-25
[ [ "Garrido", "Josep Soler", "" ], [ "Dold", "Dominik", "" ], [ "Frank", "Johannes", "" ] ]
Machine learning techniques are gaining attention in the context of intrusion detection due to the increasing amounts of data generated by monitoring tools, as well as the sophistication displayed by attackers in hiding their activity. However, existing methods often exhibit important limitations in terms of the quantity and relevance of the generated alerts. Recently, knowledge graphs are finding application in the cybersecurity domain, showing the potential to alleviate some of these drawbacks thanks to their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies. We discuss the application of machine learning on knowledge graphs for intrusion detection and experimentally evaluate a link-prediction method for scoring anomalous activity in industrial systems. After initial unsupervised training, the proposed method is shown to produce intuitively well-calibrated and interpretable alerts in a diverse range of scenarios, hinting at the potential benefits of relational machine learning on knowledge graphs for intrusion detection purposes.
2309.03072
Michael Jungo
Michael Jungo, Beat Wolf, Andrii Maksai, Claudiu Musat and Andreas Fischer
Character Queries: A Transformer-based Approach to On-Line Handwritten Character Segmentation
ICDAR 2023 Best Student Paper Award. Code available at https://github.com/jungomi/character-queries
International Conference on Document Analysis and Recognition - ICDAR 2023, pp. 98-114. Cham: Springer Nature Switzerland
10.1007/978-3-031-41676-7_6
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
On-line handwritten character segmentation is often associated with handwriting recognition, and even though recognition models include mechanisms to locate relevant positions during the recognition process, these are typically insufficient to produce a precise segmentation. Decoupling the segmentation from the recognition unlocks the potential to further utilize the result of the recognition. We specifically focus on the scenario where the transcription is known beforehand, in which case the character segmentation becomes an assignment problem between sampling points of the stylus trajectory and characters in the text. Inspired by the $k$-means clustering algorithm, we view it from the perspective of cluster assignment and present a Transformer-based architecture where each cluster is formed based on a learned character query in the Transformer decoder block. In order to assess the quality of our approach, we create character segmentation ground truths for two popular on-line handwriting datasets, IAM-OnDB and HANDS-VNOnDB, and evaluate multiple methods on them, demonstrating that our approach achieves the overall best results.
[ { "created": "Wed, 6 Sep 2023 15:19:04 GMT", "version": "v1" } ]
2023-09-07
[ [ "Jungo", "Michael", "" ], [ "Wolf", "Beat", "" ], [ "Maksai", "Andrii", "" ], [ "Musat", "Claudiu", "" ], [ "Fischer", "Andreas", "" ] ]
On-line handwritten character segmentation is often associated with handwriting recognition, and even though recognition models include mechanisms to locate relevant positions during the recognition process, these are typically insufficient to produce a precise segmentation. Decoupling the segmentation from the recognition unlocks the potential to further utilize the result of the recognition. We specifically focus on the scenario where the transcription is known beforehand, in which case the character segmentation becomes an assignment problem between sampling points of the stylus trajectory and characters in the text. Inspired by the $k$-means clustering algorithm, we view it from the perspective of cluster assignment and present a Transformer-based architecture where each cluster is formed based on a learned character query in the Transformer decoder block. In order to assess the quality of our approach, we create character segmentation ground truths for two popular on-line handwriting datasets, IAM-OnDB and HANDS-VNOnDB, and evaluate multiple methods on them, demonstrating that our approach achieves the overall best results.
2009.00467
Farhad Shirani Chaharsooghi
Farhad Shirani, Siddharth Garg and Elza Erkip
A Concentration of Measure Approach to Correlated Graph Matching
arXiv admin note: text overlap with arXiv:2001.06962, arXiv:1810.13347
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The graph matching problem emerges naturally in various applications such as web privacy, image processing and computational biology. In this paper, graph matching is considered under a stochastic model, where a pair of randomly generated graphs with pairwise correlated edges are to be matched such that given the labeling of the vertices in the first graph, the labels in the second graph are recovered by leveraging the correlation among their edges. The problem is considered under various settings and graph models. In the first step, the Correlated Erd\"{o}s-R\'enyi (CER) graph model is studied, where all edge pairs whose vertices have similar labels are generated based on identical distributions and independently of other edges. A matching scheme called the \textit{typicality matching scheme} is introduced. The scheme operates by investigating the joint typicality of the adjacency matrices of the two graphs. New results on the typicality of permutations of sequences lead to necessary and sufficient conditions for successful matching based on the parameters of the CER model. In the next step, the results are extended to graphs with community structure generated based on the Stochastic Block Model (SBM). The SBM model is a generalization of the CER model where each vertex in the graph is associated with a community label, which affects its edge statistics. The results are further extended to matching of ensembles of more than two correlated graphs. Lastly, the problem of seeded graph matching is investigated where a subset of the labels in the second graph are known prior to matching. In this scenario, in addition to obtaining necessary and sufficient conditions for successful matching, a polytime matching algorithm is proposed.
[ { "created": "Sun, 30 Aug 2020 18:02:23 GMT", "version": "v1" }, { "created": "Tue, 26 Jan 2021 03:45:38 GMT", "version": "v2" } ]
2021-01-27
[ [ "Shirani", "Farhad", "" ], [ "Garg", "Siddharth", "" ], [ "Erkip", "Elza", "" ] ]
The graph matching problem emerges naturally in various applications such as web privacy, image processing and computational biology. In this paper, graph matching is considered under a stochastic model, where a pair of randomly generated graphs with pairwise correlated edges are to be matched such that given the labeling of the vertices in the first graph, the labels in the second graph are recovered by leveraging the correlation among their edges. The problem is considered under various settings and graph models. In the first step, the Correlated Erd\"{o}s-R\'enyi (CER) graph model is studied, where all edge pairs whose vertices have similar labels are generated based on identical distributions and independently of other edges. A matching scheme called the \textit{typicality matching scheme} is introduced. The scheme operates by investigating the joint typicality of the adjacency matrices of the two graphs. New results on the typicality of permutations of sequences lead to necessary and sufficient conditions for successful matching based on the parameters of the CER model. In the next step, the results are extended to graphs with community structure generated based on the Stochastic Block Model (SBM). The SBM model is a generalization of the CER model where each vertex in the graph is associated with a community label, which affects its edge statistics. The results are further extended to matching of ensembles of more than two correlated graphs. Lastly, the problem of seeded graph matching is investigated where a subset of the labels in the second graph are known prior to matching. In this scenario, in addition to obtaining necessary and sufficient conditions for successful matching, a polytime matching algorithm is proposed.
1703.05853
Vivienne Sze
Amr Suleiman, Yu-Hsin Chen, Joel Emer, Vivienne Sze
Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy and/or latency concerns. Accordingly, energy-efficient embedded vision hardware delivering real-time and robust performance is crucial. While deep learning is gaining popularity in several computer vision algorithms, a significant energy consumption difference exists compared to traditional hand-crafted approaches. In this paper, we provide an in-depth analysis of the computation, energy and accuracy trade-offs between learned features such as deep Convolutional Neural Networks (CNN) and hand-crafted features such as Histogram of Oriented Gradients (HOG). This analysis is supported by measurements from two chips that implement these algorithms. Our goal is to understand the source of the energy discrepancy between the two approaches and to provide insight about the potential areas where CNNs can be improved and eventually approach the energy-efficiency of HOG while maintaining its outstanding performance accuracy.
[ { "created": "Fri, 17 Mar 2017 00:17:50 GMT", "version": "v1" } ]
2017-03-20
[ [ "Suleiman", "Amr", "" ], [ "Chen", "Yu-Hsin", "" ], [ "Emer", "Joel", "" ], [ "Sze", "Vivienne", "" ] ]
Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy and/or latency concerns. Accordingly, energy-efficient embedded vision hardware delivering real-time and robust performance is crucial. While deep learning is gaining popularity in several computer vision algorithms, a significant energy consumption difference exists compared to traditional hand-crafted approaches. In this paper, we provide an in-depth analysis of the computation, energy and accuracy trade-offs between learned features such as deep Convolutional Neural Networks (CNN) and hand-crafted features such as Histogram of Oriented Gradients (HOG). This analysis is supported by measurements from two chips that implement these algorithms. Our goal is to understand the source of the energy discrepancy between the two approaches and to provide insight about the potential areas where CNNs can be improved and eventually approach the energy-efficiency of HOG while maintaining its outstanding performance accuracy.
2212.14784
Nicolas Wagner
Nicolas Wagner, Ulrich Schwanecke, Mario Botsch
Neural Volumetric Blendshapes: Computationally Efficient Physics-Based Facial Blendshapes
null
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
Computationally weak systems and demanding graphical applications are still mostly dependent on linear blendshapes for facial animations. The accompanying artifacts such as self-intersections, loss of volume, or missing soft tissue elasticity can be avoided by using physics-based animation models. However, these are cumbersome to implement and require immense computational effort. We propose neural volumetric blendshapes, an approach that combines the advantages of physics-based simulations with realtime runtimes even on consumer-grade CPUs. To this end, we present a neural network that efficiently approximates the involved volumetric simulations and generalizes across human identities as well as facial expressions. Our approach can be used on top of any linear blendshape system and, hence, can be deployed straightforwardly. Furthermore, it only requires a single neutral face mesh as input in the minimal setting. Along with the design of the network, we introduce a pipeline for the challenging creation of anatomically and physically plausible training data. Part of the pipeline is a novel hybrid regressor that densely positions a skull within a skin surface while avoiding intersections. The fidelity of all parts of the data generation pipeline as well as the accuracy and efficiency of the network are evaluated in this work. Upon publication, the trained models and associated code will be released.
[ { "created": "Fri, 23 Dec 2022 08:17:25 GMT", "version": "v1" }, { "created": "Fri, 20 Jan 2023 12:57:34 GMT", "version": "v2" } ]
2023-01-23
[ [ "Wagner", "Nicolas", "" ], [ "Schwanecke", "Ulrich", "" ], [ "Botsch", "Mario", "" ] ]
Computationally weak systems and demanding graphical applications are still mostly dependent on linear blendshapes for facial animations. The accompanying artifacts such as self-intersections, loss of volume, or missing soft tissue elasticity can be avoided by using physics-based animation models. However, these are cumbersome to implement and require immense computational effort. We propose neural volumetric blendshapes, an approach that combines the advantages of physics-based simulations with real-time runtimes even on consumer-grade CPUs. To this end, we present a neural network that efficiently approximates the involved volumetric simulations and generalizes across human identities as well as facial expressions. Our approach can be used on top of any linear blendshape system and, hence, can be deployed straightforwardly. Furthermore, it only requires a single neutral face mesh as input in the minimal setting. Along with the design of the network, we introduce a pipeline for the challenging creation of anatomically and physically plausible training data. Part of the pipeline is a novel hybrid regressor that densely positions a skull within a skin surface while avoiding intersections. The fidelity of all parts of the data generation pipeline as well as the accuracy and efficiency of the network are evaluated in this work. Upon publication, the trained models and associated code will be released.
2203.06317
Xudong Han
Xudong Han, Timothy Baldwin, Trevor Cohn
Towards Equal Opportunity Fairness through Adversarial Learning
8 pages
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adversarial training is a common approach for bias mitigation in natural language processing. Although most work on debiasing is motivated by equal opportunity, it is not explicitly captured in standard adversarial training. In this paper, we propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features and more explicitly model equal opportunity. Experimental results over two datasets show that our method substantially improves over standard adversarial debiasing methods, in terms of the performance--fairness trade-off.
[ { "created": "Sat, 12 Mar 2022 02:22:58 GMT", "version": "v1" }, { "created": "Sun, 15 May 2022 12:51:28 GMT", "version": "v2" } ]
2022-05-17
[ [ "Han", "Xudong", "" ], [ "Baldwin", "Timothy", "" ], [ "Cohn", "Trevor", "" ] ]
Adversarial training is a common approach for bias mitigation in natural language processing. Although most work on debiasing is motivated by equal opportunity, it is not explicitly captured in standard adversarial training. In this paper, we propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features and more explicitly model equal opportunity. Experimental results over two datasets show that our method substantially improves over standard adversarial debiasing methods, in terms of the performance--fairness trade-off.
2009.08311
Wenhao Ding
Wenhao Ding, Baiming Chen, Bo Li, Kim Ji Eun, Ding Zhao
Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation
8 pages, 7 figures
null
null
null
cs.LG cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing neural network-based autonomous systems are shown to be vulnerable against adversarial attacks, therefore sophisticated evaluation on their robustness is of great importance. However, evaluating the robustness only under the worst-case scenarios based on known attacks is not comprehensive, not to mention that some of them even rarely occur in the real world. In addition, the distribution of safety-critical data is usually multimodal, while most traditional attacks and evaluation methods focus on a single modality. To solve the above challenges, we propose a flow-based multimodal safety-critical scenario generator for evaluating decisionmaking algorithms. The proposed generative model is optimized with weighted likelihood maximization and a gradient-based sampling procedure is integrated to improve the sampling efficiency. The safety-critical scenarios are generated by querying the task algorithms and the log-likelihood of the generated scenarios is in proportion to the risk level. Experiments on a self-driving task demonstrate our advantages in terms of testing efficiency and multimodal modeling capability. We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
[ { "created": "Wed, 16 Sep 2020 15:16:43 GMT", "version": "v1" }, { "created": "Fri, 25 Sep 2020 00:16:13 GMT", "version": "v2" }, { "created": "Sat, 26 Dec 2020 16:54:12 GMT", "version": "v3" } ]
2020-12-29
[ [ "Ding", "Wenhao", "" ], [ "Chen", "Baiming", "" ], [ "Li", "Bo", "" ], [ "Eun", "Kim Ji", "" ], [ "Zhao", "Ding", "" ] ]
Existing neural network-based autonomous systems are shown to be vulnerable against adversarial attacks, therefore sophisticated evaluation on their robustness is of great importance. However, evaluating the robustness only under the worst-case scenarios based on known attacks is not comprehensive, not to mention that some of them even rarely occur in the real world. In addition, the distribution of safety-critical data is usually multimodal, while most traditional attacks and evaluation methods focus on a single modality. To solve the above challenges, we propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms. The proposed generative model is optimized with weighted likelihood maximization and a gradient-based sampling procedure is integrated to improve the sampling efficiency. The safety-critical scenarios are generated by querying the task algorithms and the log-likelihood of the generated scenarios is in proportion to the risk level. Experiments on a self-driving task demonstrate our advantages in terms of testing efficiency and multimodal modeling capability. We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
2002.01125
Mahdi Biparva
Mahdi Biparva, John Tsotsos
Selective Segmentation Networks Using Top-Down Attention
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks model the transformation of the input sensory data at the bottom of a network hierarchy to the semantic information at the top of the visual hierarchy. Feedforward processing is sufficient for some object recognition tasks. Top-Down selection is potentially required in addition to the Bottom-Up feedforward pass. It can, in part, address the shortcoming of the loss of location information imposed by the hierarchical feature pyramids. We propose a unified 2-pass framework for object segmentation that augments Bottom-Up \convnets with a Top-Down selection network. We utilize the top-down selection gating activities to modulate the bottom-up hidden activities for segmentation predictions. We develop an end-to-end multi-task framework with loss terms satisfying task requirements at the two ends of the network. We evaluate the proposed network on benchmark datasets for semantic segmentation, and show that networks with the Top-Down selection capability outperform the baseline model. Additionally, we shed light on the superior aspects of the new segmentation paradigm and qualitatively and quantitatively support the efficiency of the novel framework over the baseline model that relies purely on parametric skip connections.
[ { "created": "Tue, 4 Feb 2020 04:47:23 GMT", "version": "v1" } ]
2020-02-05
[ [ "Biparva", "Mahdi", "" ], [ "Tsotsos", "John", "" ] ]
Convolutional neural networks model the transformation of the input sensory data at the bottom of a network hierarchy to the semantic information at the top of the visual hierarchy. Feedforward processing is sufficient for some object recognition tasks. Top-Down selection is potentially required in addition to the Bottom-Up feedforward pass. It can, in part, address the shortcoming of the loss of location information imposed by the hierarchical feature pyramids. We propose a unified 2-pass framework for object segmentation that augments Bottom-Up ConvNets with a Top-Down selection network. We utilize the top-down selection gating activities to modulate the bottom-up hidden activities for segmentation predictions. We develop an end-to-end multi-task framework with loss terms satisfying task requirements at the two ends of the network. We evaluate the proposed network on benchmark datasets for semantic segmentation, and show that networks with the Top-Down selection capability outperform the baseline model. Additionally, we shed light on the superior aspects of the new segmentation paradigm and qualitatively and quantitatively support the efficiency of the novel framework over the baseline model that relies purely on parametric skip connections.
1902.08788
Yuedong Chen
Yuedong Chen, Jianfeng Wang, Shikai Chen, Zhongchao Shi, Jianfei Cai
Facial Motion Prior Networks for Facial Expression Recognition
VCIP 2019, Oral. Code is available at https://github.com/donydchen/FMPN-FER
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most of the existing deep learning based FER methods do not consider domain knowledge well, which thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). Particularly, we introduce an addition branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we propose to incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method, compared with the state-of-the-art approaches.
[ { "created": "Sat, 23 Feb 2019 13:26:45 GMT", "version": "v1" }, { "created": "Mon, 2 Dec 2019 03:14:57 GMT", "version": "v2" } ]
2019-12-03
[ [ "Chen", "Yuedong", "" ], [ "Wang", "Jianfeng", "" ], [ "Chen", "Shikai", "" ], [ "Shi", "Zhongchao", "" ], [ "Cai", "Jianfei", "" ] ]
Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most of the existing deep learning based FER methods do not consider domain knowledge well, which thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). Particularly, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we propose to incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method, compared with the state-of-the-art approaches.
2109.08240
Lydia T. Liu
Lydia T. Liu, Nikhil Garg, Christian Borgs
Strategic Ranking
30 pages. To appear in the conference proceedings of AISTATS 2022
null
null
null
cs.GT cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strategic classification studies the design of a classifier robust to the manipulation of input by strategic individuals. However, the existing literature does not consider the effect of competition among individuals as induced by the algorithm design. Motivated by constrained allocation settings such as college admissions, we introduce strategic ranking, in which the (designed) individual reward depends on an applicant's post-effort rank in a measurement of interest. Our results illustrate how competition among applicants affects the resulting equilibria and model insights. We analyze how various ranking reward designs, belonging to a family of step functions, trade off applicant, school, and societal utility, as well as how ranking design counters inequities arising from disparate access to resources. In particular, we find that randomization in the reward design can mitigate two measures of disparate impact, welfare gap and access.
[ { "created": "Thu, 16 Sep 2021 22:04:24 GMT", "version": "v1" }, { "created": "Mon, 21 Feb 2022 20:30:06 GMT", "version": "v2" } ]
2022-02-23
[ [ "Liu", "Lydia T.", "" ], [ "Garg", "Nikhil", "" ], [ "Borgs", "Christian", "" ] ]
Strategic classification studies the design of a classifier robust to the manipulation of input by strategic individuals. However, the existing literature does not consider the effect of competition among individuals as induced by the algorithm design. Motivated by constrained allocation settings such as college admissions, we introduce strategic ranking, in which the (designed) individual reward depends on an applicant's post-effort rank in a measurement of interest. Our results illustrate how competition among applicants affects the resulting equilibria and model insights. We analyze how various ranking reward designs, belonging to a family of step functions, trade off applicant, school, and societal utility, as well as how ranking design counters inequities arising from disparate access to resources. In particular, we find that randomization in the reward design can mitigate two measures of disparate impact, welfare gap and access.