Dataset schema (field: type, observed size range):
  id: string, 9-10 chars
  submitter: string, 1-64 chars
  authors: string, 4-20.7k chars
  title: string, 4-246 chars
  comments: string, 1-523 chars
  journal-ref: string, 4-404 chars
  doi: string, 11-153 chars
  report-no: string, 2-254 chars
  categories: string, 5-98 chars
  license: string, 9 distinct values
  orig_abstract: string, 14-3.35k chars
  versions: list, 1-60 items
  update_date: string, 10 chars
  authors_parsed: list, 1-1.35k items
  abstract: string, 11-3.34k chars
2203.01057
Le Yang
Le Yang, Junwei Han, Dingwen Zhang
Colar: Effective and Efficient Online Action Detection by Consulting Exemplars
CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online action detection has attracted increasing research interest in recent years. Current works model historical dependencies and anticipate the future to perceive the action evolution within a video segment and improve detection accuracy. However, the existing paradigm ignores category-level modeling and does not pay sufficient attention to efficiency. For a given category, its representative frames exhibit various characteristics; thus, category-level modeling can provide guidance complementary to temporal-dependency modeling. This paper develops an effective exemplar-consultation mechanism that first measures the similarity between a frame and exemplary frames, and then aggregates exemplary features based on the similarity weights. The mechanism is also efficient, as both similarity measurement and feature aggregation require limited computation. With the exemplar-consultation mechanism, long-term dependencies can be captured by regarding historical frames as exemplars, while category-level modeling can be achieved by regarding representative frames of a category as exemplars. Owing to this complementary category-level modeling, our method employs a lightweight architecture yet achieves new high performance on three benchmarks. In addition, by using a spatio-temporal network to process video frames, our method strikes a good trade-off between effectiveness and efficiency. Code is available at https://github.com/VividLe/Online-Action-Detection.
[ { "created": "Wed, 2 Mar 2022 12:13:08 GMT", "version": "v1" }, { "created": "Tue, 22 Mar 2022 13:31:53 GMT", "version": "v2" } ]
2022-03-23
[ [ "Yang", "Le", "" ], [ "Han", "Junwei", "" ], [ "Zhang", "Dingwen", "" ] ]
Online action detection has attracted increasing research interest in recent years. Current works model historical dependencies and anticipate the future to perceive the action evolution within a video segment and improve detection accuracy. However, the existing paradigm ignores category-level modeling and does not pay sufficient attention to efficiency. For a given category, its representative frames exhibit various characteristics; thus, category-level modeling can provide guidance complementary to temporal-dependency modeling. This paper develops an effective exemplar-consultation mechanism that first measures the similarity between a frame and exemplary frames, and then aggregates exemplary features based on the similarity weights. The mechanism is also efficient, as both similarity measurement and feature aggregation require limited computation. With the exemplar-consultation mechanism, long-term dependencies can be captured by regarding historical frames as exemplars, while category-level modeling can be achieved by regarding representative frames of a category as exemplars. Owing to this complementary category-level modeling, our method employs a lightweight architecture yet achieves new high performance on three benchmarks. In addition, by using a spatio-temporal network to process video frames, our method strikes a good trade-off between effectiveness and efficiency. Code is available at https://github.com/VividLe/Online-Action-Detection.
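The exemplar-consultation step the abstract describes (measure similarity to exemplary frames, then aggregate their features by those weights) can be sketched in a few lines of NumPy. The function name, shapes, and softmax weighting here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def consult_exemplars(frame, exemplars):
    """Aggregate exemplar features weighted by their similarity to the query frame.

    frame: (d,) feature of the current frame
    exemplars: (k, d) features of exemplary frames
    Returns a (d,) similarity-weighted combination of the exemplar features.
    """
    # Cosine similarity between the frame and each exemplar.
    sims = exemplars @ frame / (
        np.linalg.norm(exemplars, axis=1) * np.linalg.norm(frame) + 1e-8)
    # Softmax turns similarities into aggregation weights.
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return w @ exemplars

rng = np.random.default_rng(0)
frame = rng.normal(size=8)
exemplars = rng.normal(size=(5, 8))
agg = consult_exemplars(frame, exemplars)
print(agg.shape)  # (8,)
```

Both steps are a matrix-vector product and a weighted sum, which is why the mechanism stays cheap regardless of whether the exemplars are historical frames or category representatives.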
2002.02450
Pavel Gulyaev
Pavel Gulyaev, Eugenia Elistratova, Vasily Konovalov, Yuri Kuratov, Leonid Pugachev, Mikhail Burtsev
Goal-Oriented Multi-Task BERT-Based Dialogue State Tracker
null
null
null
null
cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dialogue State Tracking (DST) is a core component of virtual assistants such as Alexa or Siri. To accomplish various tasks, these assistants need to support an increasing number of services and APIs. The Schema-Guided State Tracking track of the 8th Dialogue System Technology Challenge highlighted the DST problem for unseen services. The organizers introduced the Schema-Guided Dialogue (SGD) dataset with multi-domain conversations and released a zero-shot dialogue state tracking model. In this work, we propose a GOaL-Oriented Multi-task BERT-based dialogue state tracker (GOLOMB) inspired by architectures for reading-comprehension question answering systems. The model "queries" the dialogue history with descriptions of slots and services, as well as possible slot values. This allows the model to transfer slot values across multi-domain dialogues and to scale to unseen slot types. Our model achieves a joint goal accuracy of 53.97% on the SGD dataset, outperforming the baseline model.
[ { "created": "Wed, 5 Feb 2020 22:56:12 GMT", "version": "v1" } ]
2020-02-10
[ [ "Gulyaev", "Pavel", "" ], [ "Elistratova", "Eugenia", "" ], [ "Konovalov", "Vasily", "" ], [ "Kuratov", "Yuri", "" ], [ "Pugachev", "Leonid", "" ], [ "Burtsev", "Mikhail", "" ] ]
Dialogue State Tracking (DST) is a core component of virtual assistants such as Alexa or Siri. To accomplish various tasks, these assistants need to support an increasing number of services and APIs. The Schema-Guided State Tracking track of the 8th Dialogue System Technology Challenge highlighted the DST problem for unseen services. The organizers introduced the Schema-Guided Dialogue (SGD) dataset with multi-domain conversations and released a zero-shot dialogue state tracking model. In this work, we propose a GOaL-Oriented Multi-task BERT-based dialogue state tracker (GOLOMB) inspired by architectures for reading-comprehension question answering systems. The model "queries" the dialogue history with descriptions of slots and services, as well as possible slot values. This allows the model to transfer slot values across multi-domain dialogues and to scale to unseen slot types. Our model achieves a joint goal accuracy of 53.97% on the SGD dataset, outperforming the baseline model.
2405.13721
Zhiwei Bai
Zhiwei Bai, Jiajie Zhao, Yaoyu Zhang
Connectivity Shapes Implicit Regularization in Matrix Factorization Models for Matrix Completion
34 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Matrix factorization models have been extensively studied as a valuable test-bed for understanding the implicit biases of overparameterized models. Although both low nuclear norm and low rank regularization have been studied for these models, a unified understanding of when, how, and why they achieve different implicit regularization effects remains elusive. In this work, we systematically investigate the implicit regularization of matrix factorization for solving matrix completion problems. We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias, with a transition from low nuclear norm to low rank as data shifts from disconnected to connected with increased observations. We identify a hierarchy of intrinsic invariant manifolds in the loss landscape that guide the training trajectory to evolve from low-rank to higher-rank solutions. Based on this finding, we theoretically characterize the training trajectory as following the hierarchical invariant manifold traversal process, generalizing the characterization of Li et al. (2020) to include the disconnected case. Furthermore, we establish conditions that guarantee minimum nuclear norm, closely aligning with our experimental findings, and we provide a dynamics characterization condition for ensuring minimum rank. Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.
[ { "created": "Wed, 22 May 2024 15:12:14 GMT", "version": "v1" } ]
2024-05-24
[ [ "Bai", "Zhiwei", "" ], [ "Zhao", "Jiajie", "" ], [ "Zhang", "Yaoyu", "" ] ]
Matrix factorization models have been extensively studied as a valuable test-bed for understanding the implicit biases of overparameterized models. Although both low nuclear norm and low rank regularization have been studied for these models, a unified understanding of when, how, and why they achieve different implicit regularization effects remains elusive. In this work, we systematically investigate the implicit regularization of matrix factorization for solving matrix completion problems. We empirically discover that the connectivity of observed data plays a crucial role in the implicit bias, with a transition from low nuclear norm to low rank as data shifts from disconnected to connected with increased observations. We identify a hierarchy of intrinsic invariant manifolds in the loss landscape that guide the training trajectory to evolve from low-rank to higher-rank solutions. Based on this finding, we theoretically characterize the training trajectory as following the hierarchical invariant manifold traversal process, generalizing the characterization of Li et al. (2020) to include the disconnected case. Furthermore, we establish conditions that guarantee minimum nuclear norm, closely aligning with our experimental findings, and we provide a dynamics characterization condition for ensuring minimum rank. Our work reveals the intricate interplay between data connectivity, training dynamics, and implicit regularization in matrix factorization models.
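The empirical setup the abstract studies (gradient descent on an overparameterized factorization, fitting only the observed entries of a low-rank matrix) can be reproduced in miniature. The matrix size, observation rate, step count, and learning rate below are our choices, not the paper's; the point is only that the training loss on observed entries goes to zero while the implicit bias decides the unobserved ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
# Ground-truth rank-1 matrix with unit spectral norm; 60% of entries observed.
u = rng.normal(size=(n, 1)); u /= np.linalg.norm(u)
v = rng.normal(size=(n, 1)); v /= np.linalg.norm(v)
M = u @ v.T
mask = rng.random((n, n)) < 0.6

# Overparameterized factorization X = A @ B with small (near-balanced) init,
# trained by gradient descent on the observed entries only.
A = 1e-2 * rng.normal(size=(n, n))
B = 1e-2 * rng.normal(size=(n, n))
lr = 0.1
for _ in range(3000):
    R = (A @ B - M) * mask        # residual restricted to the observations
    A -= lr * R @ B.T             # grad of 0.5*||R||^2 w.r.t. A
    B -= lr * A.T @ R             # grad of 0.5*||R||^2 w.r.t. B
X = A @ B
obs_err = np.abs((X - M) * mask).max()
svals = np.linalg.svd(X, compute_uv=False)
print(obs_err, (svals > 1e-2).sum())   # inspect fit and effective rank
```

Varying the observation rate (and hence the connectivity of the observed pattern) and re-inspecting the singular values is the kind of experiment the abstract's low-nuclear-norm vs. low-rank transition refers to.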
1807.08532
Baha Eddine Youcef Belmekki
Baha Eddine Youcef Belmekki, Abdelkrim Hamza, Beno\^it Escrig
Performance Analysis of Cooperative Communications at Road Intersections Using Stochastic Geometry Tools
31 pages, 10 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicular safety communications (VSCs) make relevant contributions to avoiding congestion and preventing road accidents, particularly at road intersections, since these areas are more prone to accidents. In this context, one of the main impairments that affect the performance of VSCs is interference. In this paper, we develop a tractable framework to model cooperative transmissions in the presence of interference for VSCs at intersections. We use tools from stochastic geometry and model the locations of interfering vehicles as a Poisson point process. First, we calculate the outage probability (OP) of a direct transmission when the receiving node can be anywhere in the plane. Then, we analyze the OP performance of a cooperative transmission scheme. The analysis takes two dimensions into account: the decoding strategy at the receiver and vehicle mobility. We derive the optimal relay position, from both analytical and simulation results, for different traffic densities and vehicle mobility models. We also show that the OP no longer improves once the number of infrastructure relays reaches a threshold value. Finally, we show that the OP performance of VSCs is higher at intersections than on highways. We validate our analytical results with Monte Carlo simulations.
[ { "created": "Mon, 23 Jul 2018 11:12:26 GMT", "version": "v1" }, { "created": "Sat, 13 Jul 2019 15:39:00 GMT", "version": "v2" }, { "created": "Tue, 26 Nov 2019 17:14:05 GMT", "version": "v3" } ]
2019-11-27
[ [ "Belmekki", "Baha Eddine Youcef", "" ], [ "Hamza", "Abdelkrim", "" ], [ "Escrig", "Benoît", "" ] ]
Vehicular safety communications (VSCs) make relevant contributions to avoiding congestion and preventing road accidents, particularly at road intersections, since these areas are more prone to accidents. In this context, one of the main impairments that affect the performance of VSCs is interference. In this paper, we develop a tractable framework to model cooperative transmissions in the presence of interference for VSCs at intersections. We use tools from stochastic geometry and model the locations of interfering vehicles as a Poisson point process. First, we calculate the outage probability (OP) of a direct transmission when the receiving node can be anywhere in the plane. Then, we analyze the OP performance of a cooperative transmission scheme. The analysis takes two dimensions into account: the decoding strategy at the receiver and vehicle mobility. We derive the optimal relay position, from both analytical and simulation results, for different traffic densities and vehicle mobility models. We also show that the OP no longer improves once the number of infrastructure relays reaches a threshold value. Finally, we show that the OP performance of VSCs is higher at intersections than on highways. We validate our analytical results with Monte Carlo simulations.
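A Monte Carlo outage-probability estimate of the kind used to validate such analyses can be sketched as follows. The geometry is reduced to a single road with interferers drawn from a 1-D Poisson point process, and the density, SIR threshold, and path-loss exponent are illustrative values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_probability(lam, road_len, d_tx, theta=1.0, alpha=3.0, trials=5000):
    """Monte Carlo outage probability of a direct link with Poisson interferers.

    lam: interferer density (vehicles per meter) along the road
    road_len: length of the road segment, receiver at its center
    d_tx: transmitter-receiver distance; theta: SIR threshold
    alpha: path-loss exponent; fading is Rayleigh (unit-mean exponential power).
    """
    outages = 0
    for _ in range(trials):
        k = rng.poisson(lam * road_len)
        # Conditioned on k, PPP points are i.i.d. uniform on the segment.
        x = rng.uniform(-road_len / 2, road_len / 2, size=k)
        interference = np.sum(rng.exponential(size=k) * np.abs(x) ** -alpha)
        signal = rng.exponential() * d_tx ** -alpha
        if signal < theta * interference:
            outages += 1
    return outages / trials

p = outage_probability(lam=0.01, road_len=1000.0, d_tx=20.0)
print(p)
```

Sweeping `lam` reproduces the expected monotone behavior: denser traffic means more interference and a higher OP.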
1503.03283
Ayineedi Venkateswarlu
Ayineedi Venkateswarlu, Santanu Sarkar and A. Sai Mali
On Acyclic Edge-Coloring of Complete Bipartite Graphs
17 pages, 10 figures
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An acyclic edge-coloring of a graph is a proper edge-coloring without bichromatic ($2$-colored) cycles. The acyclic chromatic index of a graph $G$, denoted by $a'(G)$, is the least integer $k$ such that $G$ admits an acyclic edge-coloring using $k$ colors. Let $\Delta = \Delta(G)$ denote the maximum degree of a vertex in a graph $G$. A complete bipartite graph with $n$ vertices on each side is denoted by $K_{n,n}$. Basavaraju, Chandran and Kummini proved that $a'(K_{n,n}) \ge n+2 = \Delta + 2$ when $n$ is odd. Basavaraju and Chandran provided an acyclic edge-coloring of $K_{p,p}$ using $p+2$ colors, thus establishing $a'(K_{p,p}) = p+2 = \Delta + 2$ when $p$ is an odd prime. The main tool in their approach is a perfect $1$-factorization of $K_{p,p}$. Recently, following their approach, Venkateswarlu and Sarkar have shown that $K_{2p-1,2p-1}$ admits an acyclic edge-coloring using $2p+1$ colors, which implies that $a'(K_{2p-1,2p-1}) = 2p+1 = \Delta + 2$, where $p$ is an odd prime. In this paper, we generalize this approach and present a general framework to possibly obtain an acyclic edge-coloring, using $n+2 = \Delta+2$ colors, of any $K_{n,n}$ that possesses a perfect $1$-factorization. Within this framework, we show that $K_{p^2,p^2}$ admits an acyclic edge-coloring using $p^2+2$ colors, thus establishing $a'(K_{p^2,p^2}) = p^2+2 = \Delta + 2$ when $p\ge 5$ is an odd prime.
[ { "created": "Wed, 11 Mar 2015 11:41:31 GMT", "version": "v1" } ]
2015-03-12
[ [ "Venkateswarlu", "Ayineedi", "" ], [ "Sarkar", "Santanu", "" ], [ "Mali", "A. Sai", "" ] ]
An acyclic edge-coloring of a graph is a proper edge-coloring without bichromatic ($2$-colored) cycles. The acyclic chromatic index of a graph $G$, denoted by $a'(G)$, is the least integer $k$ such that $G$ admits an acyclic edge-coloring using $k$ colors. Let $\Delta = \Delta(G)$ denote the maximum degree of a vertex in a graph $G$. A complete bipartite graph with $n$ vertices on each side is denoted by $K_{n,n}$. Basavaraju, Chandran and Kummini proved that $a'(K_{n,n}) \ge n+2 = \Delta + 2$ when $n$ is odd. Basavaraju and Chandran provided an acyclic edge-coloring of $K_{p,p}$ using $p+2$ colors, thus establishing $a'(K_{p,p}) = p+2 = \Delta + 2$ when $p$ is an odd prime. The main tool in their approach is a perfect $1$-factorization of $K_{p,p}$. Recently, following their approach, Venkateswarlu and Sarkar have shown that $K_{2p-1,2p-1}$ admits an acyclic edge-coloring using $2p+1$ colors, which implies that $a'(K_{2p-1,2p-1}) = 2p+1 = \Delta + 2$, where $p$ is an odd prime. In this paper, we generalize this approach and present a general framework to possibly obtain an acyclic edge-coloring, using $n+2 = \Delta+2$ colors, of any $K_{n,n}$ that possesses a perfect $1$-factorization. Within this framework, we show that $K_{p^2,p^2}$ admits an acyclic edge-coloring using $p^2+2$ colors, thus establishing $a'(K_{p^2,p^2}) = p^2+2 = \Delta + 2$ when $p\ge 5$ is an odd prime.
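The definition is easy to make operational: a proper edge-coloring is acyclic iff, for every pair of colors, the edges in those two colors form a forest. This small verifier (our code, not the authors') illustrates the definition on $K_{2,2}$, a 4-cycle, where any proper 2-coloring is bichromatic but 3 colors suffice:

```python
import itertools

def is_acyclic_edge_coloring(edges, color):
    """Check that `color` (edge -> color) is a proper edge-coloring of `edges`
    with no bichromatic (2-colored) cycle."""
    # Properness: edges sharing an endpoint must get distinct colors.
    seen = {}
    for e in edges:
        for w in e:
            if color[e] in seen.setdefault(w, set()):
                return False
            seen[w].add(color[e])
    # Acyclicity: for every pair of colors, the union of their edge sets
    # must be a forest (checked with union-find).
    for c1, c2 in itertools.combinations(set(color.values()), 2):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for e in edges:
            if color[e] in (c1, c2):
                ru, rv = find(e[0]), find(e[1])
                if ru == rv:
                    return False   # this edge closes a bichromatic cycle
                parent[ru] = rv
    return True

edges = [("u0", "v0"), ("u0", "v1"), ("u1", "v0"), ("u1", "v1")]
bad = {("u0", "v0"): 1, ("u0", "v1"): 2, ("u1", "v0"): 2, ("u1", "v1"): 1}
good = {("u0", "v0"): 1, ("u0", "v1"): 2, ("u1", "v0"): 2, ("u1", "v1"): 3}
print(is_acyclic_edge_coloring(edges, bad), is_acyclic_edge_coloring(edges, good))
# False True
```

Such a checker is handy for verifying candidate colorings of small $K_{n,n}$ produced from a perfect 1-factorization.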
2212.03559
Xihong Yang
Xihong Yang, Erxue Min, Ke Liang, Yue Liu, Siwei Wang, Sihang Zhou, Huijun Wu, Xinwang Liu, En Zhu
GraphLearner: Graph Node Clustering with Fully Learnable Augmentation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters. The quality of contrastive samples is crucial for achieving good performance, making augmentation techniques a key factor in the process. However, the augmented samples in existing methods are always predefined from human experience and are agnostic to the downstream clustering task, leading to high human-resource costs and poor performance. To overcome these limitations, we propose Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner. It introduces learnable augmentors to generate high-quality, task-specific augmented samples for CDGC. GraphLearner incorporates two learnable augmentors specifically designed to capture attribute and structural information. Moreover, we introduce two refinement matrices, the high-confidence pseudo-label matrix and the cross-view sample similarity matrix, to enhance the reliability of the learned affinity matrix. During training, we observe distinct optimization goals for the learnable augmentors and the contrastive learning networks: we must guarantee both the consistency of the embeddings and the diversity of the augmented samples. To address this challenge, we propose an adversarial learning mechanism within our method. Besides, we leverage a two-stage training strategy to refine the high-confidence matrices. Extensive experimental results on six benchmark datasets validate the effectiveness of GraphLearner. The code and appendix of GraphLearner are available at https://github.com/xihongyang1999/GraphLearner on GitHub.
[ { "created": "Wed, 7 Dec 2022 10:19:39 GMT", "version": "v1" }, { "created": "Thu, 28 Sep 2023 13:14:07 GMT", "version": "v2" }, { "created": "Tue, 6 Aug 2024 15:56:31 GMT", "version": "v3" } ]
2024-08-07
[ [ "Yang", "Xihong", "" ], [ "Min", "Erxue", "" ], [ "Liang", "Ke", "" ], [ "Liu", "Yue", "" ], [ "Wang", "Siwei", "" ], [ "Zhou", "Sihang", "" ], [ "Wu", "Huijun", "" ], [ "Liu", "Xinwang", "" ], [ "Zhu", "En", "" ] ]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters. The quality of contrastive samples is crucial for achieving good performance, making augmentation techniques a key factor in the process. However, the augmented samples in existing methods are always predefined from human experience and are agnostic to the downstream clustering task, leading to high human-resource costs and poor performance. To overcome these limitations, we propose Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner. It introduces learnable augmentors to generate high-quality, task-specific augmented samples for CDGC. GraphLearner incorporates two learnable augmentors specifically designed to capture attribute and structural information. Moreover, we introduce two refinement matrices, the high-confidence pseudo-label matrix and the cross-view sample similarity matrix, to enhance the reliability of the learned affinity matrix. During training, we observe distinct optimization goals for the learnable augmentors and the contrastive learning networks: we must guarantee both the consistency of the embeddings and the diversity of the augmented samples. To address this challenge, we propose an adversarial learning mechanism within our method. Besides, we leverage a two-stage training strategy to refine the high-confidence matrices. Extensive experimental results on six benchmark datasets validate the effectiveness of GraphLearner. The code and appendix of GraphLearner are available at https://github.com/xihongyang1999/GraphLearner on GitHub.
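The contrastive objective that CDGC methods build on (this is the generic InfoNCE loss, not GraphLearner's full model with learnable augmentors and refinement matrices) shows why augmentation quality matters: well-aligned positive pairs give a much lower loss than mismatched ones.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Generic InfoNCE loss between two views' embeddings, shape (n, d).
    Matching rows are positive pairs; all other rows serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # NLL of the positive pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(16, 8)))   # good augmentation
shuffled = info_nce(z, rng.permutation(z))                   # broken pairing
print(aligned, shuffled)
```

A learnable augmentor, in this view, searches for views that stay consistent (low loss) while remaining diverse enough to be informative, which is the tension the abstract's adversarial mechanism addresses.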
2101.02839
Haojian Zhang
Haojian Zhang, Yabin Zhang, Kui Jia, Lei Zhang
Unsupervised Domain Adaptation of Black-Box Source Models
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Unsupervised domain adaptation (UDA) aims to learn models for a target domain of unlabeled data by transferring knowledge from a labeled source domain. In the traditional UDA setting, labeled source data are assumed to be available for adaptation. Due to increasing concerns over data privacy, source-free UDA is highly appreciated as a new UDA setting, where only a trained source model is assumed to be available while labeled source data remain private. However, trained source models may also be unavailable in practice, since source models may have commercial value and exposing them brings risks to the source domain, e.g., model misuse and white-box attacks. In this work, we study a subtly different setting, named Black-Box Unsupervised Domain Adaptation (B$^2$UDA), where only the application programming interface of the source model is accessible to the target domain; in other words, the source model itself is kept as a black box. To tackle B$^2$UDA, we propose a simple yet effective method, termed Iterative Learning with Noisy Labels (IterLNL). With the black-box model as a noisy-labeling tool, IterLNL iteratively alternates between noisy labeling and learning with noisy labels (LNL). To facilitate the implementation of LNL in B$^2$UDA, we estimate the noise rate from model predictions on unlabeled target data and propose category-wise sampling to tackle the unbalanced label noise among categories. Experiments on benchmark datasets show the efficacy of IterLNL. Given neither source data nor source models, IterLNL performs comparably with traditional UDA methods that make full use of labeled source data.
[ { "created": "Fri, 8 Jan 2021 04:00:49 GMT", "version": "v1" }, { "created": "Sun, 28 Mar 2021 02:13:16 GMT", "version": "v2" } ]
2021-03-30
[ [ "Zhang", "Haojian", "" ], [ "Zhang", "Yabin", "" ], [ "Jia", "Kui", "" ], [ "Zhang", "Lei", "" ] ]
Unsupervised domain adaptation (UDA) aims to learn models for a target domain of unlabeled data by transferring knowledge from a labeled source domain. In the traditional UDA setting, labeled source data are assumed to be available for adaptation. Due to increasing concerns over data privacy, source-free UDA is highly appreciated as a new UDA setting, where only a trained source model is assumed to be available while labeled source data remain private. However, trained source models may also be unavailable in practice, since source models may have commercial value and exposing them brings risks to the source domain, e.g., model misuse and white-box attacks. In this work, we study a subtly different setting, named Black-Box Unsupervised Domain Adaptation (B$^2$UDA), where only the application programming interface of the source model is accessible to the target domain; in other words, the source model itself is kept as a black box. To tackle B$^2$UDA, we propose a simple yet effective method, termed Iterative Learning with Noisy Labels (IterLNL). With the black-box model as a noisy-labeling tool, IterLNL iteratively alternates between noisy labeling and learning with noisy labels (LNL). To facilitate the implementation of LNL in B$^2$UDA, we estimate the noise rate from model predictions on unlabeled target data and propose category-wise sampling to tackle the unbalanced label noise among categories. Experiments on benchmark datasets show the efficacy of IterLNL. Given neither source data nor source models, IterLNL performs comparably with traditional UDA methods that make full use of labeled source data.
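The iterate-between-labeling-and-learning idea can be sketched with a toy black box and a nearest-centroid learner standing in for the real source and target models. Everything here is a deliberately simplified assumption (2-D blobs, 30% synthetic label noise, no noise-rate estimation or category-wise sampling), meant only to show why the loop can clean up noisy pseudo-labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled target data: two Gaussian blobs; true labels kept only to evaluate.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)

def black_box_api(x):
    """Stand-in for the source model's prediction API (the model stays hidden):
    a decent classifier corrupted by 30% label noise."""
    pred = (x[:, 0] > 0).astype(int)
    flip = rng.random(len(x)) < 0.3
    return np.where(flip, 1 - pred, pred)

labels = black_box_api(X)                # initial noisy pseudo-labels
init_acc = (labels == y_true).mean()
for _ in range(5):
    # "Train" a target model (nearest centroid) on the current pseudo-labels,
    # then relabel everything with it -- a bare-bones noisy-label loop.
    centroids = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])
    dist = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = dist.argmin(axis=1)
final_acc = (labels == y_true).mean()
print(init_acc, final_acc)
```

Because the noise is spread roughly symmetrically, the centroids fit through it and the relearned decision rule beats the raw black-box labels; IterLNL's noise-rate estimate and category-wise sampling make the same loop robust when the noise is unbalanced.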
1312.5307
Bryan Ford
Joan Feigenbaum and Bryan Ford
Seeking Anonymity in an Internet Panopticon
8 pages, 10 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable at least in principle to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. Toward this end, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives - verifiable shuffles and dining cryptographers - but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate a systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the use of anonymity tools such as Tor and Dissent against such attacks.
[ { "created": "Wed, 18 Dec 2013 20:50:35 GMT", "version": "v1" }, { "created": "Sat, 29 Mar 2014 15:33:45 GMT", "version": "v2" }, { "created": "Sat, 3 Jan 2015 01:10:43 GMT", "version": "v3" } ]
2015-01-06
[ [ "Feigenbaum", "Joan", "" ], [ "Ford", "Bryan", "" ] ]
Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable at least in principle to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. Toward this end, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives - verifiable shuffles and dining cryptographers - but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate a systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the use of anonymity tools such as Tor and Dissent against such attacks.
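The dining-cryptographers primitive the abstract mentions is small enough to demonstrate end to end. This sketch (function name and parameterization ours) shows the core XOR cancellation: every pairwise coin appears in exactly two announcements, so XOR-ing all announcements reveals the message bit without revealing who injected it:

```python
import secrets

def dc_net_round(n, sender, message_bit):
    """One round of a dining-cryptographers (DC) net among n participants."""
    # Every pair of participants privately shares one random coin flip.
    coin = {(i, j): secrets.randbits(1)
            for i in range(n) for j in range(i + 1, n)}
    announcements = []
    for i in range(n):
        b = 0
        for j in range(n):
            if j != i:
                b ^= coin[(min(i, j), max(i, j))]
        if i == sender:
            b ^= message_bit     # only the sender folds the message in
        announcements.append(b)
    # Each coin cancels (it appears in two announcements); the XOR of all
    # announcements is the message bit, with no trace of the sender.
    out = 0
    for b in announcements:
        out ^= b
    return out

print(dc_net_round(5, sender=2, message_bit=1))  # 1
```

Dissent's contribution, per the abstract, is scaling this information-theoretic core (together with verifiable shuffles) to thousands of participants with accountability, which the toy round above does not attempt.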
2206.13381
Su Yuchen
Yuchen Su, Zhiwen Shao, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, and Rui Yao
TextDCT: Arbitrary-Shaped Text Detection via Discrete Cosine Transform Mask
This paper has been accepted by IEEE Transactions on Multimedia
null
10.1109/TMM.2022.3186431
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arbitrary-shaped scene text detection is a challenging task due to the wide variation of text in font, size, color, and orientation. Most existing regression-based methods resort to regressing the masks or contour points of text regions to model text instances. However, regressing complete masks entails high training complexity, and contour points are not sufficient to capture the details of highly curved text. To tackle these limitations, we propose a novel lightweight anchor-free text detection framework called TextDCT, which adopts the discrete cosine transform (DCT) to encode text masks as compact vectors. Further, considering the imbalanced number of training samples among pyramid layers, we employ only a single-level head for top-down prediction. To model multi-scale text in a single-level head, we introduce a novel positive-sampling strategy that treats the shrunk text region as positive samples, and design a feature awareness module (FAM) for spatial and scale awareness that fuses rich contextual information and focuses on the more significant features. Moreover, we propose a segmented non-maximum suppression (S-NMS) method that filters low-quality mask regressions. Extensive experiments on four challenging datasets demonstrate that TextDCT obtains competitive performance in both accuracy and efficiency. Specifically, TextDCT achieves an F-measure of 85.1 at 17.2 frames per second (FPS) on CTW1500 and an F-measure of 84.9 at 15.1 FPS on Total-Text.
[ { "created": "Mon, 27 Jun 2022 15:42:25 GMT", "version": "v1" } ]
2022-06-28
[ [ "Su", "Yuchen", "" ], [ "Shao", "Zhiwen", "" ], [ "Zhou", "Yong", "" ], [ "Meng", "Fanrong", "" ], [ "Zhu", "Hancheng", "" ], [ "Liu", "Bing", "" ], [ "Yao", "Rui", "" ] ]
Arbitrary-shaped scene text detection is a challenging task due to the wide variation of text in font, size, color, and orientation. Most existing regression-based methods resort to regressing the masks or contour points of text regions to model text instances. However, regressing complete masks entails high training complexity, and contour points are not sufficient to capture the details of highly curved text. To tackle these limitations, we propose a novel lightweight anchor-free text detection framework called TextDCT, which adopts the discrete cosine transform (DCT) to encode text masks as compact vectors. Further, considering the imbalanced number of training samples among pyramid layers, we employ only a single-level head for top-down prediction. To model multi-scale text in a single-level head, we introduce a novel positive-sampling strategy that treats the shrunk text region as positive samples, and design a feature awareness module (FAM) for spatial and scale awareness that fuses rich contextual information and focuses on the more significant features. Moreover, we propose a segmented non-maximum suppression (S-NMS) method that filters low-quality mask regressions. Extensive experiments on four challenging datasets demonstrate that TextDCT obtains competitive performance in both accuracy and efficiency. Specifically, TextDCT achieves an F-measure of 85.1 at 17.2 frames per second (FPS) on CTW1500 and an F-measure of 84.9 at 15.1 FPS on Total-Text.
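The DCT-as-compact-mask-code idea is easy to demonstrate with a hand-rolled orthonormal DCT-II. The window size and number of retained coefficients below are illustrative, not the paper's settings, and the "text mask" is just a rectangle:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

n, keep = 32, 8
C = dct_matrix(n)
# A toy "text mask": a filled rectangle inside a 32x32 window.
mask = np.zeros((n, n))
mask[6:20, 4:28] = 1.0

coeffs = C @ mask @ C.T            # 2-D DCT of the mask
compact = coeffs[:keep, :keep]     # the compact code: low frequencies only
padded = np.zeros((n, n))
padded[:keep, :keep] = compact
recon = (C.T @ padded @ C) > 0.5   # inverse DCT, then re-binarize

iou = (recon & (mask > 0)).sum() / (recon | (mask > 0)).sum()
print(round(iou, 3))               # high overlap despite the 16x compression
```

Predicting an 8x8 block of coefficients instead of a full 32x32 mask is what keeps the regression target compact while preserving the mask's overall shape; fine detail lives in the discarded high frequencies.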
1311.3048
Ofer Neiman
Ittai Abraham and Cyril Gavoille and Anupam Gupta and Ofer Neiman and Kunal Talwar
Cops, Robbers, and Threatening Skeletons: Padded Decomposition for Minor-Free Graphs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that any graph excluding $K_r$ as a minor can be partitioned into clusters of diameter at most $\Delta$ while removing at most an $O(r/\Delta)$ fraction of the edges. This improves over the result of Fakcharoenphol and Talwar, who, building on the work of Klein, Plotkin and Rao, gave a partitioning that required removing an $O(r^2/\Delta)$ fraction of the edges. Our result is obtained by a new approach that relates the topological properties (excluding a minor) of a graph to its geometric properties (the induced shortest-path metric). Specifically, we show that techniques used by Andreae in his investigation of the cops-and-robbers game on excluded-minor graphs can be used to construct padded decompositions of the metrics induced by such graphs. In particular, we get probabilistic partitions with padding parameter $O(r)$ and strong-diameter partitions with padding parameter $O(r^2)$ for $K_r$-free graphs, padding $O(k)$ for graphs of treewidth $k$, and padding $O(\log g)$ for graphs of genus $g$.
[ { "created": "Wed, 13 Nov 2013 08:21:04 GMT", "version": "v1" }, { "created": "Sun, 10 Jan 2021 14:46:19 GMT", "version": "v2" } ]
2021-01-12
[ [ "Abraham", "Ittai", "" ], [ "Gavoille", "Cyril", "" ], [ "Gupta", "Anupam", "" ], [ "Neiman", "Ofer", "" ], [ "Talwar", "Kunal", "" ] ]
We prove that any graph excluding $K_r$ as a minor can be partitioned into clusters of diameter at most $\Delta$ while removing at most an $O(r/\Delta)$ fraction of the edges. This improves over the result of Fakcharoenphol and Talwar, who, building on the work of Klein, Plotkin and Rao, gave a partitioning that required removing an $O(r^2/\Delta)$ fraction of the edges. Our result is obtained by a new approach that relates the topological properties (excluding a minor) of a graph to its geometric properties (the induced shortest-path metric). Specifically, we show that techniques used by Andreae in his investigation of the cops-and-robbers game on excluded-minor graphs can be used to construct padded decompositions of the metrics induced by such graphs. In particular, we get probabilistic partitions with padding parameter $O(r)$ and strong-diameter partitions with padding parameter $O(r^2)$ for $K_r$-free graphs, padding $O(k)$ for graphs with treewidth $k$, and padding $O(\log g)$ for graphs with genus $g$.
2006.02051
Qiyao Deng
Qiyao Deng, Jie Cao, Yunfan Liu, Zhenhua Chai, Qi Li and Zhenan Sun
Reference-guided Face Component Editing
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Face portrait editing has achieved great progress in recent years. However, previous methods either 1) operate on pre-defined face attributes, lacking the flexibility to control the shapes of high-level semantic facial components (e.g., eyes, nose, mouth), or 2) take a manually edited mask or sketch as an intermediate representation for observable changes, but such additional input usually requires extra effort to obtain. To break the limitations (e.g., shape, mask, or sketch) of the existing methods, we propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing with geometric changes. Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components. To encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features with the target face component features extracted from the reference image. Through extensive experimental validation and comparisons, we verify the effectiveness of the proposed framework.
[ { "created": "Wed, 3 Jun 2020 05:34:54 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2020 13:37:59 GMT", "version": "v2" } ]
2020-07-15
[ [ "Deng", "Qiyao", "" ], [ "Cao", "Jie", "" ], [ "Liu", "Yunfan", "" ], [ "Chai", "Zhenhua", "" ], [ "Li", "Qi", "" ], [ "Sun", "Zhenan", "" ] ]
Face portrait editing has achieved great progress in recent years. However, previous methods either 1) operate on pre-defined face attributes, lacking the flexibility to control the shapes of high-level semantic facial components (e.g., eyes, nose, mouth), or 2) take a manually edited mask or sketch as an intermediate representation for observable changes, but such additional input usually requires extra effort to obtain. To break the limitations (e.g., shape, mask, or sketch) of the existing methods, we propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing with geometric changes. Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components. To encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features with the target face component features extracted from the reference image. Through extensive experimental validation and comparisons, we verify the effectiveness of the proposed framework.
2109.13404
Yunchou Xing
Yunchou Xing and Theodore S. Rappaport
Millimeter Wave and Terahertz Urban Microcell Propagation Measurements and Models
5 pages, 2 figures, and 3 tables
IEEE Communications Letters 2021
10.1109/LCOMM.2021.3117900
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Comparisons of outdoor Urban Microcell (UMi) large-scale path loss models, root mean square (RMS) delay spreads (DS), angular spreads (AS), and the number of spatial beams for extensive measurements performed at 28, 38, 73, and 142 GHz are presented in this letter. Measurement campaigns were conducted from 2011 to 2020 in downtown Austin, Texas, Manhattan (New York City), and Brooklyn, New York, with communication ranges up to 930 m. Key similarities and differences in outdoor wireless channels are observed when comparing the channel statistics across a wide range of frequencies from millimeter-wave to sub-THz bands. Path loss exponents (PLEs) are remarkably similar over all measured frequencies, when referenced to the first meter free space path loss, and the RMS DS and AS decrease as frequency increases. The similar PLEs from millimeter-wave to THz frequencies imply that spacing between cellular base stations will not have to change as carrier frequencies increase towards THz, since wider bandwidth channels at sub-THz or THz carrier frequencies will cover similar distances because antenna gains increase quadratically with increasing frequency when the physical antenna area remains constant.
[ { "created": "Tue, 28 Sep 2021 00:08:56 GMT", "version": "v1" } ]
2021-10-26
[ [ "Xing", "Yunchou", "" ], [ "Rappaport", "Theodore S.", "" ] ]
Comparisons of outdoor Urban Microcell (UMi) large-scale path loss models, root mean square (RMS) delay spreads (DS), angular spreads (AS), and the number of spatial beams for extensive measurements performed at 28, 38, 73, and 142 GHz are presented in this letter. Measurement campaigns were conducted from 2011 to 2020 in downtown Austin, Texas, Manhattan (New York City), and Brooklyn, New York, with communication ranges up to 930 m. Key similarities and differences in outdoor wireless channels are observed when comparing the channel statistics across a wide range of frequencies from millimeter-wave to sub-THz bands. Path loss exponents (PLEs) are remarkably similar over all measured frequencies, when referenced to the first meter free space path loss, and the RMS DS and AS decrease as frequency increases. The similar PLEs from millimeter-wave to THz frequencies imply that spacing between cellular base stations will not have to change as carrier frequencies increase towards THz, since wider bandwidth channels at sub-THz or THz carrier frequencies will cover similar distances because antenna gains increase quadratically with increasing frequency when the physical antenna area remains constant.
2211.09443
Amirmohammad Farzaneh
Amirmohammad Farzaneh, Mihai-Alin Badiu, Justin P. Coon
LEAST: a Low-Energy Adaptive Scalable Tree-based routing protocol for Wireless Sensor Networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Routing is one of the critical and ongoing challenges in Wireless Sensor Networks. The main challenge has always been to design a routing protocol that reduces the communication overhead, hence saving the energy of the sensors in the network. Hierarchical routing protocols are known to be the most energy-efficient routing protocols for Wireless Sensor Networks. In this paper, a more generalized hierarchical routing protocol is introduced for Wireless Sensor Networks, based on tree data structures. The clustering in the proposed protocol has the format of a general tree, which is constructed in an adaptive manner based on the distances between the sensors. Results show that the proposed tree-based protocol yields a significant benefit in energy consumption and network lifetime over the existing hierarchical approaches.
[ { "created": "Thu, 17 Nov 2022 10:11:42 GMT", "version": "v1" } ]
2022-11-18
[ [ "Farzaneh", "Amirmohammad", "" ], [ "Badiu", "Mihai-Alin", "" ], [ "Coon", "Justin P.", "" ] ]
Routing is one of the critical and ongoing challenges in Wireless Sensor Networks. The main challenge has always been to design a routing protocol that reduces the communication overhead, hence saving the energy of the sensors in the network. Hierarchical routing protocols are known to be the most energy-efficient routing protocols for Wireless Sensor Networks. In this paper, a more generalized hierarchical routing protocol is introduced for Wireless Sensor Networks, based on tree data structures. The clustering in the proposed protocol has the format of a general tree, which is constructed in an adaptive manner based on the distances between the sensors. Results show that the proposed tree-based protocol yields a significant benefit in energy consumption and network lifetime over the existing hierarchical approaches.
2105.14923
Bestoun Ahmed Dr.
Kamal Z. Zamli, Md. Abdul Kader, Saiful Azad, Bestoun S. Ahmed
Hybrid Henry Gas Solubility Optimization Algorithm with Dynamic Cluster-to-Algorithm Mapping for Search-based Software Engineering Problems
31 pages
Neural Computing and Applications 2021
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper discusses a new variant of the Henry Gas Solubility Optimization (HGSO) Algorithm, called Hybrid HGSO (HHGSO). Unlike its predecessor, HHGSO allows multiple clusters serving different individual meta-heuristic algorithms (i.e., each with its own defined parameters and local best) to coexist within the same population. Exploiting dynamic cluster-to-algorithm mapping via a penalty-and-reward model with an adaptive switching factor, HHGSO offers a novel approach to meta-heuristic hybridization consisting of the Jaya Algorithm, Sooty Tern Optimization Algorithm, Butterfly Optimization Algorithm, and Owl Search Algorithm. The results acquired from the two selected case studies (i.e., involving the team formation problem and combinatorial test suite generation) indicate that the hybridization has notably improved the performance of HGSO and gives superior performance against other competing meta-heuristic and hyper-heuristic algorithms.
[ { "created": "Mon, 31 May 2021 12:42:15 GMT", "version": "v1" } ]
2021-06-01
[ [ "Zamli", "Kamal Z.", "" ], [ "Kader", "Md. Abdul", "" ], [ "Azad", "Saiful", "" ], [ "Ahmed", "Bestoun S.", "" ] ]
This paper discusses a new variant of the Henry Gas Solubility Optimization (HGSO) Algorithm, called Hybrid HGSO (HHGSO). Unlike its predecessor, HHGSO allows multiple clusters serving different individual meta-heuristic algorithms (i.e., each with its own defined parameters and local best) to coexist within the same population. Exploiting dynamic cluster-to-algorithm mapping via a penalty-and-reward model with an adaptive switching factor, HHGSO offers a novel approach to meta-heuristic hybridization consisting of the Jaya Algorithm, Sooty Tern Optimization Algorithm, Butterfly Optimization Algorithm, and Owl Search Algorithm. The results acquired from the two selected case studies (i.e., involving the team formation problem and combinatorial test suite generation) indicate that the hybridization has notably improved the performance of HGSO and gives superior performance against other competing meta-heuristic and hyper-heuristic algorithms.
2308.14710
Xudong Wang
Xudong Wang and Ishan Misra and Ziyun Zeng and Rohit Girdhar and Trevor Darrell
VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation
Preprint. Code: https://github.com/facebookresearch/CutLER
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing approaches to unsupervised video instance segmentation typically rely on motion estimates and experience difficulties tracking small or divergent motions. We present VideoCutLER, a simple method for unsupervised multi-instance video segmentation without using motion-based learning signals like optical flow or training on natural videos. Our key insight is that using high-quality pseudo masks and a simple video synthesis method for model training is surprisingly sufficient to enable the resulting video model to effectively segment and track multiple instances across video frames. We show the first competitive unsupervised learning results on the challenging YouTubeVIS-2019 benchmark, achieving 50.7% APvideo^50 , surpassing the previous state-of-the-art by a large margin. VideoCutLER can also serve as a strong pretrained model for supervised video instance segmentation tasks, exceeding DINO by 15.9% on YouTubeVIS-2019 in terms of APvideo.
[ { "created": "Mon, 28 Aug 2023 17:10:12 GMT", "version": "v1" } ]
2023-08-29
[ [ "Wang", "Xudong", "" ], [ "Misra", "Ishan", "" ], [ "Zeng", "Ziyun", "" ], [ "Girdhar", "Rohit", "" ], [ "Darrell", "Trevor", "" ] ]
Existing approaches to unsupervised video instance segmentation typically rely on motion estimates and experience difficulties tracking small or divergent motions. We present VideoCutLER, a simple method for unsupervised multi-instance video segmentation without using motion-based learning signals like optical flow or training on natural videos. Our key insight is that using high-quality pseudo masks and a simple video synthesis method for model training is surprisingly sufficient to enable the resulting video model to effectively segment and track multiple instances across video frames. We show the first competitive unsupervised learning results on the challenging YouTubeVIS-2019 benchmark, achieving 50.7% APvideo^50 , surpassing the previous state-of-the-art by a large margin. VideoCutLER can also serve as a strong pretrained model for supervised video instance segmentation tasks, exceeding DINO by 15.9% on YouTubeVIS-2019 in terms of APvideo.
2405.12865
Noriyoshi Sukegawa
Yuki Uehara, Naoki Nishimura, Yilin Li, Jie Yang, Deddy Jobson, Koya Ohashi, Takeshi Matsumoto, Noriyoshi Sukegawa, Yuichi Takano
Robust portfolio optimization model for electronic coupon allocation
9 pages, 17 figures, AAAI-2024 Workshop on Artificial Intelligence for Operations Research
null
null
null
cs.IR math.OC
http://creativecommons.org/licenses/by/4.0/
Currently, many e-commerce websites issue online/electronic coupons as an effective tool for promoting sales of various products and services. We focus on the problem of optimally allocating coupons to customers subject to a budget constraint on an e-commerce website. We apply a robust portfolio optimization model based on customer segmentation to the coupon allocation problem. We also validate the efficacy of our method through numerical experiments using actual data from randomly distributed coupons. Main contributions of our research are twofold. First, we handle six types of coupons, thereby making it extremely difficult to accurately estimate the difference in the effects of various coupons. Second, we demonstrate from detailed numerical results that the robust optimization model achieved larger uplifts of sales than did the commonly-used multiple-choice knapsack model and the conventional mean-variance optimization model. Our results open up great potential for robust portfolio optimization as an effective tool for practical coupon allocation.
[ { "created": "Tue, 21 May 2024 15:30:25 GMT", "version": "v1" } ]
2024-05-22
[ [ "Uehara", "Yuki", "" ], [ "Nishimura", "Naoki", "" ], [ "Li", "Yilin", "" ], [ "Yang", "Jie", "" ], [ "Jobson", "Deddy", "" ], [ "Ohashi", "Koya", "" ], [ "Matsumoto", "Takeshi", "" ], [ "Sukegawa", "Noriyoshi", "" ], [ "Takano", "Yuichi", "" ] ]
Currently, many e-commerce websites issue online/electronic coupons as an effective tool for promoting sales of various products and services. We focus on the problem of optimally allocating coupons to customers subject to a budget constraint on an e-commerce website. We apply a robust portfolio optimization model based on customer segmentation to the coupon allocation problem. We also validate the efficacy of our method through numerical experiments using actual data from randomly distributed coupons. Main contributions of our research are twofold. First, we handle six types of coupons, thereby making it extremely difficult to accurately estimate the difference in the effects of various coupons. Second, we demonstrate from detailed numerical results that the robust optimization model achieved larger uplifts of sales than did the commonly-used multiple-choice knapsack model and the conventional mean-variance optimization model. Our results open up great potential for robust portfolio optimization as an effective tool for practical coupon allocation.
2209.01943
Jianhui Ma
Jianhui Ma, Qiang Li, Zilong Liu, Linsong Du, Hongyang Chen, and Nirwan Ansari
Jamming Modulation: An Active Anti-Jamming Scheme
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Providing quality communications under adversarial electronic attacks, e.g., broadband jamming attacks, is a challenging task. Unlike state-of-the-art approaches which treat jamming signals as destructive interference, this paper presents a novel active anti-jamming (AAJ) scheme for a jammed channel to enhance the communication quality between a transmitter node (TN) and a receiver node (RN), where the TN actively exploits the jamming signal as a carrier to send messages. Specifically, the TN is equipped with a programmable-gain amplifier, which is capable of re-modulating the jamming signals for jamming modulation. Considering four typical jamming types, we derive both the bit error rates (BER) and the corresponding optimal detection thresholds of the AAJ scheme. The asymptotic performance of the AAJ scheme is discussed for the high jamming-to-noise-ratio (JNR) and high sampling-rate cases. Our analysis shows that there exists a BER floor for sufficiently large JNR. Simulation results indicate that the proposed AAJ scheme allows the TN to communicate with the RN reliably even under extremely strong and/or broadband jamming. Additionally, we investigate the channel capacity of the proposed AAJ scheme and show that it outperforms that of direct transmission when the JNR is relatively high.
[ { "created": "Mon, 5 Sep 2022 12:48:20 GMT", "version": "v1" } ]
2022-09-07
[ [ "Ma", "Jianhui", "" ], [ "Li", "Qiang", "" ], [ "Liu", "Zilong", "" ], [ "Du", "Linsong", "" ], [ "Chen", "Hongyang", "" ], [ "Ansari", "Nirwan", "" ] ]
Providing quality communications under adversarial electronic attacks, e.g., broadband jamming attacks, is a challenging task. Unlike state-of-the-art approaches which treat jamming signals as destructive interference, this paper presents a novel active anti-jamming (AAJ) scheme for a jammed channel to enhance the communication quality between a transmitter node (TN) and a receiver node (RN), where the TN actively exploits the jamming signal as a carrier to send messages. Specifically, the TN is equipped with a programmable-gain amplifier, which is capable of re-modulating the jamming signals for jamming modulation. Considering four typical jamming types, we derive both the bit error rates (BER) and the corresponding optimal detection thresholds of the AAJ scheme. The asymptotic performance of the AAJ scheme is discussed for the high jamming-to-noise-ratio (JNR) and high sampling-rate cases. Our analysis shows that there exists a BER floor for sufficiently large JNR. Simulation results indicate that the proposed AAJ scheme allows the TN to communicate with the RN reliably even under extremely strong and/or broadband jamming. Additionally, we investigate the channel capacity of the proposed AAJ scheme and show that it outperforms that of direct transmission when the JNR is relatively high.
2405.14548
Vinicius Luiz Santos Silva
Vinicius L S Silva, Geraldine Regnier, Pablo Salinas, Claire E Heaney, Matthew D Jackson, Christopher C Pain
Rapid modelling of reactive transport in porous media using machine learning: limitations and solutions
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reactive transport in porous media plays a pivotal role in subsurface reservoir processes, influencing fluid properties and geochemical characteristics. However, coupling fluid flow and transport with geochemical reactions is computationally intensive, requiring geochemical calculations at each grid cell and each time step within a discretized simulation domain. Although recent advancements have integrated machine learning techniques as surrogates for geochemical simulations, ensuring computational efficiency and accuracy remains a challenge. This chapter investigates machine learning models as replacements for a geochemical module in a reactive transport in porous media simulation. We test this approach on a well-documented cation exchange problem. While the surrogate models excel in isolated predictions, they fall short in rollout predictions over successive time steps. By introducing modifications, including physics-based constraints and tailored dataset generation strategies, we show that machine learning surrogates can achieve accurate rollout predictions. Our findings emphasize that, when judiciously designed, machine learning surrogates can substantially expedite the cation exchange problem without compromising accuracy, offering significant potential for a range of reactive transport applications.
[ { "created": "Thu, 23 May 2024 13:28:10 GMT", "version": "v1" } ]
2024-05-24
[ [ "Silva", "Vinicius L S", "" ], [ "Regnier", "Geraldine", "" ], [ "Salinas", "Pablo", "" ], [ "Heaney", "Claire E", "" ], [ "Jackson", "Matthew D", "" ], [ "Pain", "Christopher C", "" ] ]
Reactive transport in porous media plays a pivotal role in subsurface reservoir processes, influencing fluid properties and geochemical characteristics. However, coupling fluid flow and transport with geochemical reactions is computationally intensive, requiring geochemical calculations at each grid cell and each time step within a discretized simulation domain. Although recent advancements have integrated machine learning techniques as surrogates for geochemical simulations, ensuring computational efficiency and accuracy remains a challenge. This chapter investigates machine learning models as replacements for a geochemical module in a reactive transport in porous media simulation. We test this approach on a well-documented cation exchange problem. While the surrogate models excel in isolated predictions, they fall short in rollout predictions over successive time steps. By introducing modifications, including physics-based constraints and tailored dataset generation strategies, we show that machine learning surrogates can achieve accurate rollout predictions. Our findings emphasize that, when judiciously designed, machine learning surrogates can substantially expedite the cation exchange problem without compromising accuracy, offering significant potential for a range of reactive transport applications.
1808.07151
Luke Rodriguez
Luke Rodriguez, Babak Salimi, Haoyue Ping, Julia Stoyanovich, Bill Howe
MobilityMirror: Bias-Adjusted Transportation Datasets
Presented at BIDU 2018 workshop and published in Springer Communications in Computer and Information Science vol 926
Big Social Data and Urban Computing. BiDU 2018. Communications in Computer and Information Science, vol 926. Springer, Cham
10.1007/978-3-030-11238-7_2
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe customized synthetic datasets for publishing mobility data. Private companies are providing new transportation modalities, and their data is of high value for integrative transportation research, policy enforcement, and public accountability. However, these companies are disincentivized from sharing data not only to protect the privacy of individuals (drivers and/or passengers), but also to protect their own competitive advantage. Moreover, demographic biases arising from how the services are delivered may be amplified if released data is used in other contexts. We describe a model and algorithm for releasing origin-destination histograms that removes selected biases in the data using causality-based methods. We compute the origin-destination histogram of the original dataset then adjust the counts to remove undesirable causal relationships that can lead to discrimination or violate contractual obligations with data owners. We evaluate the utility of the algorithm on real data from a dockless bike share program in Seattle and taxi data in New York, and show that these adjusted transportation datasets can retain utility while removing bias in the underlying data.
[ { "created": "Tue, 21 Aug 2018 22:19:48 GMT", "version": "v1" }, { "created": "Thu, 23 Aug 2018 01:28:03 GMT", "version": "v2" }, { "created": "Fri, 25 Jan 2019 00:27:21 GMT", "version": "v3" } ]
2019-01-28
[ [ "Rodriguez", "Luke", "" ], [ "Salimi", "Babak", "" ], [ "Ping", "Haoyue", "" ], [ "Stoyanovich", "Julia", "" ], [ "Howe", "Bill", "" ] ]
We describe customized synthetic datasets for publishing mobility data. Private companies are providing new transportation modalities, and their data is of high value for integrative transportation research, policy enforcement, and public accountability. However, these companies are disincentivized from sharing data not only to protect the privacy of individuals (drivers and/or passengers), but also to protect their own competitive advantage. Moreover, demographic biases arising from how the services are delivered may be amplified if released data is used in other contexts. We describe a model and algorithm for releasing origin-destination histograms that removes selected biases in the data using causality-based methods. We compute the origin-destination histogram of the original dataset then adjust the counts to remove undesirable causal relationships that can lead to discrimination or violate contractual obligations with data owners. We evaluate the utility of the algorithm on real data from a dockless bike share program in Seattle and taxi data in New York, and show that these adjusted transportation datasets can retain utility while removing bias in the underlying data.
2007.07320
Tiansi Dong
Tiansi Dong, Chengjiang Li, Christian Bauckhage, Juanzi Li, Stefan Wrobel, Armin B. Cremers
Learning Syllogism with Euler Neural-Networks
16 pages, 6 figures
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional neural networks represent everything as a vector and can approximate a subset of logical reasoning to a certain degree. As basic logic relations are better represented by topological relations between regions, we propose a novel neural network that represents everything as a ball and is able to learn topological configurations as an Euler diagram; hence the name Euler Neural-Network (ENN). The central vector of a ball can inherit the representation power of a traditional neural network. ENN distinguishes four spatial statuses between balls, namely, being disconnected, being partially overlapped, being part of, and being an inverse part of. Within each status, ideal values are defined for efficient reasoning. A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises, from which a logical conclusion can be deduced. In contrast to traditional neural networks, ENN can precisely represent all 24 different structures of syllogism. Two large datasets are created: one, extracted from WordNet-3.0, covers all types of syllogistic reasoning; the other contains all family relations extracted from DBpedia. Experimental results confirm the superior power of ENN in logical representation and reasoning. Datasets and source code are available upon request.
[ { "created": "Tue, 14 Jul 2020 19:35:35 GMT", "version": "v1" }, { "created": "Mon, 20 Jul 2020 09:58:24 GMT", "version": "v2" } ]
2020-07-21
[ [ "Dong", "Tiansi", "" ], [ "Li", "Chengjiang", "" ], [ "Bauckhage", "Christian", "" ], [ "Li", "Juanzi", "" ], [ "Wrobel", "Stefan", "" ], [ "Cremers", "Armin B.", "" ] ]
Traditional neural networks represent everything as a vector and can approximate a subset of logical reasoning to a certain degree. As basic logic relations are better represented by topological relations between regions, we propose a novel neural network that represents everything as a ball and is able to learn topological configurations as an Euler diagram; hence the name Euler Neural-Network (ENN). The central vector of a ball can inherit the representation power of a traditional neural network. ENN distinguishes four spatial statuses between balls, namely, being disconnected, being partially overlapped, being part of, and being an inverse part of. Within each status, ideal values are defined for efficient reasoning. A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises, from which a logical conclusion can be deduced. In contrast to traditional neural networks, ENN can precisely represent all 24 different structures of syllogism. Two large datasets are created: one, extracted from WordNet-3.0, covers all types of syllogistic reasoning; the other contains all family relations extracted from DBpedia. Experimental results confirm the superior power of ENN in logical representation and reasoning. Datasets and source code are available upon request.
1704.03421
Malika Bendechache
Malika Bendechache, Nhien-An Le-Khac, M-Tahar Kechadi
Efficient Large Scale Clustering based on Data Partitioning
10 pages
Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on, 612--621, 2016
10.1109/DSAA.2016.70
null
cs.DB cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering techniques are very attractive for extracting and identifying patterns in datasets. However, their application to very large spatial datasets presents numerous challenges, such as high-dimensional data, heterogeneity, and the high complexity of some algorithms. For instance, some algorithms may have linear complexity, but they require domain knowledge in order to determine their input parameters. Distributed clustering techniques constitute a very good alternative for the big data challenges (e.g., Volume, Variety, Veracity, and Velocity). Usually these techniques consist of two phases: the first phase generates local models or patterns, and the second aggregates the local results to obtain global models. While the first phase can be executed in parallel on each site and is therefore efficient, the aggregation phase is complex and time-consuming, and may produce incorrect and ambiguous global clusters and therefore incorrect models. In this paper we propose a new distributed clustering approach that deals efficiently with both phases: the generation of local results and the generation of global models by aggregation. For the first phase, our approach is capable of analysing the datasets located at each site using different clustering techniques. The aggregation phase is designed in such a way that the final clusters are compact and accurate while the overall process is efficient in time and memory allocation. For the evaluation, we use two well-known clustering algorithms, K-Means and DBSCAN. One of the key outcomes of this distributed clustering technique is that the number of global clusters is dynamic and need not be fixed in advance. Experimental results show that the approach is scalable and produces high-quality results.
[ { "created": "Tue, 11 Apr 2017 17:05:01 GMT", "version": "v1" }, { "created": "Mon, 26 Feb 2018 15:23:31 GMT", "version": "v2" } ]
2018-02-27
[ [ "Bendechache", "Malika", "" ], [ "Le-Khac", "Nhien-An", "" ], [ "Kechadi", "M-Tahar", "" ] ]
Clustering techniques are very attractive for extracting and identifying patterns in datasets. However, their application to very large spatial datasets presents numerous challenges such as high-dimensional data, heterogeneity, and the high complexity of some algorithms. For instance, some algorithms may have linear complexity but require domain knowledge to determine their input parameters. Distributed clustering techniques constitute a very good alternative for addressing the big data challenges (e.g., Volume, Variety, Veracity, and Velocity). Usually these techniques consist of two phases. The first phase generates local models or patterns and the second aggregates the local results to obtain global models. While the first phase can be executed in parallel on each site and is therefore efficient, the aggregation phase is complex, time-consuming, and may produce incorrect and ambiguous global clusters and therefore incorrect models. In this paper we propose a new distributed clustering approach to deal efficiently with both phases: the generation of local results and the generation of global models by aggregation. For the first phase, our approach is capable of analysing the datasets located at each site using different clustering techniques. The aggregation phase is designed in such a way that the final clusters are compact and accurate while the overall process is efficient in time and memory allocation. For the evaluation, we use two well-known clustering algorithms, K-Means and DBSCAN. One of the key outputs of this distributed clustering technique is that the number of global clusters is dynamic and need not be fixed in advance. Experimental results show that the approach is scalable and produces high-quality results.
2306.06184
Aldo Pacchiano
Nataly Brukhim, Miroslav Dudik, Aldo Pacchiano, Robert Schapire
A Unified Model and Dimension for Interactive Estimation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study an abstract framework for interactive learning called interactive estimation in which the goal is to estimate a target from its "similarity" to points queried by the learner. We introduce a combinatorial measure called dissimilarity dimension which largely captures learnability in our model. We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension. We show that our framework subsumes and thereby unifies two classic learning models: statistical-query learning and structured bandits. We also delineate how the dissimilarity dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses.
[ { "created": "Fri, 9 Jun 2023 18:21:04 GMT", "version": "v1" } ]
2023-06-13
[ [ "Brukhim", "Nataly", "" ], [ "Dudik", "Miroslav", "" ], [ "Pacchiano", "Aldo", "" ], [ "Schapire", "Robert", "" ] ]
We study an abstract framework for interactive learning called interactive estimation in which the goal is to estimate a target from its "similarity" to points queried by the learner. We introduce a combinatorial measure called dissimilarity dimension which largely captures learnability in our model. We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension. We show that our framework subsumes and thereby unifies two classic learning models: statistical-query learning and structured bandits. We also delineate how the dissimilarity dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses.
2408.05792
Jiafeng Xia
Jiafeng Xia, Dongsheng Li, Hansu Gu, Tun Lu and Ning Gu
GraphTransfer: A Generic Feature Fusion Framework for Collaborative Filtering
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have demonstrated effectiveness in collaborative filtering tasks due to their ability to extract powerful structural features. However, combining the graph features extracted from user-item interactions and auxiliary features extracted from user genres and item properties remains a challenge. Currently available fusion methods face two major issues: 1) simple methods such as concatenation and summation are generic, but not accurate in capturing feature relationships; 2) task-specific methods like attention mechanisms and meta paths may not be suitable for general feature fusion. To address these challenges, we present GraphTransfer, a simple but universal feature fusion framework for GNN-based collaborative filtering. Our method accurately fuses different types of features by first extracting graph features from the user-item interaction graph and auxiliary features from users and items using GCN. The proposed cross fusion module then effectively bridges the semantic gaps between the interaction scores of different features. Theoretical analysis and experiments on public datasets show that GraphTransfer outperforms other feature fusion methods in CF tasks. Additionally, we demonstrate the universality of our framework via empirical studies in three other scenarios, showing that GraphTransfer leads to significant improvements in the performance of CF algorithms.
[ { "created": "Sun, 11 Aug 2024 14:47:34 GMT", "version": "v1" } ]
2024-08-13
[ [ "Xia", "Jiafeng", "" ], [ "Li", "Dongsheng", "" ], [ "Gu", "Hansu", "" ], [ "Lu", "Tun", "" ], [ "Gu", "Ning", "" ] ]
Graph Neural Networks (GNNs) have demonstrated effectiveness in collaborative filtering tasks due to their ability to extract powerful structural features. However, combining the graph features extracted from user-item interactions and auxiliary features extracted from user genres and item properties remains a challenge. Currently available fusion methods face two major issues: 1) simple methods such as concatenation and summation are generic, but not accurate in capturing feature relationships; 2) task-specific methods like attention mechanisms and meta paths may not be suitable for general feature fusion. To address these challenges, we present GraphTransfer, a simple but universal feature fusion framework for GNN-based collaborative filtering. Our method accurately fuses different types of features by first extracting graph features from the user-item interaction graph and auxiliary features from users and items using GCN. The proposed cross fusion module then effectively bridges the semantic gaps between the interaction scores of different features. Theoretical analysis and experiments on public datasets show that GraphTransfer outperforms other feature fusion methods in CF tasks. Additionally, we demonstrate the universality of our framework via empirical studies in three other scenarios, showing that GraphTransfer leads to significant improvements in the performance of CF algorithms.
1708.05520
Steffen Rechner
Steffen Rechner
An Optimal Realization Algorithm for Bipartite Graphs with Degrees in Prescribed Intervals
Submitted to the Journal of Discrete Algorithms
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of constructing a bipartite graph whose degrees lie in prescribed intervals. Necessary and sufficient conditions for the existence of such graphs are well-known. However, existing realization algorithms suffer from large running times. In this paper, we present a realization algorithm that constructs an appropriate bipartite graph G=(U,V,E) in O(|U| + |V| + |E|) time, which is asymptotically optimal. In addition, we show that our algorithm produces edge-minimal bipartite graphs and that it can easily be modified to construct edge-maximal graphs.
[ { "created": "Fri, 18 Aug 2017 06:58:37 GMT", "version": "v1" } ]
2017-08-21
[ [ "Rechner", "Steffen", "" ] ]
We consider the problem of constructing a bipartite graph whose degrees lie in prescribed intervals. Necessary and sufficient conditions for the existence of such graphs are well-known. However, existing realization algorithms suffer from large running times. In this paper, we present a realization algorithm that constructs an appropriate bipartite graph G=(U,V,E) in O(|U| + |V| + |E|) time, which is asymptotically optimal. In addition, we show that our algorithm produces edge-minimal bipartite graphs and that it can easily be modified to construct edge-maximal graphs.
2202.02705
Sumedh Vilas Datar
Sumedh Vilas Datar and Jesus Gonzales Bernal
Portrait Segmentation Using Deep Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
A portrait is a painting, drawing, photograph, or engraving of a person, especially one depicting only the face or head and shoulders. In the digital world, a portrait is captured by having the person as the subject of the image and capturing the image such that the background is blurred. DSLRs generally achieve this by widening the aperture to narrow the depth of field, focusing sharply on the subject and automatically blurring the background. In this paper, I present a novel approach to replicating the DSLR portrait mode on any smartphone to generate high-quality portrait images.
[ { "created": "Sun, 6 Feb 2022 04:28:15 GMT", "version": "v1" } ]
2022-02-08
[ [ "Datar", "Sumedh Vilas", "" ], [ "Bernal", "Jesus Gonzales", "" ] ]
A portrait is a painting, drawing, photograph, or engraving of a person, especially one depicting only the face or head and shoulders. In the digital world, a portrait is captured by having the person as the subject of the image and capturing the image such that the background is blurred. DSLRs generally achieve this by widening the aperture to narrow the depth of field, focusing sharply on the subject and automatically blurring the background. In this paper, I present a novel approach to replicating the DSLR portrait mode on any smartphone to generate high-quality portrait images.
2311.09787
EPTCS
Francesco Belardinelli (Imperial College London), Angelo Ferrando (University of Genoa), Vadim Malvone (Telecom Paris)
3vLTL: A Tool to Generate Automata for Three-valued LTL
In Proceedings FMAS 2023, arXiv:2311.08987
EPTCS 395, 2023, pp. 180-187
10.4204/EPTCS.395.13
null
cs.FL cs.AI
http://creativecommons.org/licenses/by/4.0/
Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively fewer model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted on a three-valued semantics. Given an LTL formula, a set of atomic propositions as the alphabet for the automaton, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Given the particular type of the output of the tool, it can also be seamlessly processed by third-party libraries in a natural way. That is, the Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
[ { "created": "Thu, 16 Nov 2023 11:04:38 GMT", "version": "v1" } ]
2023-11-17
[ [ "Belardinelli", "Francesco", "", "Imperial College London" ], [ "Ferrando", "Angelo", "", "University of Genoa" ], [ "Malvone", "Vadim", "", "Telecom Paris" ] ]
Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively fewer model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted on a three-valued semantics. Given an LTL formula, a set of atomic propositions as the alphabet for the automaton, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Given the particular type of the output of the tool, it can also be seamlessly processed by third-party libraries in a natural way. That is, the Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
2004.12496
Amit Levi
Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten
Learning and Testing Junta Distributions with Subcube Conditioning
null
null
null
null
cs.DS cs.DM cs.LG math.PR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution, where a distribution $p$ is a $k$-junta if its probability mass function $p(x)$ depends on a subset of at most $k$ variables. The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning [BC18, CCKLW20]. We give two applications: 1. An algorithm for learning $k$-junta distributions with $\tilde{O}(k/\epsilon^2) \log n + O(2^k/\epsilon^2)$ subcube conditioning queries, and 2. An algorithm for testing $k$-junta distributions with $\tilde{O}((k + \sqrt{n})/\epsilon^2)$ subcube conditioning queries. All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour, Blais, and Rubinfeld [ABR17].
[ { "created": "Sun, 26 Apr 2020 22:52:53 GMT", "version": "v1" } ]
2020-04-28
[ [ "Chen", "Xi", "" ], [ "Jayaram", "Rajesh", "" ], [ "Levi", "Amit", "" ], [ "Waingarten", "Erik", "" ] ]
We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution, where a distribution $p$ is a $k$-junta if its probability mass function $p(x)$ depends on a subset of at most $k$ variables. The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning [BC18, CCKLW20]. We give two applications: 1. An algorithm for learning $k$-junta distributions with $\tilde{O}(k/\epsilon^2) \log n + O(2^k/\epsilon^2)$ subcube conditioning queries, and 2. An algorithm for testing $k$-junta distributions with $\tilde{O}((k + \sqrt{n})/\epsilon^2)$ subcube conditioning queries. All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour, Blais, and Rubinfeld [ABR17].
2010.07987
Nadezhda Chirkova
Nadezhda Chirkova, Sergey Troshin
Empirical Study of Transformers for Source Code
Published at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering 2021 (ESEC/FSE'21)
null
10.1145/3468264.3468611
null
cs.LG cs.CL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model.
[ { "created": "Thu, 15 Oct 2020 19:09:15 GMT", "version": "v1" }, { "created": "Thu, 24 Jun 2021 11:32:30 GMT", "version": "v2" } ]
2021-06-25
[ [ "Chirkova", "Nadezhda", "" ], [ "Troshin", "Sergey", "" ] ]
Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model.
1109.2984
Baoqi Huang Mr
Baoqi Huang, Tao Li, Brian D.O. Anderson, Changbin Yu
A Statistically Modelling Method for Performance Limits in Sensor Localization
null
Automatica, Volume 49, Issue 2, 2013, Pages 503-509
10.1016/j.automatica.2012.11.011
null
cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study performance limits of sensor localization from a novel perspective. Specifically, we consider the Cramer-Rao Lower Bound (CRLB) in single-hop sensor localization using measurements from received signal strength (RSS), time of arrival (TOA) and bearing, respectively, but, unlike existing work, we statistically analyze the trace of the associated CRLB matrix (i.e. as a scalar metric for performance limits of sensor localization) by assuming anchor locations are random. By the Central Limit Theorems for $U$-statistics, we show that as the number of anchors increases, this scalar metric is asymptotically normal in the RSS/bearing case, and converges to a random variable which is an affine transformation of a chi-square random variable of degree 2 in the TOA case. Moreover, we provide formulas quantitatively describing the relationship among the mean and standard deviation of the scalar metric, the number of anchors, the parameters of communication channels, the noise statistics in measurements and the spatial distribution of the anchors. These formulas, though asymptotic in the number of anchors, in many cases turn out to be remarkably accurate in predicting performance limits, even if the number is small. Simulations are carried out to confirm our results.
[ { "created": "Wed, 14 Sep 2011 03:40:31 GMT", "version": "v1" } ]
2018-01-30
[ [ "Huang", "Baoqi", "" ], [ "Li", "Tao", "" ], [ "Anderson", "Brian D. O.", "" ], [ "Yu", "Changbin", "" ] ]
In this paper, we study performance limits of sensor localization from a novel perspective. Specifically, we consider the Cramer-Rao Lower Bound (CRLB) in single-hop sensor localization using measurements from received signal strength (RSS), time of arrival (TOA) and bearing, respectively, but, unlike existing work, we statistically analyze the trace of the associated CRLB matrix (i.e. as a scalar metric for performance limits of sensor localization) by assuming anchor locations are random. By the Central Limit Theorems for $U$-statistics, we show that as the number of anchors increases, this scalar metric is asymptotically normal in the RSS/bearing case, and converges to a random variable which is an affine transformation of a chi-square random variable of degree 2 in the TOA case. Moreover, we provide formulas quantitatively describing the relationship among the mean and standard deviation of the scalar metric, the number of anchors, the parameters of communication channels, the noise statistics in measurements and the spatial distribution of the anchors. These formulas, though asymptotic in the number of anchors, in many cases turn out to be remarkably accurate in predicting performance limits, even if the number is small. Simulations are carried out to confirm our results.
2101.06883
Guangyu Huo
Guangyu Huo, Yong Zhang, Junbin Gao, Boyue Wang, Yongli Hu, and Baocai Yin
CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional Network for Clustering
null
IEEE Transactions on Knowledge and Data Engineering 2021
10.1109/TKDE.2021.3125020
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the powerful learning ability of deep convolutional networks, deep clustering methods can extract the most discriminative information from individual data and produce more satisfactory clustering results. However, existing deep clustering methods usually ignore the relationships between the data. Fortunately, the graph convolutional network can handle such relationships, opening up a new research direction for deep clustering. In this paper, we propose a cross-attention based deep clustering framework, named Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN), which contains four main modules: the cross-attention fusion module, which innovatively concatenates the Content Auto-encoder module (CAE) relating to the individual data and the Graph Convolutional Auto-encoder module (GAE) relating to the relationships between the data in a layer-by-layer manner, and the self-supervised module that highlights the discriminative information for clustering tasks. While the cross-attention fusion module fuses two kinds of heterogeneous representations, the CAE module supplements the content information for the GAE module, which avoids the over-smoothing problem of GCN. In the GAE module, two novel loss functions are proposed that reconstruct the content and the relationships between the data, respectively. Finally, the self-supervised module constrains the distributions of the middle layer representations of CAE and GAE to be consistent. Experimental results on different types of datasets prove the superiority and robustness of the proposed CaEGCN.
[ { "created": "Mon, 18 Jan 2021 05:21:59 GMT", "version": "v1" } ]
2022-01-10
[ [ "Huo", "Guangyu", "" ], [ "Zhang", "Yong", "" ], [ "Gao", "Junbin", "" ], [ "Wang", "Boyue", "" ], [ "Hu", "Yongli", "" ], [ "Yin", "Baocai", "" ] ]
With the powerful learning ability of deep convolutional networks, deep clustering methods can extract the most discriminative information from individual data and produce more satisfactory clustering results. However, existing deep clustering methods usually ignore the relationships between the data. Fortunately, the graph convolutional network can handle such relationships, opening up a new research direction for deep clustering. In this paper, we propose a cross-attention based deep clustering framework, named Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN), which contains four main modules: the cross-attention fusion module, which innovatively concatenates the Content Auto-encoder module (CAE) relating to the individual data and the Graph Convolutional Auto-encoder module (GAE) relating to the relationships between the data in a layer-by-layer manner, and the self-supervised module that highlights the discriminative information for clustering tasks. While the cross-attention fusion module fuses two kinds of heterogeneous representations, the CAE module supplements the content information for the GAE module, which avoids the over-smoothing problem of GCN. In the GAE module, two novel loss functions are proposed that reconstruct the content and the relationships between the data, respectively. Finally, the self-supervised module constrains the distributions of the middle layer representations of CAE and GAE to be consistent. Experimental results on different types of datasets prove the superiority and robustness of the proposed CaEGCN.
2408.00395
Cansu Betin Onur
Cansu Betin Onur
A Zero-Knowledge Proof of Knowledge for Subgroup Distance Problem
null
null
null
null
cs.CR math.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this study, we introduce a novel zero-knowledge identification scheme based on the hardness of the subgroup distance problem in the Hamming metric. The proposed protocol, named Subgroup Distance Zero Knowledge Proof (SDZKP), employs a cryptographically secure pseudorandom number generator to mask secrets and utilizes a Stern-type algorithm to ensure robust security properties.
[ { "created": "Thu, 1 Aug 2024 09:04:50 GMT", "version": "v1" } ]
2024-08-02
[ [ "Onur", "Cansu Betin", "" ] ]
In this study, we introduce a novel zero-knowledge identification scheme based on the hardness of the subgroup distance problem in the Hamming metric. The proposed protocol, named Subgroup Distance Zero Knowledge Proof (SDZKP), employs a cryptographically secure pseudorandom number generator to mask secrets and utilizes a Stern-type algorithm to ensure robust security properties.
2103.04357
Lei Sun
Lei Sun
IRON: Invariant-based Highly Robust Point Cloud Registration
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, we present IRON (Invariant-based global Robust estimation and OptimizatioN), a non-minimal and highly robust solution for point cloud registration with a great number of outliers among the correspondences. To realize this, we decouple the registration problem into the estimation of scale, rotation and translation, respectively. Our first contribution is to propose RANSIC (RANdom Samples with Invariant Compatibility), which employs the invariant compatibility to seek inliers from random samples and robustly estimates the scale between two sets of point clouds in the meantime. Once the scale is estimated, our second contribution is to relax the non-convex global registration problem into a convex Semi-Definite Program (SDP) in a certifiable way using Sum-of-Squares (SOS) Relaxation and show that the relaxation is tight. For robust estimation, we further propose RT-GNC (Rough Trimming and Graduated Non-Convexity), a global outlier rejection heuristic having better robustness and time-efficiency than traditional GNC, as our third contribution. With these contributions, we can render our registration algorithm, IRON. Through experiments over real datasets, we show that IRON is efficient, highly accurate and robust against as many as 99% outliers whether the scale is known or unknown, outperforming the existing state-of-the-art algorithms.
[ { "created": "Sun, 7 Mar 2021 13:46:56 GMT", "version": "v1" }, { "created": "Wed, 24 Mar 2021 07:46:14 GMT", "version": "v2" } ]
2021-04-21
[ [ "Sun", "Lei", "" ] ]
In this paper, we present IRON (Invariant-based global Robust estimation and OptimizatioN), a non-minimal and highly robust solution for point cloud registration with a great number of outliers among the correspondences. To realize this, we decouple the registration problem into the estimation of scale, rotation and translation, respectively. Our first contribution is to propose RANSIC (RANdom Samples with Invariant Compatibility), which employs the invariant compatibility to seek inliers from random samples and robustly estimates the scale between two sets of point clouds in the meantime. Once the scale is estimated, our second contribution is to relax the non-convex global registration problem into a convex Semi-Definite Program (SDP) in a certifiable way using Sum-of-Squares (SOS) Relaxation and show that the relaxation is tight. For robust estimation, we further propose RT-GNC (Rough Trimming and Graduated Non-Convexity), a global outlier rejection heuristic having better robustness and time-efficiency than traditional GNC, as our third contribution. With these contributions, we can render our registration algorithm, IRON. Through experiments over real datasets, we show that IRON is efficient, highly accurate and robust against as many as 99% outliers whether the scale is known or unknown, outperforming the existing state-of-the-art algorithms.
2112.02749
Suzhen Wang
Suzhen Wang, Lincheng Li, Yu Ding, Xin Yu
One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning
Accepted by AAAI 2022
AAAI 2022
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Audio-driven one-shot talking face generation methods are usually trained on video resources of various persons. However, their created videos often suffer from unnatural mouth shapes and asynchronous lips because those methods struggle to learn a consistent speech style from different speakers. We observe that it would be much easier to learn a consistent speech style from a specific speaker, which leads to authentic mouth movements. Hence, we propose a novel one-shot talking face generation framework by exploring consistent correlations between audio and visual motions from a specific speaker and then transferring audio-driven motion fields to a reference image. Specifically, we develop an Audio-Visual Correlation Transformer (AVCT) that aims to infer talking motions represented by keypoint based dense motion fields from input audio. In particular, considering audio may come from different identities in deployment, we incorporate phonemes to represent audio signals. In this manner, our AVCT can inherently generalize to audio spoken by other identities. Moreover, as face keypoints are used to represent speakers, AVCT is agnostic to the appearance of the training speaker, and thus allows us to manipulate face images of different identities readily. Considering different face shapes lead to different motions, a motion field transfer module is exploited to reduce the audio-driven dense motion field gap between the training identity and the one-shot reference. Once we obtain the dense motion field of the reference image, we employ an image renderer to generate its talking face videos from an audio clip. Thanks to our learned consistent speaking style, our method generates authentic mouth shapes and vivid movements. Extensive experiments demonstrate that our synthesized videos outperform the state-of-the-art in terms of visual quality and lip-sync.
[ { "created": "Mon, 6 Dec 2021 02:53:51 GMT", "version": "v1" } ]
2021-12-07
[ [ "Wang", "Suzhen", "" ], [ "Li", "Lincheng", "" ], [ "Ding", "Yu", "" ], [ "Yu", "Xin", "" ] ]
Audio-driven one-shot talking face generation methods are usually trained on video resources of various persons. However, their created videos often suffer from unnatural mouth shapes and asynchronous lips because those methods struggle to learn a consistent speech style from different speakers. We observe that it would be much easier to learn a consistent speech style from a specific speaker, which leads to authentic mouth movements. Hence, we propose a novel one-shot talking face generation framework by exploring consistent correlations between audio and visual motions from a specific speaker and then transferring audio-driven motion fields to a reference image. Specifically, we develop an Audio-Visual Correlation Transformer (AVCT) that aims to infer talking motions represented by keypoint based dense motion fields from input audio. In particular, considering audio may come from different identities in deployment, we incorporate phonemes to represent audio signals. In this manner, our AVCT can inherently generalize to audio spoken by other identities. Moreover, as face keypoints are used to represent speakers, AVCT is agnostic to the appearance of the training speaker, and thus allows us to manipulate face images of different identities readily. Considering different face shapes lead to different motions, a motion field transfer module is exploited to reduce the audio-driven dense motion field gap between the training identity and the one-shot reference. Once we obtain the dense motion field of the reference image, we employ an image renderer to generate its talking face videos from an audio clip. Thanks to our learned consistent speaking style, our method generates authentic mouth shapes and vivid movements. Extensive experiments demonstrate that our synthesized videos outperform the state-of-the-art in terms of visual quality and lip-sync.
2206.04789
Tianxin Wei
Tianxin Wei, Jingrui He
Comprehensive Fair Meta-learned Recommender System
Accepted to SIGKDD 2022
null
10.1145/3534678.3539269
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recommender systems, one common challenge is the cold-start problem, where interactions are very limited for fresh users in the systems. To address this challenge, recently, many works introduce the meta-optimization idea into the recommendation scenarios, i.e., learning to learn the user preference from only a few past interaction items. The core idea is to learn global shared meta-initialization parameters for all users and rapidly adapt them into local parameters for each user respectively. They aim at deriving general knowledge across preference learning of various users, so as to rapidly adapt to the future new user with the learned prior and a small amount of training data. However, previous works have shown that recommender systems are generally vulnerable to bias and unfairness. Despite the success of meta-learning at improving the recommendation performance with cold-start, the fairness issues are largely overlooked. In this paper, we propose a comprehensive fair meta-learning framework, named CLOVER, for ensuring the fairness of meta-learned recommendation models. We systematically study three kinds of fairness - individual fairness, counterfactual fairness, and group fairness in the recommender systems, and propose to satisfy all three kinds via a multi-task adversarial learning scheme. Our framework offers a generic training paradigm that is applicable to different meta-learned recommender systems. We demonstrate the effectiveness of CLOVER on the representative meta-learned user preference estimator on three real-world data sets. Empirical results show that CLOVER achieves comprehensive fairness without deteriorating the overall cold-start recommendation performance.
[ { "created": "Thu, 9 Jun 2022 22:48:35 GMT", "version": "v1" } ]
2022-06-13
[ [ "Wei", "Tianxin", "" ], [ "He", "Jingrui", "" ] ]
In recommender systems, one common challenge is the cold-start problem, where interactions are very limited for fresh users in the systems. To address this challenge, recently, many works introduce the meta-optimization idea into the recommendation scenarios, i.e., learning to learn the user preference from only a few past interaction items. The core idea is to learn global shared meta-initialization parameters for all users and rapidly adapt them into local parameters for each user respectively. They aim at deriving general knowledge across preference learning of various users, so as to rapidly adapt to the future new user with the learned prior and a small amount of training data. However, previous works have shown that recommender systems are generally vulnerable to bias and unfairness. Despite the success of meta-learning at improving the recommendation performance with cold-start, the fairness issues are largely overlooked. In this paper, we propose a comprehensive fair meta-learning framework, named CLOVER, for ensuring the fairness of meta-learned recommendation models. We systematically study three kinds of fairness - individual fairness, counterfactual fairness, and group fairness in the recommender systems, and propose to satisfy all three kinds via a multi-task adversarial learning scheme. Our framework offers a generic training paradigm that is applicable to different meta-learned recommender systems. We demonstrate the effectiveness of CLOVER on the representative meta-learned user preference estimator on three real-world data sets. Empirical results show that CLOVER achieves comprehensive fairness without deteriorating the overall cold-start recommendation performance.
1311.4809
Siddhartan Govindasamy
Siddhartan Govindasamy
Uplink Performance of Large Optimum-Combining Antenna Arrays in Poisson-Cell Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The uplink of a wireless network with base stations distributed according to a Poisson Point Process (PPP) is analyzed. The base stations are assumed to have a large number of antennas and use linear minimum-mean-square-error (MMSE) spatial processing for multiple access. The number of active mobiles per cell is limited to permit channel estimation using pilot sequences that are orthogonal in each cell. The cumulative distribution function (CDF) of a randomly located link in a typical cell of such a system is derived when accurate channel estimation is available. A simple bound is provided for the spectral efficiency when channel estimates suffer from pilot contamination. The results provide insight into the performance of so-called massive Multiple-Input-Multiple-Output (MIMO) systems in spatially distributed cellular networks.
[ { "created": "Tue, 19 Nov 2013 17:17:16 GMT", "version": "v1" } ]
2013-11-20
[ [ "Govindasamy", "Siddhartan", "" ] ]
The uplink of a wireless network with base stations distributed according to a Poisson Point Process (PPP) is analyzed. The base stations are assumed to have a large number of antennas and use linear minimum-mean-square-error (MMSE) spatial processing for multiple access. The number of active mobiles per cell is limited to permit channel estimation using pilot sequences that are orthogonal in each cell. The cumulative distribution function (CDF) of a randomly located link in a typical cell of such a system is derived when accurate channel estimation is available. A simple bound is provided for the spectral efficiency when channel estimates suffer from pilot contamination. The results provide insight into the performance of so-called massive Multiple-Input-Multiple-Output (MIMO) systems in spatially distributed cellular networks.
1601.03055
Yuqing Hou
Yuqing Hou, Zhouchen Lin, Jin-ge Yao
Subspace Clustering Based Tag Sharing for Inductive Tag Matrix Refinement with Complex Errors
4 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Annotating images with tags is useful for indexing and retrieving images. However, many available annotation data include missing or inaccurate annotations. In this paper, we propose an image annotation framework which sequentially performs tag completion and refinement. We utilize the subspace property of data via sparse subspace clustering for tag completion. Then we propose a novel matrix completion model for tag refinement, integrating visual correlation, semantic correlation and the novelly studied property of complex errors. The proposed method outperforms the state-of-the-art approaches on multiple benchmark datasets even when they contain certain levels of annotation noise.
[ { "created": "Tue, 12 Jan 2016 21:03:43 GMT", "version": "v1" }, { "created": "Tue, 1 Mar 2016 04:41:53 GMT", "version": "v2" }, { "created": "Tue, 21 Jun 2016 15:48:06 GMT", "version": "v3" } ]
2016-06-22
[ [ "Hou", "Yuqing", "" ], [ "Lin", "Zhouchen", "" ], [ "Yao", "Jin-ge", "" ] ]
Annotating images with tags is useful for indexing and retrieving images. However, many available annotation data include missing or inaccurate annotations. In this paper, we propose an image annotation framework which sequentially performs tag completion and refinement. We utilize the subspace property of data via sparse subspace clustering for tag completion. Then we propose a novel matrix completion model for tag refinement, integrating visual correlation, semantic correlation and the novelly studied property of complex errors. The proposed method outperforms the state-of-the-art approaches on multiple benchmark datasets even when they contain certain levels of annotation noise.
2305.13864
Qiong Chen
Yong Yang and Qiong Chen and Yuan Feng and Tianlin Huang
MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
Accepted to CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Existing few-shot segmentation methods are based on the meta-learning strategy and extract instance knowledge from a support set and then apply the knowledge to segment target objects in a query set. However, the extracted knowledge is insufficient to cope with the variable intra-class differences since the knowledge is obtained from a few samples in the support set. To address the problem, we propose a multi-information aggregation network (MIANet) that effectively leverages the general knowledge, i.e., semantic word embeddings, and instance information for accurate segmentation. Specifically, in MIANet, a general information module (GIM) is proposed to extract a general class prototype from word embeddings as a supplement to instance information. To this end, we design a triplet loss that treats the general class prototype as an anchor and samples positive-negative pairs from local features in the support set. The calculated triplet loss can transfer semantic similarities among language identities from a word embedding space to a visual representation space. To alleviate the model biasing towards the seen training classes and to obtain multi-scale information, we then introduce a non-parametric hierarchical prior module (HPM) to generate unbiased instance-level information via calculating the pixel-level similarity between the support and query image features. Finally, an information fusion module (IFM) combines the general and instance information to make predictions for the query image. Extensive experiments on PASCAL-5i and COCO-20i show that MIANet yields superior performance and sets a new state-of-the-art. Code is available at https://github.com/Aldrich2y/MIANet.
[ { "created": "Tue, 23 May 2023 09:36:27 GMT", "version": "v1" } ]
2023-05-24
[ [ "Yang", "Yong", "" ], [ "Chen", "Qiong", "" ], [ "Feng", "Yuan", "" ], [ "Huang", "Tianlin", "" ] ]
Existing few-shot segmentation methods are based on the meta-learning strategy and extract instance knowledge from a support set and then apply the knowledge to segment target objects in a query set. However, the extracted knowledge is insufficient to cope with the variable intra-class differences since the knowledge is obtained from a few samples in the support set. To address the problem, we propose a multi-information aggregation network (MIANet) that effectively leverages the general knowledge, i.e., semantic word embeddings, and instance information for accurate segmentation. Specifically, in MIANet, a general information module (GIM) is proposed to extract a general class prototype from word embeddings as a supplement to instance information. To this end, we design a triplet loss that treats the general class prototype as an anchor and samples positive-negative pairs from local features in the support set. The calculated triplet loss can transfer semantic similarities among language identities from a word embedding space to a visual representation space. To alleviate the model biasing towards the seen training classes and to obtain multi-scale information, we then introduce a non-parametric hierarchical prior module (HPM) to generate unbiased instance-level information via calculating the pixel-level similarity between the support and query image features. Finally, an information fusion module (IFM) combines the general and instance information to make predictions for the query image. Extensive experiments on PASCAL-5i and COCO-20i show that MIANet yields superior performance and sets a new state-of-the-art. Code is available at https://github.com/Aldrich2y/MIANet.
2406.04910
Binglei Lou
Binglei Lou, Richard Rademacher, David Boland, Philip H.W. Leong
PolyLUT-Add: FPGA-based LUT Inference with Wide Inputs
To be published in the International Conference on Field-Programmable Logic and Applications (FPL) 2024
null
null
null
cs.LG cs.AI cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FPGAs have distinct advantages as a technology for deploying deep neural networks (DNNs) at the edge. Lookup Table (LUT) based networks, where neurons are directly modelled using LUTs, help maximize this promise of offering ultra-low latency and high area efficiency on FPGAs. Unfortunately, LUT resource usage scales exponentially with the number of inputs to the LUT, restricting PolyLUT to small LUT sizes. This work introduces PolyLUT-Add, a technique that enhances neuron connectivity by combining $A$ PolyLUT sub-neurons via addition to improve accuracy. Moreover, we describe a novel architecture to improve its scalability. We evaluated our implementation over the MNIST, Jet Substructure classification and Network Intrusion Detection benchmarks and found that for similar accuracy, PolyLUT-Add achieves a LUT reduction of $1.3-7.7\times$ with a $1.2-2.2\times$ decrease in latency.
[ { "created": "Fri, 7 Jun 2024 13:00:57 GMT", "version": "v1" } ]
2024-06-10
[ [ "Lou", "Binglei", "" ], [ "Rademacher", "Richard", "" ], [ "Boland", "David", "" ], [ "Leong", "Philip H. W.", "" ] ]
FPGAs have distinct advantages as a technology for deploying deep neural networks (DNNs) at the edge. Lookup Table (LUT) based networks, where neurons are directly modelled using LUTs, help maximize this promise of offering ultra-low latency and high area efficiency on FPGAs. Unfortunately, LUT resource usage scales exponentially with the number of inputs to the LUT, restricting PolyLUT to small LUT sizes. This work introduces PolyLUT-Add, a technique that enhances neuron connectivity by combining $A$ PolyLUT sub-neurons via addition to improve accuracy. Moreover, we describe a novel architecture to improve its scalability. We evaluated our implementation over the MNIST, Jet Substructure classification and Network Intrusion Detection benchmarks and found that for similar accuracy, PolyLUT-Add achieves a LUT reduction of $1.3-7.7\times$ with a $1.2-2.2\times$ decrease in latency.
2212.03191
Yi Wang
Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang, Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang, Sen Xing, Guo Chen, Junting Pan, Jiashuo Yu, Yali Wang, Limin Wang, Yu Qiao
InternVideo: General Video Foundation Models via Generative and Discriminative Learning
technical report
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. Especially, our methods can obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo .
[ { "created": "Tue, 6 Dec 2022 18:09:49 GMT", "version": "v1" }, { "created": "Wed, 7 Dec 2022 12:20:55 GMT", "version": "v2" } ]
2022-12-08
[ [ "Wang", "Yi", "" ], [ "Li", "Kunchang", "" ], [ "Li", "Yizhuo", "" ], [ "He", "Yinan", "" ], [ "Huang", "Bingkun", "" ], [ "Zhao", "Zhiyu", "" ], [ "Zhang", "Hongjie", "" ], [ "Xu", "Jilan", "" ], [ "Liu", "Yi", "" ], [ "Wang", "Zun", "" ], [ "Xing", "Sen", "" ], [ "Chen", "Guo", "" ], [ "Pan", "Junting", "" ], [ "Yu", "Jiashuo", "" ], [ "Wang", "Yali", "" ], [ "Wang", "Limin", "" ], [ "Qiao", "Yu", "" ] ]
The foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. Especially, our methods can obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo .
2407.09766
Stephen Jimmy
Stephen Jimmy and Kalkidan Berhane and Kevin Muhammad
User Digital Twin-Driven Video Streaming for Customized Preferences and Adaptive Transcoding
null
null
null
null
cs.MM cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the rapidly evolving field of multimedia services, video streaming has become increasingly prevalent, demanding innovative solutions to enhance user experience and system efficiency. This paper introduces a novel approach that integrates user digital twins-a dynamic digital representation of a user's preferences and behaviors-with traditional video streaming systems. We explore the potential of this integration to dynamically adjust video preferences and optimize transcoding processes according to real-time data. The methodology leverages advanced machine learning algorithms to continuously update the user's digital twin, which in turn informs the transcoding service to adapt video parameters for optimal quality and minimal buffering. Experimental results show that our approach not only improves the personalization of content delivery but also significantly enhances the overall efficiency of video streaming services by reducing bandwidth usage and improving video playback quality. The implications of such advancements suggest a shift towards more adaptive, user-centric multimedia services, potentially transforming how video content is consumed and delivered.
[ { "created": "Sat, 13 Jul 2024 04:15:02 GMT", "version": "v1" } ]
2024-07-16
[ [ "Jimmy", "Stephen", "" ], [ "Berhane", "Kalkidan", "" ], [ "Muhammad", "Kevin", "" ] ]
In the rapidly evolving field of multimedia services, video streaming has become increasingly prevalent, demanding innovative solutions to enhance user experience and system efficiency. This paper introduces a novel approach that integrates user digital twins-a dynamic digital representation of a user's preferences and behaviors-with traditional video streaming systems. We explore the potential of this integration to dynamically adjust video preferences and optimize transcoding processes according to real-time data. The methodology leverages advanced machine learning algorithms to continuously update the user's digital twin, which in turn informs the transcoding service to adapt video parameters for optimal quality and minimal buffering. Experimental results show that our approach not only improves the personalization of content delivery but also significantly enhances the overall efficiency of video streaming services by reducing bandwidth usage and improving video playback quality. The implications of such advancements suggest a shift towards more adaptive, user-centric multimedia services, potentially transforming how video content is consumed and delivered.
1211.4935
Keehang Kwon
Keehang Kwon and Daeseong Kang
Mutually Exclusive Rules in LogicWeb
4 pages
null
null
null
cs.LO cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LogicWeb has traditionally lacked devices for expressing mutually exclusive clauses. We address this limitation by adopting choice-conjunctive clauses of the form $D_0 \adc D_1$ where $D_0, D_1$ are Horn clauses and $\adc$ is a linear logic connective. Solving a goal $G$ using $D_0 \adc D_1$ -- $\prov(D_0 \adc D_1,G)$ -- has the following operational semantics: choose a successful one between $\prov(D_0,G)$ and $\prov(D_1,G)$. In other words, if $D_0$ is chosen in the course of solving $G$, then $D_1$ will be discarded and vice versa. Hence, the class of choice-conjunctive clauses precisely captures the notion of mutually exclusive clauses.
[ { "created": "Wed, 21 Nov 2012 04:54:05 GMT", "version": "v1" } ]
2012-11-22
[ [ "Kwon", "Keehang", "" ], [ "Kang", "Daeseong", "" ] ]
LogicWeb has traditionally lacked devices for expressing mutually exclusive clauses. We address this limitation by adopting choice-conjunctive clauses of the form $D_0 \adc D_1$ where $D_0, D_1$ are Horn clauses and $\adc$ is a linear logic connective. Solving a goal $G$ using $D_0 \adc D_1$ -- $\prov(D_0 \adc D_1,G)$ -- has the following operational semantics: choose a successful one between $\prov(D_0,G)$ and $\prov(D_1,G)$. In other words, if $D_0$ is chosen in the course of solving $G$, then $D_1$ will be discarded and vice versa. Hence, the class of choice-conjunctive clauses precisely captures the notion of mutually exclusive clauses.
0907.0340
James Whitacre
James M. Whitacre, Hussein A. Abbass, Ruhul Sarker, Axel Bender, Stephen Baker
Strategic Positioning in Tactical Scenario Planning
null
Genetic And Evolutionary Computation Conference 2008, Pages 1081-1088
10.1145/1389095.1389293
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/3.0/
Capability planning problems are pervasive throughout many areas of human interest with prominent examples found in defense and security. Planning provides a unique context for optimization that has not been explored in great detail and involves a number of interesting challenges which are distinct from traditional optimization research. Planning problems demand solutions that can satisfy a number of competing objectives on multiple scales related to robustness, adaptiveness, risk, etc. The scenario method is a key approach for planning. Scenarios can be defined for long-term as well as short-term plans. This paper introduces computational scenario-based planning problems and proposes ways to accommodate strategic positioning within the tactical planning domain. We demonstrate the methodology in a resource planning problem that is solved with a multi-objective evolutionary algorithm. Our discussion and results highlight the fact that scenario-based planning is naturally framed within a multi-objective setting. However, the conflicting objectives occur on different system levels rather than within a single system alone. This paper also contends that planning problems are of vital interest in many human endeavors and that Evolutionary Computation may be well positioned for this problem domain.
[ { "created": "Thu, 2 Jul 2009 10:56:52 GMT", "version": "v1" } ]
2009-07-03
[ [ "Whitacre", "James M.", "" ], [ "Abbass", "Hussein A.", "" ], [ "Sarker", "Ruhul", "" ], [ "Bender", "Axel", "" ], [ "Baker", "Stephen", "" ] ]
Capability planning problems are pervasive throughout many areas of human interest with prominent examples found in defense and security. Planning provides a unique context for optimization that has not been explored in great detail and involves a number of interesting challenges which are distinct from traditional optimization research. Planning problems demand solutions that can satisfy a number of competing objectives on multiple scales related to robustness, adaptiveness, risk, etc. The scenario method is a key approach for planning. Scenarios can be defined for long-term as well as short-term plans. This paper introduces computational scenario-based planning problems and proposes ways to accommodate strategic positioning within the tactical planning domain. We demonstrate the methodology in a resource planning problem that is solved with a multi-objective evolutionary algorithm. Our discussion and results highlight the fact that scenario-based planning is naturally framed within a multi-objective setting. However, the conflicting objectives occur on different system levels rather than within a single system alone. This paper also contends that planning problems are of vital interest in many human endeavors and that Evolutionary Computation may be well positioned for this problem domain.
1709.07807
Juan Pablo Vigneaux
Juan Pablo Vigneaux
Information structures and their cohomology
54 pages, 1 figure. This improved version was finally published in Theory and Applications of Categories. It took into account multiple suggestion of the reviewer
Theory and Applications of Categories, Vol. 35, 2020, No. 38, pp 1476-1529
null
null
cs.IT math.AT math.IT math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the category of information structures, whose objects are suitable diagrams of measurable sets that encode the possible outputs of a given family of observables and their mutual relationships of refinement; they serve as mathematical models of contextuality in classical and quantum settings. Each information structure can be regarded as a ringed site with trivial topology; the structure ring is generated by the observables themselves and its multiplication corresponds to joint measurement. We extend Baudot and Bennequin's definition of information cohomology to this setting, as a derived functor in the category of modules over the structure ring, and show explicitly that the bar construction gives a projective resolution in that category, recovering in this way the cochain complexes previously considered in the literature. Finally, we study the particular case of a one-parameter family of coefficients made of functions of probability distributions. The only 1-cocycles are Shannon entropy or Tsallis $\alpha$-entropy, depending on the value of the parameter.
[ { "created": "Fri, 22 Sep 2017 15:19:11 GMT", "version": "v1" }, { "created": "Fri, 12 Oct 2018 09:39:13 GMT", "version": "v2" }, { "created": "Tue, 15 Oct 2019 17:09:25 GMT", "version": "v3" }, { "created": "Mon, 8 Nov 2021 18:52:00 GMT", "version": "v4" } ]
2021-11-09
[ [ "Vigneaux", "Juan Pablo", "" ] ]
We introduce the category of information structures, whose objects are suitable diagrams of measurable sets that encode the possible outputs of a given family of observables and their mutual relationships of refinement; they serve as mathematical models of contextuality in classical and quantum settings. Each information structure can be regarded as a ringed site with trivial topology; the structure ring is generated by the observables themselves and its multiplication corresponds to joint measurement. We extend Baudot and Bennequin's definition of information cohomology to this setting, as a derived functor in the category of modules over the structure ring, and show explicitly that the bar construction gives a projective resolution in that category, recovering in this way the cochain complexes previously considered in the literature. Finally, we study the particular case of a one-parameter family of coefficients made of functions of probability distributions. The only 1-cocycles are Shannon entropy or Tsallis $\alpha$-entropy, depending on the value of the parameter.
2009.10990
Rohun Kshirsagar
Rohun Kshirsagar, Li-Yen Hsu, Vatshank Chaturvedi, Charles H. Greenberg, Matthew McClelland, Anushadevi Mohan, Wideet Shende, Nicolas P. Tilmans, Renzo Frigato, Min Guo, Ankit Chheda, Meredith Trotter, Shonket Ray, Arnold Lee, Miguel Alvarado
Accurate and Interpretable Machine Learning for Transparent Pricing of Health Insurance Plans
Accepted for publication in The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), in the Innovative Applications of Artificial Intelligence track. This is the extended version with some stylistic fixes from the first posting and complete author list
null
null
null
cs.CY cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Health insurance companies cover half of the United States population through commercial employer-sponsored health plans and pay 1.2 trillion US dollars every year to cover medical expenses for their members. The actuary and underwriter roles at a health insurance company serve to assess which risks to take on and how to price those risks to ensure profitability of the organization. While Bayesian hierarchical models are the current standard in the industry to estimate risk, interest in machine learning as a way to improve upon these existing methods is increasing. Lumiata, a healthcare analytics company, ran a study with a large health insurance company in the United States. We evaluated the ability of machine learning models to predict the per member per month cost of employer groups in their next renewal period, especially those groups who will cost less than 95\% of what an actuarial model predicts (groups with "concession opportunities"). We developed a sequence of two models, an individual patient-level and an employer-group-level model, to predict the annual per member per month allowed amount for employer groups, based on a population of 14 million patients. Our models performed 20\% better than the insurance carrier's existing pricing model, and identified 84\% of the concession opportunities. This study demonstrates the application of a machine learning system to compute an accurate and fair price for health insurance products and analyzes how explainable machine learning models can exceed actuarial models' predictive accuracy while maintaining interpretability.
[ { "created": "Wed, 23 Sep 2020 08:07:33 GMT", "version": "v1" }, { "created": "Sat, 27 Feb 2021 22:47:22 GMT", "version": "v2" } ]
2021-03-02
[ [ "Kshirsagar", "Rohun", "" ], [ "Hsu", "Li-Yen", "" ], [ "Chaturvedi", "Vatshank", "" ], [ "Greenberg", "Charles H.", "" ], [ "McClelland", "Matthew", "" ], [ "Mohan", "Anushadevi", "" ], [ "Shende", "Wideet", "" ], [ "Tilmans", "Nicolas P.", "" ], [ "Frigato", "Renzo", "" ], [ "Guo", "Min", "" ], [ "Chheda", "Ankit", "" ], [ "Trotter", "Meredith", "" ], [ "Ray", "Shonket", "" ], [ "Lee", "Arnold", "" ], [ "Alvarado", "Miguel", "" ] ]
Health insurance companies cover half of the United States population through commercial employer-sponsored health plans and pay 1.2 trillion US dollars every year to cover medical expenses for their members. The actuary and underwriter roles at a health insurance company serve to assess which risks to take on and how to price those risks to ensure profitability of the organization. While Bayesian hierarchical models are the current standard in the industry to estimate risk, interest in machine learning as a way to improve upon these existing methods is increasing. Lumiata, a healthcare analytics company, ran a study with a large health insurance company in the United States. We evaluated the ability of machine learning models to predict the per member per month cost of employer groups in their next renewal period, especially those groups who will cost less than 95\% of what an actuarial model predicts (groups with "concession opportunities"). We developed a sequence of two models, an individual patient-level and an employer-group-level model, to predict the annual per member per month allowed amount for employer groups, based on a population of 14 million patients. Our models performed 20\% better than the insurance carrier's existing pricing model, and identified 84\% of the concession opportunities. This study demonstrates the application of a machine learning system to compute an accurate and fair price for health insurance products and analyzes how explainable machine learning models can exceed actuarial models' predictive accuracy while maintaining interpretability.
2401.15443
Zibin Dong
Zibin Dong, Jianye Hao, Yifu Yuan, Fei Ni, Yitian Wang, Pengyi Li and Yan Zheng
DiffuserLite: Towards Real-time Diffusion Planning
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Diffusion planning has been recognized as an effective decision-making paradigm in various domains. The capability of conditionally generating high-quality long-horizon trajectories makes it a promising research direction. However, existing diffusion planning methods suffer from low decision-making frequencies due to the expensive iterative sampling cost. To address this issue, we introduce DiffuserLite, a super fast and lightweight diffusion planning framework. DiffuserLite employs a planning refinement process (PRP) to generate coarse-to-fine-grained trajectories, significantly reducing the modeling of redundant information and leading to notable increases in decision-making frequency. Our experimental results demonstrate that DiffuserLite achieves a decision-making frequency of $122$Hz ($112.7$x faster than previous mainstream frameworks) and reaches state-of-the-art performance on D4RL benchmarks. In addition, our neat DiffuserLite framework can serve as a flexible plugin to enhance decision frequency in other diffusion planning algorithms, providing a structural design reference for future works. More details and visualizations are available at https://diffuserlite.github.io/.
[ { "created": "Sat, 27 Jan 2024 15:30:49 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2024 04:43:27 GMT", "version": "v2" }, { "created": "Wed, 31 Jan 2024 02:50:41 GMT", "version": "v3" }, { "created": "Fri, 2 Feb 2024 08:57:16 GMT", "version": "v4" } ]
2024-02-05
[ [ "Dong", "Zibin", "" ], [ "Hao", "Jianye", "" ], [ "Yuan", "Yifu", "" ], [ "Ni", "Fei", "" ], [ "Wang", "Yitian", "" ], [ "Li", "Pengyi", "" ], [ "Zheng", "Yan", "" ] ]
Diffusion planning has been recognized as an effective decision-making paradigm in various domains. The capability of conditionally generating high-quality long-horizon trajectories makes it a promising research direction. However, existing diffusion planning methods suffer from low decision-making frequencies due to the expensive iterative sampling cost. To address this issue, we introduce DiffuserLite, a super fast and lightweight diffusion planning framework. DiffuserLite employs a planning refinement process (PRP) to generate coarse-to-fine-grained trajectories, significantly reducing the modeling of redundant information and leading to notable increases in decision-making frequency. Our experimental results demonstrate that DiffuserLite achieves a decision-making frequency of $122$Hz ($112.7$x faster than previous mainstream frameworks) and reaches state-of-the-art performance on D4RL benchmarks. In addition, our neat DiffuserLite framework can serve as a flexible plugin to enhance decision frequency in other diffusion planning algorithms, providing a structural design reference for future works. More details and visualizations are available at https://diffuserlite.github.io/.
2002.03749
Hartmut Feld
Hartmut Feld, Bruno Mirbach, Jigyasa Katrolia, Mohamed Selim, Oliver Wasenm\"uller, Didier Stricker
DFKI Cabin Simulator: A Test Platform for Visual In-Cabin Monitoring Functions
corrected typos and bad reference
null
null
null
cs.CV cs.HC cs.LG cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a test platform for visual in-cabin scene analysis and occupant monitoring functions. The test platform is based on a driving simulator developed at the DFKI, consisting of a realistic in-cabin mock-up and a wide-angle projection system for a realistic driving experience. The platform has been equipped with a wide-angle 2D/3D camera system monitoring the entire interior of the vehicle mock-up of the simulator. It is also supplemented with a ground truth reference sensor system that allows the occupant's body movements to be tracked and recorded synchronously with the 2D and 3D video streams of the camera. Thus, the resulting test platform will serve as a basis to validate numerous in-cabin monitoring functions, which are important for the realization of novel human-vehicle interfaces, advanced driver assistance systems, and automated driving. Among the considered functions are occupant presence detection, size and 3D-pose estimation, and driver intention recognition. In addition, our platform will be the basis for the creation of large-scale in-cabin benchmark datasets.
[ { "created": "Tue, 28 Jan 2020 07:15:50 GMT", "version": "v1" }, { "created": "Tue, 11 Feb 2020 12:27:42 GMT", "version": "v2" } ]
2020-02-12
[ [ "Feld", "Hartmut", "" ], [ "Mirbach", "Bruno", "" ], [ "Katrolia", "Jigyasa", "" ], [ "Selim", "Mohamed", "" ], [ "Wasenmüller", "Oliver", "" ], [ "Stricker", "Didier", "" ] ]
We present a test platform for visual in-cabin scene analysis and occupant monitoring functions. The test platform is based on a driving simulator developed at the DFKI, consisting of a realistic in-cabin mock-up and a wide-angle projection system for a realistic driving experience. The platform has been equipped with a wide-angle 2D/3D camera system monitoring the entire interior of the vehicle mock-up of the simulator. It is also supplemented with a ground truth reference sensor system that allows the occupant's body movements to be tracked and recorded synchronously with the 2D and 3D video streams of the camera. Thus, the resulting test platform will serve as a basis to validate numerous in-cabin monitoring functions, which are important for the realization of novel human-vehicle interfaces, advanced driver assistance systems, and automated driving. Among the considered functions are occupant presence detection, size and 3D-pose estimation, and driver intention recognition. In addition, our platform will be the basis for the creation of large-scale in-cabin benchmark datasets.
0910.4049
Nizami Gasilov
N. Gasilov, \c{S}ahin Emrah Amrahov, A. Golayoglu Fatullayev, H. I. Karakas, O. Akin
A Geometric Approach to Solve Fuzzy Linear Systems
null
CMES: Computer Modeling in Engineering & Sciences, Vol. 75, No. 3, pp. 189-204, 2011
10.3970/cmes.2011.075.189
null
cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, linear systems with a crisp real coefficient matrix and with a vector of fuzzy triangular numbers on the right-hand side are studied. A new method, which is based on the geometric representations of linear transformations, is proposed to find solutions. The method uses the fact that a vector of fuzzy triangular numbers forms a rectangular prism in n-dimensional space and that the image of a parallelepiped is also a parallelepiped under a linear transformation. The suggested method clarifies why, in the general case, different approaches do not generate solutions as fuzzy numbers. It is geometrically proved that if the coefficient matrix is a generalized permutation matrix, then the solution of a fuzzy linear system (FLS) is a vector of fuzzy numbers irrespective of the vector on the right-hand side. The most important difference between this and previous papers on FLS is that the solution is sought as a fuzzy set of vectors (with real components) rather than a vector of fuzzy numbers. Each vector in the solution set solves the given FLS with a certain possibility. The suggested method can also be applied in the case when the right-hand side is a vector of fuzzy numbers in parametric form. However, in this case, α-cuts of the solution cannot be determined by geometric similarity and additional computations are needed.
[ { "created": "Wed, 21 Oct 2009 11:20:37 GMT", "version": "v1" }, { "created": "Tue, 12 Apr 2011 08:06:58 GMT", "version": "v2" } ]
2011-11-03
[ [ "Gasilov", "N.", "" ], [ "Amrahov", "Şahin Emrah", "" ], [ "Fatullayev", "A. Golayoglu", "" ], [ "Karakas", "H. I.", "" ], [ "Akin", "O.", "" ] ]
In this paper, linear systems with a crisp real coefficient matrix and with a vector of fuzzy triangular numbers on the right-hand side are studied. A new method, which is based on the geometric representations of linear transformations, is proposed to find solutions. The method uses the fact that a vector of fuzzy triangular numbers forms a rectangular prism in n-dimensional space and that the image of a parallelepiped is also a parallelepiped under a linear transformation. The suggested method clarifies why, in the general case, different approaches do not generate solutions as fuzzy numbers. It is geometrically proved that if the coefficient matrix is a generalized permutation matrix, then the solution of a fuzzy linear system (FLS) is a vector of fuzzy numbers irrespective of the vector on the right-hand side. The most important difference between this and previous papers on FLS is that the solution is sought as a fuzzy set of vectors (with real components) rather than a vector of fuzzy numbers. Each vector in the solution set solves the given FLS with a certain possibility. The suggested method can also be applied in the case when the right-hand side is a vector of fuzzy numbers in parametric form. However, in this case, α-cuts of the solution cannot be determined by geometric similarity and additional computations are needed.
2111.02338
Ran Liu
Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
To be published in Neurips 2021
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
[ { "created": "Wed, 3 Nov 2021 16:39:43 GMT", "version": "v1" } ]
2021-11-04
[ [ "Liu", "Ran", "" ], [ "Azabou", "Mehdi", "" ], [ "Dabagia", "Max", "" ], [ "Lin", "Chi-Heng", "" ], [ "Azar", "Mohammad Gheshlaghi", "" ], [ "Hengen", "Keith B.", "" ], [ "Valko", "Michal", "" ], [ "Dyer", "Eva L.", "" ] ]
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
1104.4155
Sergiy Vorobyov A.
Zengmao Chen, Cheng-Xiang Wang, Xuemin Hong, John Thompson, Sergiy A. Vorobyov, Feng Zhao, Hailin Xiao, and Xiaohu Ge
Interference Mitigation for Cognitive Radio MIMO Systems Based on Practical Precoding
12 pages, 4 figures, submitted to the IEEE Trans. Wireless Communications in April 2011
Z. Chen, C.-X. Wang, X. Hong, J. Thompson, S.A. Vorobyov, and et al, "Interference mitigation for cognitive radio MIMO systems based on practical precoding," Invited Paper, Physical Communication, vol. 9, pp. 308-315, Dec. 2013
10.1016/j.phycom.2012.04.007
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose two subspace-projection-based precoding schemes, namely, full-projection (FP)- and partial-projection (PP)-based precoding, for a cognitive radio multiple-input multiple-output (CR-MIMO) network to mitigate its interference to a primary time-division-duplexing (TDD) system. The proposed precoding schemes are capable of estimating interference channels between CR and primary networks, and incorporating the interference from the primary to the CR system into CR precoding via a novel sensing approach. Then, the CR performance and resulting interference of the proposed precoding schemes are analyzed and evaluated. By fully projecting the CR transmission onto a null space of the interference channels, the FP-based precoding scheme can effectively avoid interfering with the primary system while boosting CR throughput. Meanwhile, the PP-based scheme is able to further improve the CR throughput by partially projecting its transmission onto the null space.
[ { "created": "Thu, 21 Apr 2011 01:42:27 GMT", "version": "v1" } ]
2016-11-17
[ [ "Chen", "Zengmao", "" ], [ "Wang", "Cheng-Xiang", "" ], [ "Hong", "Xuemin", "" ], [ "Thompson", "John", "" ], [ "Vorobyov", "Sergiy A.", "" ], [ "Zhao", "Feng", "" ], [ "Xiao", "Hailin", "" ], [ "Ge", "Xiaohu", "" ] ]
In this paper, we propose two subspace-projection-based precoding schemes, namely, full-projection (FP)- and partial-projection (PP)-based precoding, for a cognitive radio multiple-input multiple-output (CR-MIMO) network to mitigate its interference to a primary time-division-duplexing (TDD) system. The proposed precoding schemes are capable of estimating interference channels between CR and primary networks, and incorporating the interference from the primary to the CR system into CR precoding via a novel sensing approach. Then, the CR performance and resulting interference of the proposed precoding schemes are analyzed and evaluated. By fully projecting the CR transmission onto a null space of the interference channels, the FP-based precoding scheme can effectively avoid interfering with the primary system while boosting CR throughput. Meanwhile, the PP-based scheme is able to further improve the CR throughput by partially projecting its transmission onto the null space.
2301.10960
Alireza Hashemi
Alireza Hashemi, Hernan Makse
Visiting Distant Neighbors in Graph Convolutional Networks
null
null
null
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend the graph convolutional network method for deep learning on graph data to higher order in terms of neighboring nodes. To construct representations for a node in a graph, in addition to the features of the node and its immediate neighbors, we also include more distant nodes in the calculations. Experimenting with a number of publicly available citation graph datasets, we show that this higher-order neighbor visiting pays off, outperforming the original model especially when only a limited number of labeled data points is available for training the model.
[ { "created": "Thu, 26 Jan 2023 06:37:11 GMT", "version": "v1" }, { "created": "Sun, 29 Jan 2023 06:18:23 GMT", "version": "v2" }, { "created": "Wed, 22 May 2024 19:57:15 GMT", "version": "v3" } ]
2024-05-24
[ [ "Hashemi", "Alireza", "" ], [ "Makse", "Hernan", "" ] ]
We extend the graph convolutional network method for deep learning on graph data to higher order in terms of neighboring nodes. To construct representations for a node in a graph, in addition to the features of the node and its immediate neighbors, we also include more distant nodes in the calculations. Experimenting with a number of publicly available citation graph datasets, we show that this higher-order neighbor visiting pays off, outperforming the original model especially when only a limited number of labeled data points is available for training the model.
2002.02702
Martin Trapp
Mohamed Tarek, Kai Xu, Martin Trapp, Hong Ge, Zoubin Ghahramani
DynamicPPL: Stan-like Speed for Dynamic Probabilistic Models
null
null
null
null
cs.LG cs.PL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the preliminary high-level design and features of DynamicPPL.jl, a modular library providing a lightning-fast infrastructure for probabilistic programming. Besides a computational performance that is often close to or better than Stan, DynamicPPL provides an intuitive DSL that allows the rapid development of complex dynamic probabilistic programs. Being entirely written in Julia, a high-level dynamic programming language for numerical computing, DynamicPPL inherits a rich set of features available through the Julia ecosystem. Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use DynamicPPL to specify models and trace their model parameters. The main features of DynamicPPL are: 1) a meta-programming based DSL for specifying dynamic models using an intuitive tilde-based notation; 2) a tracing data-structure for tracking RVs in dynamic probabilistic models; 3) a rich contextual dispatch system allowing tailored behaviour during model execution; and 4) a user-friendly syntax for probabilistic queries. Finally, we show in a variety of experiments that DynamicPPL, in combination with Turing.jl, achieves computational performance that is often close to or better than Stan.
[ { "created": "Fri, 7 Feb 2020 10:21:49 GMT", "version": "v1" } ]
2020-02-10
[ [ "Tarek", "Mohamed", "" ], [ "Xu", "Kai", "" ], [ "Trapp", "Martin", "" ], [ "Ge", "Hong", "" ], [ "Ghahramani", "Zoubin", "" ] ]
We present the preliminary high-level design and features of DynamicPPL.jl, a modular library providing a lightning-fast infrastructure for probabilistic programming. Besides a computational performance that is often close to or better than Stan, DynamicPPL provides an intuitive DSL that allows the rapid development of complex dynamic probabilistic programs. Being entirely written in Julia, a high-level dynamic programming language for numerical computing, DynamicPPL inherits a rich set of features available through the Julia ecosystem. Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use DynamicPPL to specify models and trace their model parameters. The main features of DynamicPPL are: 1) a meta-programming based DSL for specifying dynamic models using an intuitive tilde-based notation; 2) a tracing data-structure for tracking RVs in dynamic probabilistic models; 3) a rich contextual dispatch system allowing tailored behaviour during model execution; and 4) a user-friendly syntax for probabilistic queries. Finally, we show in a variety of experiments that DynamicPPL, in combination with Turing.jl, achieves computational performance that is often close to or better than Stan.
2405.13360
Zhenting Wang
Zhenting Wang, Vikash Sehwag, Chen Chen, Lingjuan Lyu, Dimitris N. Metaxas, Shiqing Ma
How to Trace Latent Generative Model Generated Images without Artificial Watermark?
ICML 2024
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Latent generative models (e.g., Stable Diffusion) have become more and more popular, but concerns have arisen regarding potential misuse related to images generated by these models. It is, therefore, necessary to analyze the origin of images by inferring if a particular image was generated by a specific latent generative model. Most existing methods (e.g., image watermarking and model fingerprinting) require extra steps during training or generation. These requirements restrict their usage on generated images without such extra operations, and the extra required operations might compromise the quality of the generated images. In this work, we ask whether it is possible to effectively and efficiently trace the images generated by a specific latent generative model without the aforementioned requirements. To study this problem, we design a latent inversion based method called LatentTracer to trace the generated images of the inspected model by checking if the examined images can be well-reconstructed with an inverted latent input. We leverage gradient based latent inversion and identify an encoder-based initialization as critical to the success of our approach. Our experiments on state-of-the-art latent generative models, such as Stable Diffusion, show that our method can distinguish the images generated by the inspected model from other images with high accuracy and efficiency. Our findings suggest the intriguing possibility that the images generated by today's latent generative models are naturally watermarked by the decoder used in the source models. Code: https://github.com/ZhentingWang/LatentTracer.
[ { "created": "Wed, 22 May 2024 05:33:47 GMT", "version": "v1" } ]
2024-05-24
[ [ "Wang", "Zhenting", "" ], [ "Sehwag", "Vikash", "" ], [ "Chen", "Chen", "" ], [ "Lyu", "Lingjuan", "" ], [ "Metaxas", "Dimitris N.", "" ], [ "Ma", "Shiqing", "" ] ]
Latent generative models (e.g., Stable Diffusion) have become more and more popular, but concerns have arisen regarding potential misuse related to images generated by these models. It is, therefore, necessary to analyze the origin of images by inferring if a particular image was generated by a specific latent generative model. Most existing methods (e.g., image watermarking and model fingerprinting) require extra steps during training or generation. These requirements restrict their usage on generated images without such extra operations, and the extra required operations might compromise the quality of the generated images. In this work, we ask whether it is possible to effectively and efficiently trace the images generated by a specific latent generative model without the aforementioned requirements. To study this problem, we design a latent inversion based method called LatentTracer to trace the generated images of the inspected model by checking if the examined images can be well-reconstructed with an inverted latent input. We leverage gradient based latent inversion and identify an encoder-based initialization as critical to the success of our approach. Our experiments on state-of-the-art latent generative models, such as Stable Diffusion, show that our method can distinguish the images generated by the inspected model from other images with high accuracy and efficiency. Our findings suggest the intriguing possibility that the images generated by today's latent generative models are naturally watermarked by the decoder used in the source models. Code: https://github.com/ZhentingWang/LatentTracer.
2404.12512
Yubo Gao
Yubo Gao, Maryam Haghifam, Christina Giannoula, Renbo Tu, Gennady Pekhimenko, Nandita Vijaykumar
Proteus: Preserving Model Confidentiality during Graph Optimizations
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep learning (DL) models have revolutionized numerous domains, yet optimizing them for computational efficiency remains a challenging endeavor. Development of new DL models typically involves two parties: the model developers and performance optimizers. The collaboration between the parties often necessitates the model developers exposing the model architecture and computational graph to the optimizers. However, this exposure is undesirable since the model architecture is an important intellectual property, and its innovations require significant investments and expertise. During the exchange, the model is also vulnerable to adversarial attacks via model stealing. This paper presents Proteus, a novel mechanism that enables model optimization by an independent party while preserving the confidentiality of the model architecture. Proteus obfuscates the protected model by partitioning its computational graph into subgraphs and concealing each subgraph within a large pool of generated realistic subgraphs that cannot be easily distinguished from the original. We evaluate Proteus on a range of DNNs, demonstrating its efficacy in preserving confidentiality without compromising performance optimization opportunities. Proteus effectively hides the model as one alternative among up to $10^{32}$ possible model architectures, and is resilient against attacks with a learning-based adversary. We also demonstrate that heuristic based and manual approaches are ineffective in identifying the protected model. To our knowledge, Proteus is the first work that tackles the challenge of model confidentiality during performance optimization. Proteus will be open-sourced for direct use and experimentation, with easy integration with compilers such as ONNXRuntime.
[ { "created": "Thu, 18 Apr 2024 21:23:25 GMT", "version": "v1" } ]
2024-04-22
[ [ "Gao", "Yubo", "" ], [ "Haghifam", "Maryam", "" ], [ "Giannoula", "Christina", "" ], [ "Tu", "Renbo", "" ], [ "Pekhimenko", "Gennady", "" ], [ "Vijaykumar", "Nandita", "" ] ]
Deep learning (DL) models have revolutionized numerous domains, yet optimizing them for computational efficiency remains a challenging endeavor. Development of new DL models typically involves two parties: the model developers and performance optimizers. The collaboration between the parties often necessitates the model developers exposing the model architecture and computational graph to the optimizers. However, this exposure is undesirable since the model architecture is an important intellectual property, and its innovations require significant investments and expertise. During the exchange, the model is also vulnerable to adversarial attacks via model stealing. This paper presents Proteus, a novel mechanism that enables model optimization by an independent party while preserving the confidentiality of the model architecture. Proteus obfuscates the protected model by partitioning its computational graph into subgraphs and concealing each subgraph within a large pool of generated realistic subgraphs that cannot be easily distinguished from the original. We evaluate Proteus on a range of DNNs, demonstrating its efficacy in preserving confidentiality without compromising performance optimization opportunities. Proteus effectively hides the model as one alternative among up to $10^{32}$ possible model architectures, and is resilient against attacks with a learning-based adversary. We also demonstrate that heuristic based and manual approaches are ineffective in identifying the protected model. To our knowledge, Proteus is the first work that tackles the challenge of model confidentiality during performance optimization. Proteus will be open-sourced for direct use and experimentation, with easy integration with compilers such as ONNXRuntime.
2305.11744
Revanth Reddy
Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi
ReFIT: Relevance Feedback from a Reranker during Inference
Preprint
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Retrieve-and-rerank is a prevalent framework in neural information retrieval, wherein a bi-encoder network initially retrieves a pre-defined number of candidates (e.g., K=100), which are then reranked by a more powerful cross-encoder model. While the reranker often yields improved candidate scores compared to the retriever, its scope is confined to only the top K retrieved candidates. As a result, the reranker cannot improve retrieval performance in terms of Recall@K. In this work, we propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time. Specifically, given a test instance during inference, we distill the reranker's predictions for that instance into the retriever's query representation using a lightweight update mechanism. The aim of the distillation loss is to align the retriever's candidate scores more closely with those produced by the reranker. The algorithm then proceeds by executing a second retrieval step using the updated query vector. We empirically demonstrate that this method, applicable to various retrieve-and-rerank frameworks, substantially enhances retrieval recall across multiple domains, languages, and modalities.
[ { "created": "Fri, 19 May 2023 15:30:33 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 17:12:02 GMT", "version": "v2" } ]
2024-05-29
[ [ "Reddy", "Revanth Gangi", "" ], [ "Dasigi", "Pradeep", "" ], [ "Sultan", "Md Arafat", "" ], [ "Cohan", "Arman", "" ], [ "Sil", "Avirup", "" ], [ "Ji", "Heng", "" ], [ "Hajishirzi", "Hannaneh", "" ] ]
Retrieve-and-rerank is a prevalent framework in neural information retrieval, wherein a bi-encoder network initially retrieves a pre-defined number of candidates (e.g., K=100), which are then reranked by a more powerful cross-encoder model. While the reranker often yields improved candidate scores compared to the retriever, its scope is confined to only the top K retrieved candidates. As a result, the reranker cannot improve retrieval performance in terms of Recall@K. In this work, we propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time. Specifically, given a test instance during inference, we distill the reranker's predictions for that instance into the retriever's query representation using a lightweight update mechanism. The aim of the distillation loss is to align the retriever's candidate scores more closely with those produced by the reranker. The algorithm then proceeds by executing a second retrieval step using the updated query vector. We empirically demonstrate that this method, applicable to various retrieve-and-rerank frameworks, substantially enhances retrieval recall across multiple domains, languages, and modalities.
1711.04451
Jianyu Wang
Jianyu Wang, Zhishuai Zhang, Cihang Xie, Yuyin Zhou, Vittal Premachandran, Jun Zhu, Lingxi Xie, Alan Yuille
Visual Concepts and Compositional Voting
It is accepted by Annals of Mathematical Sciences and Applications
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is very attractive to formulate vision in terms of pattern theory \cite{Mumford2010pattern}, where patterns are defined hierarchically by compositions of elementary building blocks. But applying pattern theory to real world images is currently less successful than discriminative methods such as deep networks. Deep networks, however, are black-boxes which are hard to interpret and can easily be fooled by adding occluding objects. It is natural to wonder whether by better understanding deep networks we can extract building blocks which can be used to develop pattern theoretic models. This motivates us to study the internal representations of a deep network using vehicle images from the PASCAL3D+ dataset. We use clustering algorithms to study the population activities of the features and extract a set of visual concepts which we show are visually tight and correspond to semantic parts of vehicles. To analyze this we annotate these vehicles by their semantic parts to create a new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised part detectors. We show that visual concepts perform fairly well but are outperformed by supervised discriminative methods such as Support Vector Machines (SVM). We next give a more detailed analysis of visual concepts and how they relate to semantic parts. Following this, we use the visual concepts as building blocks for a simple pattern theoretical model, which we call compositional voting. In this model several visual concepts combine to detect semantic parts. We show that this approach is significantly better than discriminative methods like SVM and deep networks trained specifically for semantic part detection. Finally, we return to studying occlusion by creating an annotated dataset with occlusion, called VehicleOcclusion, and show that compositional voting outperforms even deep networks when the amount of occlusion becomes large.
[ { "created": "Mon, 13 Nov 2017 07:29:04 GMT", "version": "v1" } ]
2017-11-15
[ [ "Wang", "Jianyu", "" ], [ "Zhang", "Zhishuai", "" ], [ "Xie", "Cihang", "" ], [ "Zhou", "Yuyin", "" ], [ "Premachandran", "Vittal", "" ], [ "Zhu", "Jun", "" ], [ "Xie", "Lingxi", "" ], [ "Yuille", "Alan", "" ] ]
It is very attractive to formulate vision in terms of pattern theory \cite{Mumford2010pattern}, where patterns are defined hierarchically by compositions of elementary building blocks. But applying pattern theory to real world images is currently less successful than discriminative methods such as deep networks. Deep networks, however, are black-boxes which are hard to interpret and can easily be fooled by adding occluding objects. It is natural to wonder whether by better understanding deep networks we can extract building blocks which can be used to develop pattern theoretic models. This motivates us to study the internal representations of a deep network using vehicle images from the PASCAL3D+ dataset. We use clustering algorithms to study the population activities of the features and extract a set of visual concepts which we show are visually tight and correspond to semantic parts of vehicles. To analyze this we annotate these vehicles by their semantic parts to create a new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised part detectors. We show that visual concepts perform fairly well but are outperformed by supervised discriminative methods such as Support Vector Machines (SVM). We next give a more detailed analysis of visual concepts and how they relate to semantic parts. Following this, we use the visual concepts as building blocks for a simple pattern theoretical model, which we call compositional voting. In this model several visual concepts combine to detect semantic parts. We show that this approach is significantly better than discriminative methods like SVM and deep networks trained specifically for semantic part detection. Finally, we return to studying occlusion by creating an annotated dataset with occlusion, called VehicleOcclusion, and show that compositional voting outperforms even deep networks when the amount of occlusion becomes large.
1508.00761
Tingshao Zhu
Shun Li, Changye Zhu, Liqing Cui, Nan Zhao, Baobin Li and Tingshao Zhu
Recognition of Emotions using Kinects
15 pages, 4 figures
null
null
null
cs.CY cs.CV cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Psychological studies indicate that emotional states are expressed in the way people walk, and human gait has been investigated for its ability to reveal a person's emotional state. Microsoft Kinect is a rapidly developing, inexpensive, portable, and markerless motion capture system. This paper presents a new method for emotion recognition that uses Microsoft Kinect for gait pattern analysis, which has not been reported before. $59$ subjects were recruited in this study and their gait patterns were recorded by two Kinect cameras. Data preprocessing included significant-joint selection, coordinate system transformation, sliding-window Gaussian filtering, differential operation, and data segmentation. Feature extraction was based on the Fourier transform. Using NaiveBayes, RandomForests, libSVM, and SMO classifiers, the recognition rate of natural and unnatural emotions reached above 70%. It is concluded that the Kinect system can serve as a new method for the recognition of emotions.
[ { "created": "Tue, 4 Aug 2015 13:03:27 GMT", "version": "v1" } ]
2015-08-05
[ [ "Li", "Shun", "" ], [ "Zhu", "Changye", "" ], [ "Cui", "Liqing", "" ], [ "Zhao", "Nan", "" ], [ "Li", "Baobin", "" ], [ "Zhu", "Tingshao", "" ] ]
Psychological studies indicate that emotional states are expressed in the way people walk, and human gait has been investigated for its ability to reveal a person's emotional state. Microsoft Kinect is a rapidly developing, inexpensive, portable, and markerless motion capture system. This paper presents a new method for emotion recognition that uses Microsoft Kinect for gait pattern analysis, which has not been reported before. $59$ subjects were recruited in this study and their gait patterns were recorded by two Kinect cameras. Data preprocessing included significant-joint selection, coordinate system transformation, sliding-window Gaussian filtering, differential operation, and data segmentation. Feature extraction was based on the Fourier transform. Using NaiveBayes, RandomForests, libSVM, and SMO classifiers, the recognition rate of natural and unnatural emotions reached above 70%. It is concluded that the Kinect system can serve as a new method for the recognition of emotions.
1911.03567
Zuheng Ming
Souhail Bakkali, Zuheng Ming, Muhammad Muzzamil Luqman, Jean-Christophe Burie
Face Detection in Camera Captured Images of Identity Documents under Challenging Conditions
accepted by the ICDAR2019 workshop CBDAR2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Benefiting from advances in deep convolutional neural networks (CNNs), many face detection algorithms have achieved state-of-the-art accuracy and very high speed in unconstrained applications. However, due to the lack of public datasets and to the variation in the orientation of face images, the complex backgrounds and lighting, defocus, and the varying illumination of camera-captured images, face detection on identity documents in unconstrained environments has not been sufficiently studied. To address this problem, we survey three state-of-the-art face detection methods developed for general images, i.e. Cascade-CNN, MTCNN and PCN, for face detection in camera-captured images of identity documents under different image quality assessments. To this end, the MIDV-500 dataset, which is the largest and most challenging dataset for identity documents, is used to evaluate the three methods. The evaluation results show the performance and the limitations of current methods for face detection on identity documents in complex, unconstrained environments. These results show that face detection in camera-captured images of identity documents remains challenging, leaving room for improvement in future work.
[ { "created": "Fri, 8 Nov 2019 22:39:06 GMT", "version": "v1" } ]
2019-11-12
[ [ "Bakkali", "Souhail", "" ], [ "Ming", "Zuheng", "" ], [ "Luqman", "Muhammad Muzzamil", "" ], [ "Burie", "Jean-Christophe", "" ] ]
Benefiting from advances in deep convolutional neural networks (CNNs), many face detection algorithms have achieved state-of-the-art accuracy and very high speed in unconstrained applications. However, due to the lack of public datasets and to the variation in the orientation of face images, the complex backgrounds and lighting, defocus, and the varying illumination of camera-captured images, face detection on identity documents in unconstrained environments has not been sufficiently studied. To address this problem, we survey three state-of-the-art face detection methods developed for general images, i.e. Cascade-CNN, MTCNN and PCN, for face detection in camera-captured images of identity documents under different image quality assessments. To this end, the MIDV-500 dataset, which is the largest and most challenging dataset for identity documents, is used to evaluate the three methods. The evaluation results show the performance and the limitations of current methods for face detection on identity documents in complex, unconstrained environments. These results show that face detection in camera-captured images of identity documents remains challenging, leaving room for improvement in future work.
2206.14863
Ruijia Cheng
Ruijia Cheng, Alison Smith-Renner, Ke Zhang, Joel R. Tetreault, Alejandro Jaimes
Mapping the Design Space of Human-AI Interaction in Text Summarization
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic text summarization systems commonly involve humans for preparing data or evaluating model performance, yet there is no systematic understanding of humans' roles, experience, and needs when interacting with or being assisted by AI. From a human-centered perspective, we map the design opportunities and considerations for human-AI interaction in text summarization and broader text generation tasks. We first conducted a systematic literature review of 70 papers, developing a taxonomy of five interactions in AI-assisted text generation and relevant design dimensions. We designed text summarization prototypes for each interaction. We then interviewed 16 users, aided by the prototypes, to understand their expectations, experience, and needs regarding efficiency, control, and trust with AI in text summarization, and propose design considerations accordingly.
[ { "created": "Wed, 29 Jun 2022 19:03:25 GMT", "version": "v1" } ]
2022-07-01
[ [ "Cheng", "Ruijia", "" ], [ "Smith-Renner", "Alison", "" ], [ "Zhang", "Ke", "" ], [ "Tetreault", "Joel R.", "" ], [ "Jaimes", "Alejandro", "" ] ]
Automatic text summarization systems commonly involve humans for preparing data or evaluating model performance, yet there is no systematic understanding of humans' roles, experience, and needs when interacting with or being assisted by AI. From a human-centered perspective, we map the design opportunities and considerations for human-AI interaction in text summarization and broader text generation tasks. We first conducted a systematic literature review of 70 papers, developing a taxonomy of five interactions in AI-assisted text generation and relevant design dimensions. We designed text summarization prototypes for each interaction. We then interviewed 16 users, aided by the prototypes, to understand their expectations, experience, and needs regarding efficiency, control, and trust with AI in text summarization, and propose design considerations accordingly.
2405.04049
Hamed Poursiami
Hamed Poursiami, Ihsen Alouani, Maryam Parsa
Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks
7 pages, 7 figures
null
null
null
cs.CR cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial. Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse, which could lead to significant financial losses for the owners. While IP protection techniques have been extensively explored for artificial neural networks (ANNs), their applicability and effectiveness for the unique characteristics of SNNs remain largely unexplored. In this work, we pioneer an investigation into adapting two prominent watermarking approaches, namely, fingerprint-based and backdoor-based mechanisms to secure proprietary SNN architectures. We conduct thorough experiments to evaluate the impact on fidelity, resilience against overwrite threats, and resistance to compression attacks when applying these watermarking techniques to SNNs, drawing comparisons with their ANN counterparts. This study lays the groundwork for developing neuromorphic-aware IP protection strategies tailored to the distinctive dynamics of SNNs.
[ { "created": "Tue, 7 May 2024 06:42:30 GMT", "version": "v1" } ]
2024-05-08
[ [ "Poursiami", "Hamed", "" ], [ "Alouani", "Ihsen", "" ], [ "Parsa", "Maryam", "" ] ]
As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial. Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse, which could lead to significant financial losses for the owners. While IP protection techniques have been extensively explored for artificial neural networks (ANNs), their applicability and effectiveness for the unique characteristics of SNNs remain largely unexplored. In this work, we pioneer an investigation into adapting two prominent watermarking approaches, namely, fingerprint-based and backdoor-based mechanisms to secure proprietary SNN architectures. We conduct thorough experiments to evaluate the impact on fidelity, resilience against overwrite threats, and resistance to compression attacks when applying these watermarking techniques to SNNs, drawing comparisons with their ANN counterparts. This study lays the groundwork for developing neuromorphic-aware IP protection strategies tailored to the distinctive dynamics of SNNs.
1701.06664
Katina Kralevska
Danilo Gligoroski, Katina Kralevska, Rune E. Jensen, Per Simonsen
Repair Duality with Locally Repairable and Locally Regenerating Codes
Accepted as a full paper for publication at IEEE DataCom 2017
2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech), Orlando, FL, 2017, pp. 979-984
10.1109/DASC-PICom-DataCom-CyberSciTec.2017.162
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct an explicit family of locally repairable and locally regenerating codes whose existence was proven in a recent work by Kamath et al. on codes with local regeneration, where no explicit construction was given. This explicit family of codes is based on HashTag codes. HashTag codes are recently defined vector codes with different vector lengths $\alpha$ (also called sub-packetization levels) that achieve the optimal repair bandwidth of MSR codes or near-optimal repair bandwidth, depending on the sub-packetization level. We apply the technique of parity-splitting code construction. We show that the lower bound on the size of the finite field for the presented explicit code constructions can be lower than the one given in the work of Kamath et al. Finally, we discuss the importance of having two ways to repair a node with locally regenerating HashTag codes: repair with only local parity nodes, or repair with both local and global parity nodes. To the best of the authors' knowledge, this is the first work in which this duality in the repair process is discussed. We give a practical example and experimental results in Hadoop where we show the benefits of this repair duality.
[ { "created": "Mon, 23 Jan 2017 22:50:54 GMT", "version": "v1" }, { "created": "Sat, 28 Jan 2017 11:12:27 GMT", "version": "v2" }, { "created": "Sat, 22 Apr 2017 11:18:37 GMT", "version": "v3" }, { "created": "Wed, 30 Aug 2017 09:29:12 GMT", "version": "v4" } ]
2020-02-14
[ [ "Gligoroski", "Danilo", "" ], [ "Kralevska", "Katina", "" ], [ "Jensen", "Rune E.", "" ], [ "Simonsen", "Per", "" ] ]
We construct an explicit family of locally repairable and locally regenerating codes whose existence was proven in a recent work by Kamath et al. on codes with local regeneration, where no explicit construction was given. This explicit family of codes is based on HashTag codes. HashTag codes are recently defined vector codes with different vector lengths $\alpha$ (also called sub-packetization levels) that achieve the optimal repair bandwidth of MSR codes or near-optimal repair bandwidth, depending on the sub-packetization level. We apply the technique of parity-splitting code construction. We show that the lower bound on the size of the finite field for the presented explicit code constructions can be lower than the one given in the work of Kamath et al. Finally, we discuss the importance of having two ways to repair a node with locally regenerating HashTag codes: repair with only local parity nodes, or repair with both local and global parity nodes. To the best of the authors' knowledge, this is the first work in which this duality in the repair process is discussed. We give a practical example and experimental results in Hadoop where we show the benefits of this repair duality.
1912.01452
Jia-Hong Huang
Jia-Hong Huang, Modar Alfadly, Bernard Ghanem, Marcel Worring
Assessing the Robustness of Visual Question Answering Models
24 pages, 13 figures, International Journal of Computer Vision (IJCV) [under review]. arXiv admin note: substantial text overlap with arXiv:1711.06232, arXiv:1709.04625
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Deep neural networks have been playing an essential role in the task of Visual Question Answering (VQA). Until recently, their accuracy has been the main focus of research. Now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating the accuracy of these models under increasing levels of noisiness in the inputs of VQA models. In VQA, the attack can target the image and/or the proposed query question, dubbed main question, and yet there is a lack of proper analysis of this aspect of VQA. In this work, we propose a new method that uses semantically related questions, dubbed basic questions, acting as noise to evaluate the robustness of VQA models. We hypothesize that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, we rank a pool of basic questions based on their similarity with this main question. We cast this ranking problem as a LASSO optimization problem. We also propose a novel robustness measure Rscore and two large-scale basic question datasets in order to standardize robustness analysis of VQA models. The experimental results demonstrate that the proposed evaluation method is able to effectively analyze the robustness of VQA models. To foster the VQA research, we will publish our proposed datasets.
[ { "created": "Sat, 30 Nov 2019 09:32:38 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2022 14:17:46 GMT", "version": "v2" } ]
2022-03-04
[ [ "Huang", "Jia-Hong", "" ], [ "Alfadly", "Modar", "" ], [ "Ghanem", "Bernard", "" ], [ "Worring", "Marcel", "" ] ]
Deep neural networks have been playing an essential role in the task of Visual Question Answering (VQA). Until recently, their accuracy has been the main focus of research. Now there is a trend toward assessing the robustness of these models against adversarial attacks by evaluating the accuracy of these models under increasing levels of noisiness in the inputs of VQA models. In VQA, the attack can target the image and/or the proposed query question, dubbed main question, and yet there is a lack of proper analysis of this aspect of VQA. In this work, we propose a new method that uses semantically related questions, dubbed basic questions, acting as noise to evaluate the robustness of VQA models. We hypothesize that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, we rank a pool of basic questions based on their similarity with this main question. We cast this ranking problem as a LASSO optimization problem. We also propose a novel robustness measure Rscore and two large-scale basic question datasets in order to standardize robustness analysis of VQA models. The experimental results demonstrate that the proposed evaluation method is able to effectively analyze the robustness of VQA models. To foster the VQA research, we will publish our proposed datasets.
1006.0551
EPTCS
Xizhong Zheng (Arcadia University), Ning Zhong (University of Cincinnati)
Proceedings Seventh International Conference on Computability and Complexity in Analysis
null
EPTCS 24, 2010
10.4204/EPTCS.24
null
cs.CC cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This volume of the Electronic Proceedings in Theoretical Computer Science (EPTCS) contains extended abstracts of talks to be presented at the Seventh International Conference on Computability and Complexity in Analysis (CCA 2010) that will take place in Zhenjiang, China, June 21-25, 2010. This conference is the seventeenth event in the series of CCA annual meetings. The CCA conferences are aimed at promoting the study and advancement of the theory of computability and complexity over real-valued data and its application.
[ { "created": "Thu, 3 Jun 2010 04:22:39 GMT", "version": "v1" } ]
2010-06-04
[ [ "Zheng", "Xizhong", "", "Arcadia University" ], [ "Zhong", "Ning", "", "University of\n Cincinnati" ] ]
This volume of the Electronic Proceedings in Theoretical Computer Science (EPTCS) contains extended abstracts of talks to be presented at the Seventh International Conference on Computability and Complexity in Analysis (CCA 2010) that will take place in Zhenjiang, China, June 21-25, 2010. This conference is the seventeenth event in the series of CCA annual meetings. The CCA conferences are aimed at promoting the study and advancement of the theory of computability and complexity over real-valued data and its application.
2208.02311
Amar Kumar
Amar Kumar, Anjun Hu, Brennan Nichyporuk, Jean-Pierre R. Falet, Douglas L. Arnold, Sotirios Tsaftaris, and Tal Arbel
Counterfactual Image Synthesis for Discovery of Personalized Predictive Image Markers
Accepted to the MIABID workshop at MICCAI 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The discovery of patient-specific imaging markers that are predictive of future disease outcomes can help us better understand individual-level heterogeneity of disease evolution. In fact, deep learning models that can provide data-driven personalized markers are much more likely to be adopted in medical practice. In this work, we demonstrate that data-driven biomarker discovery can be achieved through a counterfactual synthesis process. We show how a deep conditional generative model can be used to perturb local imaging features in baseline images that are pertinent to subject-specific future disease evolution and result in a counterfactual image that is expected to have a different future outcome. Candidate biomarkers, therefore, result from examining the set of features that are perturbed in this process. Through several experiments on a large-scale, multi-scanner, multi-center multiple sclerosis (MS) clinical trial magnetic resonance imaging (MRI) dataset of relapsing-remitting (RRMS) patients, we demonstrate that our model produces counterfactuals with changes in imaging features that reflect established clinical markers predictive of future MRI lesional activity at the population level. Additional qualitative results illustrate that our model has the potential to discover novel and subject-specific predictive markers of future activity.
[ { "created": "Wed, 3 Aug 2022 18:58:45 GMT", "version": "v1" } ]
2022-08-05
[ [ "Kumar", "Amar", "" ], [ "Hu", "Anjun", "" ], [ "Nichyporuk", "Brennan", "" ], [ "Falet", "Jean-Pierre R.", "" ], [ "Arnold", "Douglas L.", "" ], [ "Tsaftaris", "Sotirios", "" ], [ "Arbel", "Tal", "" ] ]
The discovery of patient-specific imaging markers that are predictive of future disease outcomes can help us better understand individual-level heterogeneity of disease evolution. In fact, deep learning models that can provide data-driven personalized markers are much more likely to be adopted in medical practice. In this work, we demonstrate that data-driven biomarker discovery can be achieved through a counterfactual synthesis process. We show how a deep conditional generative model can be used to perturb local imaging features in baseline images that are pertinent to subject-specific future disease evolution and result in a counterfactual image that is expected to have a different future outcome. Candidate biomarkers, therefore, result from examining the set of features that are perturbed in this process. Through several experiments on a large-scale, multi-scanner, multi-center multiple sclerosis (MS) clinical trial magnetic resonance imaging (MRI) dataset of relapsing-remitting (RRMS) patients, we demonstrate that our model produces counterfactuals with changes in imaging features that reflect established clinical markers predictive of future MRI lesional activity at the population level. Additional qualitative results illustrate that our model has the potential to discover novel and subject-specific predictive markers of future activity.
1907.02562
Hang Hu
Xiaolong Yang, Tzu-Hao Huang, Hang Hu, Shuangyue Yu, Sainan Zhang, Xianlian Zhou, Alessandra Carriero, Guang Yue, and Hao Su
Spine-Inspired Continuum Soft Exoskeleton for Stoop Lifting Assistance
8 pages, 13 figures
IROS 2019
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Back injuries are the most prevalent work-related musculoskeletal disorders and represent a major cause of disability. Although innovations in wearable robots aim to alleviate this hazard, the majority of existing exoskeletons are obtrusive because their rigid linkage designs limit natural movement, thus causing ergonomic risk. Moreover, these existing systems are typically suitable for only one type of movement assistance, not for a wide variety of activities. To fill this gap, this paper presents a new wearable robot design approach: a continuum soft exoskeleton. This spine-inspired wearable robot is unobtrusive and assists both squatting and stooping without impeding walking motion. To address the challenge posed by the unique anatomy of the spine, which cannot be appropriately simplified as a single-degree-of-freedom joint, our robot conforms to human anatomy and can reduce multiple types of forces along the human spine, such as the erector spinae muscle force and the shear and compression forces on the lumbar vertebrae. We derived kinematics and kinetics models of this mechanism and established an analytical biomechanics model of human-robot interaction. Quantitative analysis of disc compression force, disc shear force, and muscle force was performed in simulation. We further developed a virtual impedance control strategy to deliver force control and compensate for the hysteresis of the Bowden cable transmission. The feasibility of the prototype was experimentally tested on three healthy subjects. The root mean square error of force tracking is 6.63 N (3.3% of the 200 N peak force), and the system demonstrated that it can actively control the stiffness to the desired value. This continuum soft exoskeleton represents a feasible solution with the potential to reduce back pain across multiple activities and multiple forces along the human spine.
[ { "created": "Thu, 4 Jul 2019 18:44:44 GMT", "version": "v1" } ]
2019-07-08
[ [ "Yang", "Xiaolong", "" ], [ "Huang", "Tzu-Hao", "" ], [ "Hu", "Hang", "" ], [ "Yu", "Shuangyue", "" ], [ "Zhang", "Sainan", "" ], [ "Zhou", "Xianlian", "" ], [ "Carriero", "Alessandra", "" ], [ "Yue", "Guang", "" ], [ "Su", "Hao", "" ] ]
Back injuries are the most prevalent work-related musculoskeletal disorders and represent a major cause of disability. Although innovations in wearable robots aim to alleviate this hazard, the majority of existing exoskeletons are obtrusive because their rigid linkage designs limit natural movement, thus causing ergonomic risk. Moreover, these existing systems are typically suitable for only one type of movement assistance, not for a wide variety of activities. To fill this gap, this paper presents a new wearable robot design approach: a continuum soft exoskeleton. This spine-inspired wearable robot is unobtrusive and assists both squatting and stooping without impeding walking motion. To address the challenge posed by the unique anatomy of the spine, which cannot be appropriately simplified as a single-degree-of-freedom joint, our robot conforms to human anatomy and can reduce multiple types of forces along the human spine, such as the erector spinae muscle force and the shear and compression forces on the lumbar vertebrae. We derived kinematics and kinetics models of this mechanism and established an analytical biomechanics model of human-robot interaction. Quantitative analysis of disc compression force, disc shear force, and muscle force was performed in simulation. We further developed a virtual impedance control strategy to deliver force control and compensate for the hysteresis of the Bowden cable transmission. The feasibility of the prototype was experimentally tested on three healthy subjects. The root mean square error of force tracking is 6.63 N (3.3% of the 200 N peak force), and the system demonstrated that it can actively control the stiffness to the desired value. This continuum soft exoskeleton represents a feasible solution with the potential to reduce back pain across multiple activities and multiple forces along the human spine.
1009.2893
Juha Kontinen
Juha Kontinen (University of Helsinki, Finland), Heribert Vollmer (University of Hannover, Germany)
On Second-Order Monadic Monoidal and Groupoidal Quantifiers
null
Logical Methods in Computer Science, Volume 6, Issue 3 (September 20, 2010) lmcs:1006
10.2168/LMCS-6(3:25)2010
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study logics defined in terms of second-order monadic monoidal and groupoidal quantifiers. These are generalized quantifiers defined by monoid and groupoid word-problems, equivalently, by regular and context-free languages. We give a computational classification of the expressive power of these logics over strings with varying built-in predicates. In particular, we show that ATIME(n) can be logically characterized in terms of second-order monadic monoidal quantifiers.
[ { "created": "Wed, 15 Sep 2010 10:30:50 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2010 08:22:57 GMT", "version": "v2" } ]
2015-07-01
[ [ "Kontinen", "Juha", "", "University of Helsinki, Finland" ], [ "Vollmer", "Heribert", "", "University of Hannover, Germany" ] ]
We study logics defined in terms of second-order monadic monoidal and groupoidal quantifiers. These are generalized quantifiers defined by monoid and groupoid word-problems, equivalently, by regular and context-free languages. We give a computational classification of the expressive power of these logics over strings with varying built-in predicates. In particular, we show that ATIME(n) can be logically characterized in terms of second-order monadic monoidal quantifiers.
1907.03050
Soheil Khorram
Soheil Khorram, Melvin G McInnis, Emily Mower Provost
Jointly Aligning and Predicting Continuous Emotion Annotations
IEEE Transactions on Affective Computing
null
null
null
cs.LG cs.HC eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-continuous dimensional descriptions of emotions (e.g., arousal, valence) allow researchers to characterize short-time changes and to capture long-term trends in emotion expression. However, continuous emotion labels are generally not synchronized with the input speech signal due to delays caused by reaction-time, which is inherent in human evaluations. To deal with this challenge, we introduce a new convolutional neural network (multi-delay sinc network) that is able to simultaneously align and predict labels in an end-to-end manner. The proposed network is a stack of convolutional layers followed by an aligner network that aligns the speech signal and emotion labels. This network is implemented using a new convolutional layer that we introduce, the delayed sinc layer. It is a time-shifted low-pass (sinc) filter that uses a gradient-based algorithm to learn a single delay. Multiple delayed sinc layers can be used to compensate for a non-stationary delay that is a function of the acoustic space. We test the efficacy of this system on two common emotion datasets, RECOLA and SEWA, and show that this approach obtains state-of-the-art speech-only results by learning time-varying delays while predicting dimensional descriptors of emotions.
[ { "created": "Fri, 5 Jul 2019 23:49:49 GMT", "version": "v1" }, { "created": "Thu, 18 Jul 2019 22:40:43 GMT", "version": "v2" } ]
2019-07-22
[ [ "Khorram", "Soheil", "" ], [ "McInnis", "Melvin G", "" ], [ "Provost", "Emily Mower", "" ] ]
Time-continuous dimensional descriptions of emotions (e.g., arousal, valence) allow researchers to characterize short-time changes and to capture long-term trends in emotion expression. However, continuous emotion labels are generally not synchronized with the input speech signal due to delays caused by reaction-time, which is inherent in human evaluations. To deal with this challenge, we introduce a new convolutional neural network (multi-delay sinc network) that is able to simultaneously align and predict labels in an end-to-end manner. The proposed network is a stack of convolutional layers followed by an aligner network that aligns the speech signal and emotion labels. This network is implemented using a new convolutional layer that we introduce, the delayed sinc layer. It is a time-shifted low-pass (sinc) filter that uses a gradient-based algorithm to learn a single delay. Multiple delayed sinc layers can be used to compensate for a non-stationary delay that is a function of the acoustic space. We test the efficacy of this system on two common emotion datasets, RECOLA and SEWA, and show that this approach obtains state-of-the-art speech-only results by learning time-varying delays while predicting dimensional descriptors of emotions.
2211.06479
Ron Fulbright
Ron Fulbright
Calculating Cognitive Augmentation, A Case Study
14 pages; 4 figures
HCII 2019: Augmented Cognition; Lecture Notes in Computer Science book series (LNAI,volume 11580)
10.1007/978-3-030-22419-6_38
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are entering an era in which humans will increasingly work in partnership and collaboration with artificially intelligent entities. For millennia, tools have augmented human physical and mental performance but in the coming era of cognitive systems, human cognitive performance will be augmented. We are only just now beginning to define the fundamental concepts and metrics to describe, characterize, and measure augmented and collaborative cognition. In this paper, the results of a cognitive augmentation experiment are discussed and we calculate the increase in cognitive accuracy and cognitive precision. In the case study, cognitively augmented problem solvers show an increase of 74% in cognitive accuracy (the ability to synthesize desired answers) and a 27% increase in cognitive precision (the ability to synthesize only desired answers). We offer a formal treatment of the case study results and propose cognitive accuracy and cognitive precision as standard metrics to describe and measure human cognitive augmentation.
[ { "created": "Fri, 11 Nov 2022 20:48:29 GMT", "version": "v1" } ]
2022-11-15
[ [ "Fulbright", "Ron", "" ] ]
We are entering an era in which humans will increasingly work in partnership and collaboration with artificially intelligent entities. For millennia, tools have augmented human physical and mental performance but in the coming era of cognitive systems, human cognitive performance will be augmented. We are only just now beginning to define the fundamental concepts and metrics to describe, characterize, and measure augmented and collaborative cognition. In this paper, the results of a cognitive augmentation experiment are discussed and we calculate the increase in cognitive accuracy and cognitive precision. In the case study, cognitively augmented problem solvers show an increase of 74% in cognitive accuracy (the ability to synthesize desired answers) and a 27% increase in cognitive precision (the ability to synthesize only desired answers). We offer a formal treatment of the case study results and propose cognitive accuracy and cognitive precision as standard metrics to describe and measure human cognitive augmentation.
2102.01020
Yan Uehara De Moraes
Carlos Pedroso, Yan Uehara de Moraes, Michele Nogueira, Aldri Santos
Relational Consensus-Based Cooperative Task Allocation Management for IIoT-Health Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IIoT services focused on industry-oriented applications often require objects to run more than one task. IIoT objects pose the challenge of distributing and managing task allocation among them. Fair task allocation enables flexible network reconfiguration and maximizes the number of tasks performed. Although existing approaches optimize and manage the dynamics of objects, not all of them consider both the co-relationship between tasks and object capabilities and the distributed allocation over the cluster service. This paper introduces the ACADIA mechanism for task allocation in IIoT networks, which distributes tasks among objects. It relies on relational consensus strategies to allocate tasks and on capability similarity to determine which objects can take part in accomplishing those tasks. Evaluation on NS-3 showed that ACADIA allocated 98% of tasks in an IIoT-Health network across all scenarios, kept on average more than 95% of clusters able to perform tasks with low response time, and achieved 50% more effectiveness in task allocation than the literature solution CONTASKI.
[ { "created": "Mon, 1 Feb 2021 17:53:11 GMT", "version": "v1" } ]
2021-02-02
[ [ "Pedroso", "Carlos", "" ], [ "de Moraes", "Yan Uehara", "" ], [ "Nogueira", "Michele", "" ], [ "Santos", "Aldri", "" ] ]
IIoT services focused on industry-oriented applications often require objects to run more than one task. IIoT objects pose the challenge of distributing and managing task allocation among them. Fair task allocation enables flexible network reconfiguration and maximizes the number of tasks performed. Although existing approaches optimize and manage the dynamics of objects, not all of them consider both the co-relationship between tasks and object capabilities and the distributed allocation over the cluster service. This paper introduces the ACADIA mechanism for task allocation in IIoT networks, which distributes tasks among objects. It relies on relational consensus strategies to allocate tasks and on capability similarity to determine which objects can take part in accomplishing those tasks. Evaluation on NS-3 showed that ACADIA allocated 98% of tasks in an IIoT-Health network across all scenarios, kept on average more than 95% of clusters able to perform tasks with low response time, and achieved 50% more effectiveness in task allocation than the literature solution CONTASKI.
2305.04177
Anastasiia Razdaibiedina
Anastasia Razdaibiedina, Alexander Brechalov
MIReAD: Simple Method for Learning High-quality Representations from Scientific Documents
ACL 2023 (short paper)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Learning semantically meaningful representations from scientific documents can facilitate academic literature search and improve the performance of recommendation systems. Pre-trained language models have been shown to learn rich textual representations, yet they cannot provide powerful document-level representations for scientific articles. We propose MIReAD, a simple method that learns high-quality representations of scientific papers by fine-tuning a transformer model to predict the target journal class based on the abstract. We train MIReAD on more than 500,000 PubMed and arXiv abstracts across over 2,000 journal classes. We show that MIReAD produces representations that can be used for similar-paper retrieval, topic categorization, and literature search. Our proposed approach outperforms six existing models for representation learning on scientific documents across four evaluation standards.
[ { "created": "Sun, 7 May 2023 03:29:55 GMT", "version": "v1" } ]
2023-05-09
[ [ "Razdaibiedina", "Anastasia", "" ], [ "Brechalov", "Alexander", "" ] ]
Learning semantically meaningful representations from scientific documents can facilitate academic literature search and improve the performance of recommendation systems. Pre-trained language models have been shown to learn rich textual representations, yet they cannot provide powerful document-level representations for scientific articles. We propose MIReAD, a simple method that learns high-quality representations of scientific papers by fine-tuning a transformer model to predict the target journal class based on the abstract. We train MIReAD on more than 500,000 PubMed and arXiv abstracts across over 2,000 journal classes. We show that MIReAD produces representations that can be used for similar-paper retrieval, topic categorization, and literature search. Our proposed approach outperforms six existing models for representation learning on scientific documents across four evaluation standards.
2207.13541
Wim Martens
Wim Martens, Matthias Niewerth, Tina Popp, Stijn Vansummeren, Domagoj Vrgoc
Representing Paths in Graph Database Pattern Matching
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern graph database query languages such as GQL, SQL/PGQ, and their academic predecessor G-Core promote paths to first-class citizens in the sense that paths that match regular path queries can be returned to the user. This brings a number of challenges in terms of efficiency, caused by the fact that graphs can have a huge amount of paths between a given node pair. We introduce the concept of path multiset representations (PMRs), which can represent multisets of paths in an exponentially succinct manner. After exploring fundamental problems such as minimization and equivalence testing of PMRs, we explore how their use can lead to significant time and space savings when executing query plans. We show that, from a computational complexity point of view, PMRs seem especially well-suited for representing results of regular path queries and extensions thereof involving counting, random sampling, unions, and joins.
[ { "created": "Wed, 27 Jul 2022 14:28:39 GMT", "version": "v1" } ]
2022-07-28
[ [ "Martens", "Wim", "" ], [ "Niewerth", "Matthias", "" ], [ "Popp", "Tina", "" ], [ "Vansummeren", "Stijn", "" ], [ "Vrgoc", "Domagoj", "" ] ]
Modern graph database query languages such as GQL, SQL/PGQ, and their academic predecessor G-Core promote paths to first-class citizens in the sense that paths that match regular path queries can be returned to the user. This brings a number of challenges in terms of efficiency, caused by the fact that graphs can have a huge amount of paths between a given node pair. We introduce the concept of path multiset representations (PMRs), which can represent multisets of paths in an exponentially succinct manner. After exploring fundamental problems such as minimization and equivalence testing of PMRs, we explore how their use can lead to significant time and space savings when executing query plans. We show that, from a computational complexity point of view, PMRs seem especially well-suited for representing results of regular path queries and extensions thereof involving counting, random sampling, unions, and joins.
1806.10570
Anna Aljanaki
Anna Aljanaki, Gerhard Widmer
Modeling Majorness as a Perceptual Property in Music from Listener Ratings
short paper for ICMPC proceedings
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For tasks such as automatic music emotion recognition, genre recognition, and music recommendation, it is helpful to be able to extract the mode of any section of a musical piece as the perceived amount of major or minor mode (majorness) within that section, perceived as a whole (one or several melodies and any harmony present). In this paper we take a data-driven approach (modeling directly from data, without giving an explicit definition or explicitly programming an algorithm) to modeling this property. We collect annotations from musicians and show that majorness can be understood by musicians in an intuitive way. We model this property from the data using deep learning.
[ { "created": "Wed, 27 Jun 2018 17:05:48 GMT", "version": "v1" } ]
2018-06-28
[ [ "Aljanaki", "Anna", "" ], [ "Widmer", "Gerhard", "" ] ]
For tasks such as automatic music emotion recognition, genre recognition, and music recommendation, it is helpful to be able to extract the mode of any section of a musical piece as the perceived amount of major or minor mode (majorness) within that section, perceived as a whole (one or several melodies and any harmony present). In this paper we take a data-driven approach (modeling directly from data, without giving an explicit definition or explicitly programming an algorithm) to modeling this property. We collect annotations from musicians and show that majorness can be understood by musicians in an intuitive way. We model this property from the data using deep learning.
2007.03631
Uma Girish
Uma Girish, Ran Raz, Wei Zhan
Lower Bounds for XOR of Forrelations
null
null
null
null
cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Forrelation problem, introduced by Aaronson [A10] and Aaronson and Ambainis [AA15], is a well studied problem in the context of separating quantum and classical models. Variants of this problem were used to give exponential separations between quantum and classical query complexity [A10, AA15]; quantum query complexity and bounded-depth circuits [RT19]; and quantum and classical communication complexity [GRT19]. In all these separations, the lower bound for the classical model only holds when the advantage of the protocol (over a random guess) is more than $\approx 1/\sqrt{N}$, that is, the success probability is larger than $\approx 1/2 + 1/\sqrt{N}$. To achieve separations when the classical protocol has smaller advantage, we study in this work the XOR of $k$ independent copies of the Forrelation function (where $k\ll N$). We prove a very general result that shows that any family of Boolean functions that is closed under restrictions, whose Fourier mass at level $2k$ is bounded by $\alpha^k$, cannot compute the XOR of $k$ independent copies of the Forrelation function with advantage better than $O\left(\frac{\alpha^k}{{N^{k/2}}}\right)$. This is a strengthening of a result of [CHLT19], that gave a similar result for $k=1$, using the technique of [RT19]. As an application of our result, we give the first example of a partial Boolean function that can be computed by a simultaneous-message quantum protocol of cost $\mbox{polylog}(N)$ (when players share $\mbox{polylog}(N)$ EPR pairs), however, any classical interactive randomized protocol of cost at most $\tilde{o}(N^{1/4})$, has quasipolynomially small advantage over a random guess. We also give the first example of a partial Boolean function that has a quantum query algorithm of cost $\mbox{polylog}(N)$, and such that, any constant-depth circuit of quasipolynomial size has quasipolynomially small advantage over a random guess.
[ { "created": "Tue, 7 Jul 2020 17:05:09 GMT", "version": "v1" } ]
2020-07-08
[ [ "Girish", "Uma", "" ], [ "Raz", "Ran", "" ], [ "Zhan", "Wei", "" ] ]
The Forrelation problem, introduced by Aaronson [A10] and Aaronson and Ambainis [AA15], is a well studied problem in the context of separating quantum and classical models. Variants of this problem were used to give exponential separations between quantum and classical query complexity [A10, AA15]; quantum query complexity and bounded-depth circuits [RT19]; and quantum and classical communication complexity [GRT19]. In all these separations, the lower bound for the classical model only holds when the advantage of the protocol (over a random guess) is more than $\approx 1/\sqrt{N}$, that is, the success probability is larger than $\approx 1/2 + 1/\sqrt{N}$. To achieve separations when the classical protocol has smaller advantage, we study in this work the XOR of $k$ independent copies of the Forrelation function (where $k\ll N$). We prove a very general result that shows that any family of Boolean functions that is closed under restrictions, whose Fourier mass at level $2k$ is bounded by $\alpha^k$, cannot compute the XOR of $k$ independent copies of the Forrelation function with advantage better than $O\left(\frac{\alpha^k}{{N^{k/2}}}\right)$. This is a strengthening of a result of [CHLT19], that gave a similar result for $k=1$, using the technique of [RT19]. As an application of our result, we give the first example of a partial Boolean function that can be computed by a simultaneous-message quantum protocol of cost $\mbox{polylog}(N)$ (when players share $\mbox{polylog}(N)$ EPR pairs), however, any classical interactive randomized protocol of cost at most $\tilde{o}(N^{1/4})$, has quasipolynomially small advantage over a random guess. We also give the first example of a partial Boolean function that has a quantum query algorithm of cost $\mbox{polylog}(N)$, and such that, any constant-depth circuit of quasipolynomial size has quasipolynomially small advantage over a random guess.
2005.13290
Tim Schatto-Eckrodt
Thorsten Quandt, Svenja Boberg, Tim Schatto-Eckrodt, Lena Frischlich
Pandemic News: Facebook Pages of Mainstream News Media and the Coronavirus Crisis -- A Computational Content Analysis
Corrected typos, 7 figures, 4 tables, 1 ancillary file
null
null
Muenster Online Research (MOR) Working Paper 2/2020,
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The unfolding of the COVID-19 pandemic has been an unprecedented challenge for news media around the globe. While journalism is meant to process yet unknown events by design, the dynamically evolving situation affected all aspects of life in such profound ways that even the routines of crisis reporting seemed to be insufficient. Critics noted tendencies to horse-race reporting and uncritical coverage, with journalism being too close to official statements and too affirmative of political decisions. However, empirical data on the performance of journalistic news media during the crisis has been lacking thus far. The current study analyzes the Facebook messages of journalistic news media during the early Coronavirus crisis, based on a large German data set from January to March 2020. Using computational content analysis methods, reach and interactions, topical structure, relevant actors, negativity of messages, as well as the coverage of fabricated news and conspiracy theories were examined. The topical structure of the near-time Facebook coverage changed during various stages of the crisis, with just partial support for the claims of critics. The initial stages were somewhat lacking in topical breadth, but later stages offered a broad range of coverage on Corona-related issues and societal concerns. Further, journalistic media covered fake news and conspiracy theories during the crisis, but they consistently contextualized them as what they were and debunked the false claims circulating in public. While some criticism regarding the performance of journalism during the crisis received mild empirical support, the analysis did not find overwhelming signs of systemic dysfunctionalities. Overall, journalistic media did not default to a uniform reaction nor to sprawling, information-poor pandemic news, but they responded with a multi-perspective coverage of the crisis.
[ { "created": "Wed, 27 May 2020 11:39:15 GMT", "version": "v1" }, { "created": "Thu, 28 May 2020 15:28:13 GMT", "version": "v2" } ]
2020-05-29
[ [ "Quandt", "Thorsten", "" ], [ "Boberg", "Svenja", "" ], [ "Schatto-Eckrodt", "Tim", "" ], [ "Frischlich", "Lena", "" ] ]
The unfolding of the COVID-19 pandemic has been an unprecedented challenge for news media around the globe. While journalism is meant to process yet unknown events by design, the dynamically evolving situation affected all aspects of life in such profound ways that even the routines of crisis reporting seemed to be insufficient. Critics noted tendencies to horse-race reporting and uncritical coverage, with journalism being too close to official statements and too affirmative of political decisions. However, empirical data on the performance of journalistic news media during the crisis has been lacking thus far. The current study analyzes the Facebook messages of journalistic news media during the early Coronavirus crisis, based on a large German data set from January to March 2020. Using computational content analysis methods, reach and interactions, topical structure, relevant actors, negativity of messages, as well as the coverage of fabricated news and conspiracy theories were examined. The topical structure of the near-time Facebook coverage changed during various stages of the crisis, with just partial support for the claims of critics. The initial stages were somewhat lacking in topical breadth, but later stages offered a broad range of coverage on Corona-related issues and societal concerns. Further, journalistic media covered fake news and conspiracy theories during the crisis, but they consistently contextualized them as what they were and debunked the false claims circulating in public. While some criticism regarding the performance of journalism during the crisis received mild empirical support, the analysis did not find overwhelming signs of systemic dysfunctionalities. Overall, journalistic media did not default to a uniform reaction nor to sprawling, information-poor pandemic news, but they responded with a multi-perspective coverage of the crisis.
1508.05133
Tamir Hazan
Tamir Hazan and Tommi Jaakkola
Steps Toward Deep Kernel Methods from Infinite Neural Networks
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context of deep, infinite neural networks. We show that deep infinite layers are naturally aligned with Gaussian processes and kernel methods, and devise stochastic kernels that encode the information of these networks. We show that stability results apply despite the size, offering an explanation for their empirical success.
[ { "created": "Thu, 20 Aug 2015 21:35:52 GMT", "version": "v1" }, { "created": "Wed, 2 Sep 2015 18:27:36 GMT", "version": "v2" } ]
2015-09-03
[ [ "Hazan", "Tamir", "" ], [ "Jaakkola", "Tommi", "" ] ]
Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context of deep, infinite neural networks. We show that deep infinite layers are naturally aligned with Gaussian processes and kernel methods, and devise stochastic kernels that encode the information of these networks. We show that stability results apply despite the size, offering an explanation for their empirical success.
cs/0607133
Peter Turney
Robert Ewaschuk, Peter D. Turney
Self-Replication and Self-Assembly for Manufacturing
Java code available at http://purl.org/net/johnnyvon/
Artificial Life, (2006), 12, 411-433
10.1162/artl.2006.12.3.411
NRC-48760
cs.MA cs.CE
null
It has been argued that a central objective of nanotechnology is to make products inexpensively, and that self-replication is an effective approach to very low-cost manufacturing. The research presented here is intended to be a step towards this vision. We describe a computational simulation of nanoscale machines floating in a virtual liquid. The machines can bond together to form strands (chains) that self-replicate and self-assemble into user-specified meshes. There are four types of machines and the sequence of machine types in a strand determines the shape of the mesh they will build. A strand may be in an unfolded state, in which the bonds are straight, or in a folded state, in which the bond angles depend on the types of machines. By choosing the sequence of machine types in a strand, the user can specify a variety of polygonal shapes. A simulation typically begins with an initial unfolded seed strand in a soup of unbonded machines. The seed strand replicates by bonding with free machines in the soup. The child strands fold into the encoded polygonal shape, and then the polygons drift together and bond to form a mesh. We demonstrate that a variety of polygonal meshes can be manufactured in the simulation, by simply changing the sequence of machine types in the seed.
[ { "created": "Thu, 27 Jul 2006 17:55:16 GMT", "version": "v1" } ]
2020-08-20
[ [ "Ewaschuk", "Robert", "" ], [ "Turney", "Peter D.", "" ] ]
It has been argued that a central objective of nanotechnology is to make products inexpensively, and that self-replication is an effective approach to very low-cost manufacturing. The research presented here is intended to be a step towards this vision. We describe a computational simulation of nanoscale machines floating in a virtual liquid. The machines can bond together to form strands (chains) that self-replicate and self-assemble into user-specified meshes. There are four types of machines and the sequence of machine types in a strand determines the shape of the mesh they will build. A strand may be in an unfolded state, in which the bonds are straight, or in a folded state, in which the bond angles depend on the types of machines. By choosing the sequence of machine types in a strand, the user can specify a variety of polygonal shapes. A simulation typically begins with an initial unfolded seed strand in a soup of unbonded machines. The seed strand replicates by bonding with free machines in the soup. The child strands fold into the encoded polygonal shape, and then the polygons drift together and bond to form a mesh. We demonstrate that a variety of polygonal meshes can be manufactured in the simulation, by simply changing the sequence of machine types in the seed.
2104.08575
Hangqi Zhou
Hangqi Zhou, Chao Huang, Shangqi Gao, Xiahai Zhuang
VSpSR: Explorable Super-Resolution via Variational Sparse Representation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Super-resolution (SR) is an ill-posed problem, which means that infinitely many high-resolution (HR) images can be degraded to the same low-resolution (LR) image. To study the one-to-many stochastic SR mapping, we implicitly represent the non-local self-similarity of natural images and develop a Variational Sparse framework for Super-Resolution (VSpSR) via neural networks. Since every small patch of an HR image can be well approximated by the sparse representation of atoms in an over-complete dictionary, we design a two-branch module, i.e., VSpM, to explore the SR space. Concretely, one branch of VSpM extracts a patch-level basis from the LR input, and the other branch infers pixel-wise variational distributions with respect to the sparse coefficients. By repeatedly sampling coefficients, we could obtain infinite sparse representations, and thus generate diverse HR images. According to the preliminary results of the NTIRE 2021 challenge on learning the SR space, our team (FudanZmic21) ranks 7th in terms of released scores. The implementation of VSpSR is released at https://zmiclab.github.io/.
[ { "created": "Sat, 17 Apr 2021 15:36:24 GMT", "version": "v1" } ]
2021-04-20
[ [ "Zhou", "Hangqi", "" ], [ "Huang", "Chao", "" ], [ "Gao", "Shangqi", "" ], [ "Zhuang", "Xiahai", "" ] ]
Super-resolution (SR) is an ill-posed problem, which means that infinitely many high-resolution (HR) images can be degraded to the same low-resolution (LR) image. To study the one-to-many stochastic SR mapping, we implicitly represent the non-local self-similarity of natural images and develop a Variational Sparse framework for Super-Resolution (VSpSR) via neural networks. Since every small patch of an HR image can be well approximated by the sparse representation of atoms in an over-complete dictionary, we design a two-branch module, i.e., VSpM, to explore the SR space. Concretely, one branch of VSpM extracts a patch-level basis from the LR input, and the other branch infers pixel-wise variational distributions with respect to the sparse coefficients. By repeatedly sampling coefficients, we could obtain infinite sparse representations, and thus generate diverse HR images. According to the preliminary results of the NTIRE 2021 challenge on learning the SR space, our team (FudanZmic21) ranks 7th in terms of released scores. The implementation of VSpSR is released at https://zmiclab.github.io/.
1310.6205
Nicolas Trotignon
St\'ephan Thomass\'e, Nicolas Trotignon, Kristina Vuskovi\'c
Parameterized algorithm for weighted independent set problem in bull-free graphs
null
Parameterized algorithm for weighted independent set problem in bull-free graphs. Algorithmica, November 2015, 1--23
10.1007/s00453-015-0083-x
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The maximum stable set problem is NP-hard, even when restricted to triangle-free graphs. In particular, one cannot expect a polynomial time algorithm deciding if a bull-free graph has a stable set of size $k$, when $k$ is part of the instance. Our main result in this paper is to show the existence of an FPT algorithm when we parameterize the problem by the solution size $k$. A polynomial kernel is unlikely to exist for this problem. We show however that our problem has a polynomial size Turing-kernel. More precisely, the hard cases are instances of size $O(k^5)$. As a byproduct, if we forbid odd holes in addition to the bull, we show the existence of a polynomial time algorithm for the stable set problem. We also prove that the chromatic number of a bull-free graph is bounded by a function of its clique number and the maximum chromatic number of its triangle-free induced subgraphs. All our results rely on a decomposition theorem of bull-free graphs due to Chudnovsky which is modified here, allowing us to provide extreme decompositions, adapted to our computational purpose.
[ { "created": "Wed, 23 Oct 2013 12:36:51 GMT", "version": "v1" }, { "created": "Fri, 21 Nov 2014 12:43:57 GMT", "version": "v2" } ]
2015-11-20
[ [ "Thomassé", "Stéphan", "" ], [ "Trotignon", "Nicolas", "" ], [ "Vusković", "Kristina", "" ] ]
The maximum stable set problem is NP-hard, even when restricted to triangle-free graphs. In particular, one cannot expect a polynomial time algorithm deciding if a bull-free graph has a stable set of size $k$, when $k$ is part of the instance. Our main result in this paper is to show the existence of an FPT algorithm when we parameterize the problem by the solution size $k$. A polynomial kernel is unlikely to exist for this problem. We show however that our problem has a polynomial size Turing-kernel. More precisely, the hard cases are instances of size $O(k^5)$. As a byproduct, if we forbid odd holes in addition to the bull, we show the existence of a polynomial time algorithm for the stable set problem. We also prove that the chromatic number of a bull-free graph is bounded by a function of its clique number and the maximum chromatic number of its triangle-free induced subgraphs. All our results rely on a decomposition theorem of bull-free graphs due to Chudnovsky which is modified here, allowing us to provide extreme decompositions, adapted to our computational purpose.
1811.01510
Marc Moreno Maza
Rui-Juan Jing, Marc Moreno-Maza, Delaram Talaashrafi
Complexity Estimates for Fourier-Motzkin Elimination
null
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new method for removing all the redundant inequalities generated by Fourier-Motzkin elimination. This method is based on an improved version of Balas' work and can also be used to remove all the redundant inequalities in the input system. Moreover, our method only uses arithmetic operations on matrices and avoids resorting to linear programming techniques. Algebraic complexity estimates and experimental results show that our method outperforms alternative approaches, in particular those based on linear programming and the simplex algorithm.
[ { "created": "Mon, 5 Nov 2018 04:41:42 GMT", "version": "v1" }, { "created": "Fri, 10 May 2019 23:20:47 GMT", "version": "v2" } ]
2019-05-14
[ [ "Jing", "Rui-Juan", "" ], [ "Moreno-Maza", "Marc", "" ], [ "Talaashrafi", "Delaram", "" ] ]
In this paper, we propose a new method for removing all the redundant inequalities generated by Fourier-Motzkin elimination. This method is based on an improved version of Balas' work and can also be used to remove all the redundant inequalities in the input system. Moreover, our method only uses arithmetic operations on matrices and avoids resorting to linear programming techniques. Algebraic complexity estimates and experimental results show that our method outperforms alternative approaches, in particular those based on linear programming and the simplex algorithm.
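The abstract above concerns removing the redundant inequalities that Fourier-Motzkin elimination produces. For context, a minimal sketch of plain Fourier-Motzkin elimination of one variable (without the paper's Balas-style redundancy removal, and with hypothetical function and parameter names) might look like:

```python
from fractions import Fraction

def fourier_motzkin_eliminate(rows, j):
    """Eliminate variable j from a system of linear inequalities.
    Each row is (coeffs, b), meaning sum(coeffs[k] * x[k]) <= b.
    Returns a new system over the remaining variables (column j dropped).
    Note: this naive combination step is exactly what generates the
    redundant inequalities the paper's method is designed to remove.
    """
    pos, neg, zero = [], [], []
    for coeffs, b in rows:
        c = coeffs[j]
        if c > 0:
            pos.append((coeffs, b))
        elif c < 0:
            neg.append((coeffs, b))
        else:
            zero.append((coeffs, b))
    # Rows not involving x_j survive unchanged (column j dropped).
    out = [([c for k, c in enumerate(cs) if k != j], b) for cs, b in zero]
    # Combine each positive row with each negative row so x_j cancels.
    for pc, pb in pos:
        for nc, nb in neg:
            a, c = Fraction(pc[j]), Fraction(-nc[j])
            comb = [c * Fraction(pc[k]) + a * Fraction(nc[k])
                    for k in range(len(pc)) if k != j]
            out.append((comb, c * Fraction(pb) + a * Fraction(nb)))
    return out
```

Exact rational arithmetic (`Fraction`) is used so the sketch stays within "arithmetic operations on matrices" rather than floating-point linear programming.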
2405.07671
Martin Berglund
Martin Berglund, Willeke Martens, Brink van der Merwe
Constructing a BPE Tokenization DFA
null
null
null
null
cs.FL cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many natural language processing systems operate over tokenizations of text to address the open-vocabulary problem. In this paper, we give and analyze an algorithm for the efficient construction of deterministic finite automata designed to operate directly on tokenizations produced by the popular byte pair encoding technique. This makes it possible to apply many existing techniques and algorithms to the tokenized case, such as pattern matching, equivalence checking of tokenization dictionaries, and composing tokenized languages in various ways.
[ { "created": "Mon, 13 May 2024 11:59:24 GMT", "version": "v1" } ]
2024-05-14
[ [ "Berglund", "Martin", "" ], [ "Martens", "Willeke", "" ], [ "van der Merwe", "Brink", "" ] ]
Many natural language processing systems operate over tokenizations of text to address the open-vocabulary problem. In this paper, we give and analyze an algorithm for the efficient construction of deterministic finite automata designed to operate directly on tokenizations produced by the popular byte pair encoding technique. This makes it possible to apply many existing techniques and algorithms to the tokenized case, such as pattern matching, equivalence checking of tokenization dictionaries, and composing tokenized languages in various ways.
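The record above concerns automata operating on byte pair encoding tokenizations. For context, a minimal sketch of the greedy BPE encoding step itself (a generic encoder, not the paper's DFA construction; the merge table below is hypothetical) might look like:

```python
def bpe_tokenize(word, merges):
    """Greedy BPE: repeatedly apply the highest-priority merge rule.
    `merges` is an ordered list of symbol pairs; earlier = higher priority.
    """
    rank = {pair: i for i, pair in enumerate(merges)}
    tokens = list(word)
    while len(tokens) > 1:
        # Find the adjacent pair with the best (lowest) merge rank.
        pairs = [(rank.get((tokens[i], tokens[i + 1]), float('inf')), i)
                 for i in range(len(tokens) - 1)]
        best_rank, i = min(pairs)
        if best_rank == float('inf'):
            break  # no applicable merge rule remains
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens

# Hypothetical merge table: 'l'+'o' -> 'lo', then 'lo'+'w' -> 'low'.
merges = [("l", "o"), ("lo", "w")]
```

A DFA built to accept exactly the token sequences this procedure can emit is what enables the pattern matching and dictionary-equivalence applications the abstract mentions.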
1403.1956
Hanum Putri Permatasari
Hanum Putri Permatasari, Silvia Harlena, Donny Erlangga, Reza Chandra
Effect of Social Media on Website Popularity: Differences between Public and Private Universities in Indonesia
6 pages, 3 figures, 5 tables
World of Computer Science and Information Technology Journal (WCSIT), ISSN: 2221-0741, Vol. 3, No. 2, pp. 32-37, 2013
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media has become important for enhancing social networking and sharing information through websites. It has not only changed social networking but also provides a valuable tool for social organization, activism, politics, healthcare, and even academic relations within universities. The researchers conducted the present study with two objectives: (a) to examine the academic use of social media by universities, and (b) to measure the popularity and visibility of social media owned by universities. The study was delimited to universities in Indonesia. The population consisted of both public and private universities. The sample comprised a total of 264 universities ranked in both the Webometrics and 4ICU July 2012 editions. The social media examined included Facebook, Twitter, Flickr, LinkedIn, YouTube, Wikipedia, blogs, social network communities owned by the universities, and Open Course Ware. Data were collected and measured using Alexa and Majestic SEO. Data were analyzed using the Pearson chi-square test for social media ownership (ordinal data) and the independent t-test for examining the effects of social media on website popularity. The study revealed that the majority of social media users used Facebook, followed by Twitter. There were also significant differences in popularity, as measured by Alexa Rank, and visibility, as measured by Majestic SEO, between universities that did and did not use social media.
[ { "created": "Sat, 8 Mar 2014 10:06:57 GMT", "version": "v1" } ]
2014-03-11
[ [ "Permatasari", "Hanum Putri", "" ], [ "Harlena", "Silvia", "" ], [ "Erlangga", "Donny", "" ], [ "Chandra", "Reza", "" ] ]
Social media has become important for enhancing social networking and sharing information through websites. It has not only changed social networking but also provides a valuable tool for social organization, activism, politics, healthcare, and even academic relations within universities. The researchers conducted the present study with two objectives: (a) to examine the academic use of social media by universities, and (b) to measure the popularity and visibility of social media owned by universities. The study was delimited to universities in Indonesia. The population consisted of both public and private universities. The sample comprised a total of 264 universities ranked in both the Webometrics and 4ICU July 2012 editions. The social media examined included Facebook, Twitter, Flickr, LinkedIn, YouTube, Wikipedia, blogs, social network communities owned by the universities, and Open Course Ware. Data were collected and measured using Alexa and Majestic SEO. Data were analyzed using the Pearson chi-square test for social media ownership (ordinal data) and the independent t-test for examining the effects of social media on website popularity. The study revealed that the majority of social media users used Facebook, followed by Twitter. There were also significant differences in popularity, as measured by Alexa Rank, and visibility, as measured by Majestic SEO, between universities that did and did not use social media.
2007.03730
Ping-Yeh Chiang
Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
Detection as Regression: Certified Object Detection by Median Smoothing
null
null
null
null
cs.CV cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing, which is of independent interest. We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks. The code for all experiments in the paper is available at http://github.com/Ping-C/CertifiedObjectDetection .
[ { "created": "Tue, 7 Jul 2020 18:40:19 GMT", "version": "v1" }, { "created": "Fri, 7 Aug 2020 22:13:31 GMT", "version": "v2" }, { "created": "Wed, 25 Nov 2020 16:43:49 GMT", "version": "v3" }, { "created": "Fri, 25 Feb 2022 14:23:54 GMT", "version": "v4" } ]
2022-02-28
[ [ "Chiang", "Ping-yeh", "" ], [ "Curry", "Michael J.", "" ], [ "Abdelkader", "Ahmed", "" ], [ "Kumar", "Aounon", "" ], [ "Dickerson", "John", "" ], [ "Goldstein", "Tom", "" ] ]
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing, which is of independent interest. We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks. The code for all experiments in the paper is available at http://github.com/Ping-C/CertifiedObjectDetection .
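The record above reduces certified object detection to certified regression via median smoothing. A minimal scalar sketch of median smoothing (function names and parameters are illustrative, not the paper's implementation) might look like:

```python
import random
import statistics

def median_smooth(f, x, sigma=0.1, n=1000, seed=0):
    """Median-smoothed version of a scalar regressor f: return the
    empirical median of f(x + noise) over Gaussian input noise.
    Unlike mean smoothing, the median admits percentile-based
    certified bounds and is robust to extreme outputs of f.
    """
    rng = random.Random(seed)
    samples = [f(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    return statistics.median(samples)
```

The robustness to outliers, illustrated by the second assertion below, is the intuition for why median smoothing succeeds where standard mean smoothing fails for regression outputs such as bounding-box coordinates.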
1901.05311
Chao Zhai
Chao Zhai, Hehong Zhang, Gaoxi Xiao and Tso-Chien Pan
Contingency Identification of Cascading Failures in Power Transmission Networks
null
null
null
null
cs.SY nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the evolving nature of power systems and the complicated coupling relationships among power devices, it has been a great challenge to identify the contingencies that could trigger cascading blackouts of power systems. This paper provides an effective approach to identifying the initial contingency in power transmission networks, which are equipped with flexible alternating current transmission system (FACTS) devices, high-voltage direct current (HVDC) links and protective relays. Essentially, the problem of contingency identification is formulated in the framework of nonlinear programming, which can be solved by the Jacobian-Free Newton-Krylov (JFNK) method to circumvent the Jacobian matrix and reduce the computational cost. Notably, the proposed identification approach is also applied to complicated cascading failure models of power systems. Finally, numerical simulations are carried out to validate the proposed identification approach on the IEEE 118-bus system. The proposed approach succeeds in reconciling the rigorous optimization formulation with the practical modelling of cascading blackouts.
[ { "created": "Tue, 15 Jan 2019 03:21:24 GMT", "version": "v1" } ]
2019-01-17
[ [ "Zhai", "Chao", "" ], [ "Zhang", "Hehong", "" ], [ "Xiao", "Gaoxi", "" ], [ "Pan", "Tso-Chien", "" ] ]
Due to the evolving nature of power systems and the complicated coupling relationships among power devices, it has been a great challenge to identify the contingencies that could trigger cascading blackouts of power systems. This paper provides an effective approach to identifying the initial contingency in power transmission networks, which are equipped with flexible alternating current transmission system (FACTS) devices, high-voltage direct current (HVDC) links and protective relays. Essentially, the problem of contingency identification is formulated in the framework of nonlinear programming, which can be solved by the Jacobian-Free Newton-Krylov (JFNK) method to circumvent the Jacobian matrix and reduce the computational cost. Notably, the proposed identification approach is also applied to complicated cascading failure models of power systems. Finally, numerical simulations are carried out to validate the proposed identification approach on the IEEE 118-bus system. The proposed approach succeeds in reconciling the rigorous optimization formulation with the practical modelling of cascading blackouts.
0806.1397
Xianmin Feng
Xianmin Ming, Jiansheng Yang
The Improvement of the Bound on Hash Family
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study bounds on three kinds of hash families using the Singleton bound. For an $\epsilon-U(N; n, m)$ hash family, in the case of $n>m^2>1$ and $1\geq\epsilon\geq \epsilon_1(n, m)$, we show that the new bound is better. For an $\epsilon-\bigtriangleup U(N; n, m)$ hash family, in the case of $n>m>1$ and $1\geq\epsilon\geq\epsilon_3(n,m)$, the new bound is better. For an $\epsilon-SU(N; n, m)$ hash family, in the case of $n>2^m>2$ and $1\geq\epsilon\geq \epsilon_4(n, m)$, we show that the new bound is better.
[ { "created": "Mon, 9 Jun 2008 08:26:01 GMT", "version": "v1" } ]
2008-12-18
[ [ "Ming", "Xianmin", "" ], [ "Yang", "Jiansheng", "" ] ]
In this paper, we study bounds on three kinds of hash families using the Singleton bound. For an $\epsilon-U(N; n, m)$ hash family, in the case of $n>m^2>1$ and $1\geq\epsilon\geq \epsilon_1(n, m)$, we show that the new bound is better. For an $\epsilon-\bigtriangleup U(N; n, m)$ hash family, in the case of $n>m>1$ and $1\geq\epsilon\geq\epsilon_3(n,m)$, the new bound is better. For an $\epsilon-SU(N; n, m)$ hash family, in the case of $n>2^m>2$ and $1\geq\epsilon\geq \epsilon_4(n, m)$, we show that the new bound is better.
1410.7756
Wenliang Du
Xing Jin, Tongbo Luo, Derek G. Tsui, Wenliang Du
Code Injection Attacks on HTML5-based Mobile Apps
In Proceedings of the Third Workshop on Mobile Security Technologies (MoST) 2014 (http://arxiv.org/abs/1410.6674)
null
null
MoST/2014/11
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
HTML5-based mobile apps are becoming more and more popular, mostly because they are much easier to port across different mobile platforms than native apps. HTML5-based apps are implemented using standard web technologies, including HTML5, JavaScript and CSS; they depend on middleware, such as PhoneGap, to interact with the underlying OS. Knowing that JavaScript is subject to code injection attacks, we have conducted a systematic study of HTML5-based mobile apps, trying to evaluate whether it is safe to rely on web technologies for mobile app development. Our discoveries are quite surprising. We found that if HTML5-based mobile apps become popular--current projections suggest they will--many of the things that we normally do today may become dangerous, including reading 2D barcodes, scanning Wi-Fi access points, playing MP4 videos, pairing with Bluetooth devices, etc. This paper describes how HTML5-based apps can become vulnerable, how attackers can exploit their vulnerabilities through a variety of channels, and what damage attackers can inflict. In addition to demonstrating the attacks through example apps, we studied 186 PhoneGap plugins, used by apps to achieve a variety of functionalities, and found that 11 are vulnerable. We also found two real HTML5-based apps that are vulnerable to the attacks.
[ { "created": "Tue, 28 Oct 2014 19:24:11 GMT", "version": "v1" } ]
2014-10-29
[ [ "Jin", "Xing", "" ], [ "Luo", "Tongbo", "" ], [ "Tsui", "Derek G.", "" ], [ "Du", "Wenliang", "" ] ]
HTML5-based mobile apps are becoming more and more popular, mostly because they are much easier to port across different mobile platforms than native apps. HTML5-based apps are implemented using standard web technologies, including HTML5, JavaScript and CSS; they depend on middleware, such as PhoneGap, to interact with the underlying OS. Knowing that JavaScript is subject to code injection attacks, we have conducted a systematic study of HTML5-based mobile apps, trying to evaluate whether it is safe to rely on web technologies for mobile app development. Our discoveries are quite surprising. We found that if HTML5-based mobile apps become popular--current projections suggest they will--many of the things that we normally do today may become dangerous, including reading 2D barcodes, scanning Wi-Fi access points, playing MP4 videos, pairing with Bluetooth devices, etc. This paper describes how HTML5-based apps can become vulnerable, how attackers can exploit their vulnerabilities through a variety of channels, and what damage attackers can inflict. In addition to demonstrating the attacks through example apps, we studied 186 PhoneGap plugins, used by apps to achieve a variety of functionalities, and found that 11 are vulnerable. We also found two real HTML5-based apps that are vulnerable to the attacks.
2212.07145
Michael Gundall
Michael Gundall, Julius Raphael Stegmann, Christopher Huber, R\"udiger Halfmann, Hans Dieter Schotten
Implementation and Evaluation of the RBIS Protocol in 5G
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
5G and 6G wireless communications enable novel and disruptive applications. While 5G was strongly focused on improvements to QoS and the QoS guarantees that are necessary for industrial deployments, 6G will have a disruptive impact on possible use cases. Nearly every such use case requires time synchronization of the involved systems. While PTP and its variants, e.g. IEEE 1588 v2.1 or IEEE 802.1AS, have established themselves as the standard for wireline systems, time synchronization of wireless or hybrid systems is still a subject of research. Thus, the so-called RBIS protocol, which was originally developed and investigated for Wi-Fi, is mapped to 5G. This is possible because both systems are infrastructure-based and a suitable broadcast that fits the requirements of the RBIS protocol can be found in the control layer of 5G NR. Even if the 1 microsecond requirement imposed by some applications is not yet met, the accuracy of 1.3 microseconds and precision of <4.3 microseconds achieved by non-invasively extending existing 5G deployments are highly promising.
[ { "created": "Wed, 14 Dec 2022 10:35:50 GMT", "version": "v1" } ]
2022-12-15
[ [ "Gundall", "Michael", "" ], [ "Stegmann", "Julius Raphael", "" ], [ "Huber", "Christopher", "" ], [ "Halfmann", "Rüdiger", "" ], [ "Schotten", "Hans Dieter", "" ] ]
5G and 6G wireless communications enable novel and disruptive applications. While 5G was strongly focused on improvements to QoS and the QoS guarantees that are necessary for industrial deployments, 6G will have a disruptive impact on possible use cases. Nearly every such use case requires time synchronization of the involved systems. While PTP and its variants, e.g. IEEE 1588 v2.1 or IEEE 802.1AS, have established themselves as the standard for wireline systems, time synchronization of wireless or hybrid systems is still a subject of research. Thus, the so-called RBIS protocol, which was originally developed and investigated for Wi-Fi, is mapped to 5G. This is possible because both systems are infrastructure-based and a suitable broadcast that fits the requirements of the RBIS protocol can be found in the control layer of 5G NR. Even if the 1 microsecond requirement imposed by some applications is not yet met, the accuracy of 1.3 microseconds and precision of <4.3 microseconds achieved by non-invasively extending existing 5G deployments are highly promising.
2107.13190
Aoting Hu
Aoting Hu, Renjie Xie, Zhigang Lu, Aiqun Hu, Minhui Xue
TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing
Accepted to ACM CCS 2021
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Network (GAN)-synthesized table publishing lets people privately learn insights without access to the private table. However, existing studies on Membership Inference (MI) attacks show promising results on disclosing membership of the training datasets of GAN-synthesized tables. Different from those works, which focus on discovering the membership of a given data point, in this paper we propose a novel Membership Collision Attack against GANs (TableGAN-MCA), which allows an adversary, given only synthetic entries randomly sampled from a black-box generator, to recover partial GAN training data. Namely, a GAN-synthesized table immune to state-of-the-art MI attacks is vulnerable to TableGAN-MCA. The success of TableGAN-MCA is boosted by the observation that GAN-synthesized tables potentially collide with the training data of the generator. Our experimental evaluations of TableGAN-MCA yield five main findings. First, TableGAN-MCA achieves a satisfying training data recovery rate on three commonly used real-world datasets against four generative models. Second, factors including the size of the GAN training data, the number of GAN training epochs, and the number of synthetic samples available to the adversary are positively correlated with the success of TableGAN-MCA. Third, highly frequent data points have a high risk of being recovered by TableGAN-MCA. Fourth, some unique data points are exposed to unexpectedly high recovery risks in TableGAN-MCA, which may be attributed to the GAN's generalization. Fifth, as expected, differential privacy, without consideration of the correlations between features, does not show a commendable mitigation effect against TableGAN-MCA. Finally, we propose two mitigation methods and show promising privacy-utility trade-offs when protecting against TableGAN-MCA.
[ { "created": "Wed, 28 Jul 2021 06:43:36 GMT", "version": "v1" } ]
2021-07-29
[ [ "Hu", "Aoting", "" ], [ "Xie", "Renjie", "" ], [ "Lu", "Zhigang", "" ], [ "Hu", "Aiqun", "" ], [ "Xue", "Minhui", "" ] ]
Generative Adversarial Network (GAN)-synthesized table publishing lets people privately learn insights without access to the private table. However, existing studies on Membership Inference (MI) attacks show promising results on disclosing membership of the training datasets of GAN-synthesized tables. Different from those works, which focus on discovering the membership of a given data point, in this paper we propose a novel Membership Collision Attack against GANs (TableGAN-MCA), which allows an adversary, given only synthetic entries randomly sampled from a black-box generator, to recover partial GAN training data. Namely, a GAN-synthesized table immune to state-of-the-art MI attacks is vulnerable to TableGAN-MCA. The success of TableGAN-MCA is boosted by the observation that GAN-synthesized tables potentially collide with the training data of the generator. Our experimental evaluations of TableGAN-MCA yield five main findings. First, TableGAN-MCA achieves a satisfying training data recovery rate on three commonly used real-world datasets against four generative models. Second, factors including the size of the GAN training data, the number of GAN training epochs, and the number of synthetic samples available to the adversary are positively correlated with the success of TableGAN-MCA. Third, highly frequent data points have a high risk of being recovered by TableGAN-MCA. Fourth, some unique data points are exposed to unexpectedly high recovery risks in TableGAN-MCA, which may be attributed to the GAN's generalization. Fifth, as expected, differential privacy, without consideration of the correlations between features, does not show a commendable mitigation effect against TableGAN-MCA. Finally, we propose two mitigation methods and show promising privacy-utility trade-offs when protecting against TableGAN-MCA.
2010.01083
Caelan Garrett
Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling and Tom\'as Lozano-P\'erez
Integrated Task and Motion Planning
Accepted to the Annual Review of Control, Robotics, and Autonomous Systems. Vol. 4 (Volume publication date May 2021)
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP). TAMP problems contain elements of discrete task planning, discrete-continuous mathematical programming, and continuous motion planning, and thus cannot be effectively addressed by any of these fields directly. In this paper, we define a class of TAMP problems and survey algorithms for solving them, characterizing the solution methods in terms of their strategies for solving the continuous-space subproblems and their techniques for integrating the discrete and continuous components of the search.
[ { "created": "Fri, 2 Oct 2020 16:23:08 GMT", "version": "v1" } ]
2020-10-05
[ [ "Garrett", "Caelan Reed", "" ], [ "Chitnis", "Rohan", "" ], [ "Holladay", "Rachel", "" ], [ "Kim", "Beomjoon", "" ], [ "Silver", "Tom", "" ], [ "Kaelbling", "Leslie Pack", "" ], [ "Lozano-Pérez", "Tomás", "" ] ]
The problem of planning for a robot that operates in environments containing a large number of objects, taking actions to move itself through the world as well as to change the state of the objects, is known as task and motion planning (TAMP). TAMP problems contain elements of discrete task planning, discrete-continuous mathematical programming, and continuous motion planning, and thus cannot be effectively addressed by any of these fields directly. In this paper, we define a class of TAMP problems and survey algorithms for solving them, characterizing the solution methods in terms of their strategies for solving the continuous-space subproblems and their techniques for integrating the discrete and continuous components of the search.
1905.04770
Will Ma
Will Ma, David Simchi-Levi
Algorithms for Online Matching, Assortment, and Pricing with Tight Weight-dependent Competitive Ratios
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the dynamic assortment offerings and item pricings occurring in e-commerce, we study a general problem of allocating finite inventories to heterogeneous customers arriving sequentially. We analyze this problem under the framework of competitive analysis, where the sequence of customers is unknown and does not necessarily follow any pattern. Previous work in this area, studying online matching, advertising, and assortment problems, has focused on the case where each item can only be sold at a single price, resulting in algorithms which achieve the best-possible competitive ratio of 1-1/e. In this paper, we extend all of these results to allow for items having multiple feasible prices. Our algorithms achieve the best-possible weight-dependent competitive ratios, which depend on the sets of feasible prices given in advance. Our algorithms are also simple and intuitive; they are based on constructing a class of universal ``value functions'' which integrate the selection of items and prices offered. Finally, we test our algorithms on the publicly-available hotel data set of Bodea et al. (2009), where there are multiple items (hotel rooms) each with multiple prices (fares at which the room could be sold). We find that applying our algorithms, as a ``hybrid'' with algorithms which attempt to forecast and learn the future transactions, results in the best performance.
[ { "created": "Sun, 12 May 2019 18:59:53 GMT", "version": "v1" } ]
2019-05-14
[ [ "Ma", "Will", "" ], [ "Simchi-Levi", "David", "" ] ]
Motivated by the dynamic assortment offerings and item pricings occurring in e-commerce, we study a general problem of allocating finite inventories to heterogeneous customers arriving sequentially. We analyze this problem under the framework of competitive analysis, where the sequence of customers is unknown and does not necessarily follow any pattern. Previous work in this area, studying online matching, advertising, and assortment problems, has focused on the case where each item can only be sold at a single price, resulting in algorithms which achieve the best-possible competitive ratio of 1-1/e. In this paper, we extend all of these results to allow for items having multiple feasible prices. Our algorithms achieve the best-possible weight-dependent competitive ratios, which depend on the sets of feasible prices given in advance. Our algorithms are also simple and intuitive; they are based on constructing a class of universal ``value functions'' which integrate the selection of items and prices offered. Finally, we test our algorithms on the publicly-available hotel data set of Bodea et al. (2009), where there are multiple items (hotel rooms) each with multiple prices (fares at which the room could be sold). We find that applying our algorithms, as a ``hybrid'' with algorithms which attempt to forecast and learn the future transactions, results in the best performance.
1102.5699
Rachit Mohan Garg
Rachit Mohan Garg, Yamini Sood, Neha Tyagi
Ontology based approach for video transmission over the network
7 pages, 2 figures, 4 tables
The International journal of Multimedia & Its Applications (IJMA) Vol.3, No.1, February 2011
10.5121/ijma.2011.3106
null
cs.MM cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increase in bandwidth and transmission speed over the internet, transmission of multimedia objects like video, audio, and images has become easier. In this paper we provide an approach that can be useful for the transmission of video objects over the internet without much fuss. The approach provides an ontology-based framework that is used to establish automatic deployment of a video transmission system. Further, the video is compressed using a structural flow mechanism that uses the wavelet principle for compression of video frames. Finally, a video transmission algorithm known as the RRDBFSF algorithm is provided that makes use of the concept of restrictive flooding to avoid redundancy, thereby increasing efficiency.
[ { "created": "Mon, 28 Feb 2011 16:13:20 GMT", "version": "v1" } ]
2011-03-01
[ [ "Garg", "Rachit Mohan", "" ], [ "Sood", "Yamini", "" ], [ "Tyagi", "Neha", "" ] ]
With the increase in bandwidth and transmission speed over the internet, transmission of multimedia objects like video, audio, and images has become easier. In this paper we provide an approach that can be useful for the transmission of video objects over the internet without much fuss. The approach provides an ontology-based framework that is used to establish automatic deployment of a video transmission system. Further, the video is compressed using a structural flow mechanism that uses the wavelet principle for compression of video frames. Finally, a video transmission algorithm known as the RRDBFSF algorithm is provided that makes use of the concept of restrictive flooding to avoid redundancy, thereby increasing efficiency.
2006.13401
Xinshi Chen
Xinshi Chen, Yufei Zhang, Christoph Reisinger, Le Song
Understanding Deep Architectures with Reasoning Layer
34th Conference on Neural Information Processing Systems (NeurIPS 2020)
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, there has been a surge of interest in combining deep learning models with reasoning in order to handle more sophisticated learning tasks. In many cases, a reasoning task can be solved by an iterative algorithm. This algorithm is often unrolled, and used as a specialized layer in the deep architecture, which can be trained end-to-end with other neural components. Although such hybrid deep architectures have led to many empirical successes, the theoretical foundation of such architectures, especially the interplay between algorithm layers and other neural layers, remains largely unexplored. In this paper, we take an initial step towards an understanding of such hybrid deep architectures by showing that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model. Furthermore, our analysis matches closely our experimental observations under various conditions, suggesting that our theory can provide useful guidelines for designing deep architectures with reasoning layers.
[ { "created": "Wed, 24 Jun 2020 00:26:35 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2020 22:00:00 GMT", "version": "v2" } ]
2020-11-02
[ [ "Chen", "Xinshi", "" ], [ "Zhang", "Yufei", "" ], [ "Reisinger", "Christoph", "" ], [ "Song", "Le", "" ] ]
Recently, there has been a surge of interest in combining deep learning models with reasoning in order to handle more sophisticated learning tasks. In many cases, a reasoning task can be solved by an iterative algorithm. This algorithm is often unrolled, and used as a specialized layer in the deep architecture, which can be trained end-to-end with other neural components. Although such hybrid deep architectures have led to many empirical successes, the theoretical foundation of such architectures, especially the interplay between algorithm layers and other neural layers, remains largely unexplored. In this paper, we take an initial step towards an understanding of such hybrid deep architectures by showing that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model. Furthermore, our analysis matches closely our experimental observations under various conditions, suggesting that our theory can provide useful guidelines for designing deep architectures with reasoning layers.
2201.05716
EPTCS
P\'eter Bereczky (E\"otv\"os Lor\'and University, Hungary), Xiaohong Chen (University of Illinois at Urbana-Champaign, USA), D\'aniel Horp\'acsi (E\"otv\"os Lor\'and University, Hungary), Lucas Pe\~na (University of Illinois at Urbana-Champaign, USA), Jan Tu\v{s}il (Masaryk University, Czechia)
Mechanizing Matching Logic In Coq
In Proceedings FROM 2022, arXiv:2209.09208
EPTCS 369, 2022, pp. 17-36
10.4204/EPTCS.369.2
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Matching logic is a formalism for specifying, and reasoning about, mathematical structures, using patterns and pattern matching. Growing in popularity, it has been used to define many logical systems such as separation logic with recursive definitions and linear temporal logic. In addition, it serves as the logical foundation of the K semantic framework, which was used to build practical verifiers for a number of real-world languages. Despite being a fundamental formal system accommodating substantial theories, matching logic lacks a general-purpose, machine-checked formalization. Hence, we formalize matching logic using the Coq proof assistant. Specifically, we create a new representation of matching logic that uses a locally nameless encoding, and we formalize the syntax, semantics, and proof system of this representation in the Coq proof assistant. Crucially, we prove the soundness of the formalized proof system and provide a means to carry out interactive matching logic reasoning in Coq. We believe this work provides a previously unexplored avenue for reasoning about matching logic, its models, and the proof system.
[ { "created": "Sat, 15 Jan 2022 00:06:17 GMT", "version": "v1" }, { "created": "Wed, 19 Jan 2022 16:04:53 GMT", "version": "v2" }, { "created": "Sun, 1 May 2022 09:58:07 GMT", "version": "v3" }, { "created": "Wed, 21 Sep 2022 13:55:16 GMT", "version": "v4" } ]
2022-09-22
[ [ "Bereczky", "Péter", "", "Eötvös Loránd University, Hungary" ], [ "Chen", "Xiaohong", "", "University of Illinois at Urbana-Champaign, USA" ], [ "Horpácsi", "Dániel", "", "Eötvös Loránd University, Hungary" ], [ "Peña", "Lucas", "", "University of Illinois at Urbana-Champaign, USA" ], [ "Tušil", "Jan", "", "Masaryk University, Czechia" ] ]
Matching logic is a formalism for specifying, and reasoning about, mathematical structures, using patterns and pattern matching. Growing in popularity, it has been used to define many logical systems such as separation logic with recursive definitions and linear temporal logic. In addition, it serves as the logical foundation of the K semantic framework, which was used to build practical verifiers for a number of real-world languages. Despite being a fundamental formal system accommodating substantial theories, matching logic lacks a general-purpose, machine-checked formalization. Hence, we formalize matching logic using the Coq proof assistant. Specifically, we create a new representation of matching logic that uses a locally nameless encoding, and we formalize the syntax, semantics, and proof system of this representation in the Coq proof assistant. Crucially, we prove the soundness of the formalized proof system and provide a means to carry out interactive matching logic reasoning in Coq. We believe this work provides a previously unexplored avenue for reasoning about matching logic, its models, and the proof system.
2310.11117
Jianchao Tan
Huan Yuan, Chao Liao, Jianchao Tan, Peng Yao, Jiyuan Jia, Bin Chen, Chengru Song, Di Zhang
USDC: Unified Static and Dynamic Compression for Visual Transformer
This paper was actually finished in 2021
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Visual Transformers have achieved great success in almost all vision tasks, such as classification, detection, and so on. However, the model complexity and the inference speed of visual transformers hinder their deployment in industrial products. Various model compression techniques focus on directly compressing a visual transformer into a smaller one while maintaining the model performance; however, the performance drops dramatically when the compression ratio is large. Furthermore, several dynamic network techniques have also been applied to dynamically compress visual transformers into input-adaptive efficient sub-structures during the inference stage, which can achieve a better trade-off between the compression ratio and the model performance. However, the upper bound on the memory of dynamic models is not reduced in practical deployment, since the whole original visual transformer model and the additional control gating modules must be loaded onto devices together for inference. To alleviate the disadvantages of these two categories of methods, we propose to unify static and dynamic compression techniques to jointly obtain an input-adaptive compressed model, which can better balance the total compression ratio and the model performance. Moreover, in practical deployment, the batch sizes of the training and inference stages usually differ, which causes the model's inference performance to be worse than its training performance, an issue not addressed by previous dynamic network papers. We propose a sub-group gates augmentation technique to solve this performance drop problem. Extensive experiments demonstrate the superiority of our method on various baseline visual transformers such as DeiT, T2T-ViT, and so on.
[ { "created": "Tue, 17 Oct 2023 10:04:47 GMT", "version": "v1" } ]
2023-10-18
[ [ "Yuan", "Huan", "" ], [ "Liao", "Chao", "" ], [ "Tan", "Jianchao", "" ], [ "Yao", "Peng", "" ], [ "Jia", "Jiyuan", "" ], [ "Chen", "Bin", "" ], [ "Song", "Chengru", "" ], [ "Zhang", "Di", "" ] ]
Visual Transformers have achieved great success in almost all vision tasks, such as classification, detection, and so on. However, the model complexity and the inference speed of visual transformers hinder their deployment in industrial products. Various model compression techniques focus on directly compressing a visual transformer into a smaller one while maintaining the model performance; however, the performance drops dramatically when the compression ratio is large. Furthermore, several dynamic network techniques have also been applied to dynamically compress visual transformers into input-adaptive efficient sub-structures during the inference stage, which can achieve a better trade-off between the compression ratio and the model performance. However, the upper bound on the memory of dynamic models is not reduced in practical deployment, since the whole original visual transformer model and the additional control gating modules must be loaded onto devices together for inference. To alleviate the disadvantages of these two categories of methods, we propose to unify static and dynamic compression techniques to jointly obtain an input-adaptive compressed model, which can better balance the total compression ratio and the model performance. Moreover, in practical deployment, the batch sizes of the training and inference stages usually differ, which causes the model's inference performance to be worse than its training performance, an issue not addressed by previous dynamic network papers. We propose a sub-group gates augmentation technique to solve this performance drop problem. Extensive experiments demonstrate the superiority of our method on various baseline visual transformers such as DeiT, T2T-ViT, and so on.
2306.03929
Stratis Tsirtsis
Stratis Tsirtsis, Manuel Gomez-Rodriguez
Finding Counterfactually Optimal Action Sequences in Continuous State Spaces
Accepted at NeurIPS 2023
null
null
null
cs.LG cs.AI cs.CY stat.ML
http://creativecommons.org/licenses/by/4.0/
Whenever a clinician reflects on the efficacy of a sequence of treatment decisions for a patient, they may try to identify critical time steps where, had they made different decisions, the patient's health would have improved. While recent methods at the intersection of causal inference and reinforcement learning promise to aid human experts, as the clinician above, to retrospectively analyze sequential decision making processes, they have focused on environments with finitely many discrete states. However, in many practical applications, the state of the environment is inherently continuous in nature. In this paper, we aim to fill this gap. We start by formally characterizing a sequence of discrete actions and continuous states using finite horizon Markov decision processes and a broad class of bijective structural causal models. Building upon this characterization, we formalize the problem of finding counterfactually optimal action sequences and show that, in general, we cannot expect to solve it in polynomial time. Then, we develop a search method based on the $A^*$ algorithm that, under a natural form of Lipschitz continuity of the environment's dynamics, is guaranteed to return the optimal solution to the problem. Experiments on real clinical data show that our method is very efficient in practice, and it has the potential to offer interesting insights for sequential decision making tasks.
[ { "created": "Tue, 6 Jun 2023 18:00:29 GMT", "version": "v1" }, { "created": "Mon, 6 Nov 2023 11:01:28 GMT", "version": "v2" } ]
2023-11-07
[ [ "Tsirtsis", "Stratis", "" ], [ "Gomez-Rodriguez", "Manuel", "" ] ]
Whenever a clinician reflects on the efficacy of a sequence of treatment decisions for a patient, they may try to identify critical time steps where, had they made different decisions, the patient's health would have improved. While recent methods at the intersection of causal inference and reinforcement learning promise to aid human experts, as the clinician above, to retrospectively analyze sequential decision making processes, they have focused on environments with finitely many discrete states. However, in many practical applications, the state of the environment is inherently continuous in nature. In this paper, we aim to fill this gap. We start by formally characterizing a sequence of discrete actions and continuous states using finite horizon Markov decision processes and a broad class of bijective structural causal models. Building upon this characterization, we formalize the problem of finding counterfactually optimal action sequences and show that, in general, we cannot expect to solve it in polynomial time. Then, we develop a search method based on the $A^*$ algorithm that, under a natural form of Lipschitz continuity of the environment's dynamics, is guaranteed to return the optimal solution to the problem. Experiments on real clinical data show that our method is very efficient in practice, and it has the potential to offer interesting insights for sequential decision making tasks.
1512.00312
Anton Aristov
Anton Aristov
The Quasi cellular nets-based models of transport and logistic systems
null
Reports of the XXIII International Scientific Symposium Miner's Week, 2015, Moscow: NUST MISIS, pp. 280-287
null
null
cs.OH
http://creativecommons.org/licenses/by-nc-sa/4.0/
Many systems in different domains, such as industry, medicine, transport, and social systems, can be described in terms of their flow dynamics. Current flow models fall into micro- and macro-models, and in practice there is a problem of converting between these levels of simulation. In previous articles, the author described quasi cellular nets, a new type of discrete structure without a signature that can be used as a simulation instrument. These structures can simulate flows at both the micro and macro levels within a single model structure. This article describes the use of quasi cellular nets in the transport and logistics of open-cast mining.
[ { "created": "Fri, 27 Nov 2015 01:49:02 GMT", "version": "v1" } ]
2015-12-04
[ [ "Aristov", "Anton", "" ] ]
Many systems in different domains, such as industry, medicine, transport, and social systems, can be described in terms of their flow dynamics. Current flow models fall into micro- and macro-models, and in practice there is a problem of converting between these levels of simulation. In previous articles, the author described quasi cellular nets, a new type of discrete structure without a signature that can be used as a simulation instrument. These structures can simulate flows at both the micro and macro levels within a single model structure. This article describes the use of quasi cellular nets in the transport and logistics of open-cast mining.
2108.12129
Keshav Srinivasan
Keshav Srinivasan, Nolan Coble, Joy Hamlin, Thomas Antonsen, Edward Ott and Michelle Girvan
Parallel Machine Learning for Forecasting the Dynamics of Complex Networks
null
null
10.1103/PhysRevLett.128.164101
null
cs.LG nlin.CD
http://creativecommons.org/licenses/by/4.0/
Forecasting the dynamics of large complex networks from previous time-series data is important in a wide range of contexts. Here we present a machine learning scheme for this task using a parallel architecture that mimics the topology of the network of interest. We demonstrate the utility and scalability of our method implemented using reservoir computing on a chaotic network of oscillators. Two levels of prior knowledge are considered: (i) the network links are known; and (ii) the network links are unknown and inferred via a data-driven approach to approximately optimize prediction.
[ { "created": "Fri, 27 Aug 2021 06:06:41 GMT", "version": "v1" } ]
2022-05-04
[ [ "Srinivasan", "Keshav", "" ], [ "Coble", "Nolan", "" ], [ "Hamlin", "Joy", "" ], [ "Antonsen", "Thomas", "" ], [ "Ott", "Edward", "" ], [ "Girvan", "Michelle", "" ] ]
Forecasting the dynamics of large complex networks from previous time-series data is important in a wide range of contexts. Here we present a machine learning scheme for this task using a parallel architecture that mimics the topology of the network of interest. We demonstrate the utility and scalability of our method implemented using reservoir computing on a chaotic network of oscillators. Two levels of prior knowledge are considered: (i) the network links are known; and (ii) the network links are unknown and inferred via a data-driven approach to approximately optimize prediction.
1707.04520
Wanming Hao
Wanming Hao, Ming Zeng, Zheng Chu, Shouyi Yang
Energy-Efficient Power Allocation in Millimeter Wave Massive MIMO with Non-Orthogonal Multiple Access
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter, we investigate the energy efficiency (EE) problem in a millimeter wave (mmWave) massive MIMO (mMIMO) system with non-orthogonal multiple access (NOMA). Multiple two-user clusters are formulated according to their channel correlation and gain difference. Following this, we propose a hybrid analog/digital precoding scheme for the low radio frequency (RF) chains structure at the base station (BS). On this basis, we formulate a power allocation problem aiming to maximize the EE under users' quality of service (QoS) requirements and per-cluster power constraint. An iterative algorithm is proposed to obtain the optimal power allocation. Simulation results show that the proposed NOMA achieves superior EE performance than that of conventional OMA.
[ { "created": "Fri, 14 Jul 2017 14:24:16 GMT", "version": "v1" } ]
2017-07-17
[ [ "Hao", "Wanming", "" ], [ "Zeng", "Ming", "" ], [ "Chu", "Zheng", "" ], [ "Yang", "Shouyi", "" ] ]
In this letter, we investigate the energy efficiency (EE) problem in a millimeter wave (mmWave) massive MIMO (mMIMO) system with non-orthogonal multiple access (NOMA). Multiple two-user clusters are formulated according to their channel correlation and gain difference. Following this, we propose a hybrid analog/digital precoding scheme for the low radio frequency (RF) chains structure at the base station (BS). On this basis, we formulate a power allocation problem aiming to maximize the EE under users' quality of service (QoS) requirements and per-cluster power constraint. An iterative algorithm is proposed to obtain the optimal power allocation. Simulation results show that the proposed NOMA achieves superior EE performance than that of conventional OMA.
2207.02370
Tianhong Li
Tianhong Li, Lijie Fan, Yuan Yuan, Dina Katabi
Unsupervised Learning for Human Sensing Using Radio Signals
WACV 2022. The first three authors contributed equally to this paper
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
There is a growing literature demonstrating the feasibility of using Radio Frequency (RF) signals to enable key computer vision tasks in the presence of occlusions and poor lighting. It leverages that RF signals traverse walls and occlusions to deliver through-wall pose estimation, action recognition, scene captioning, and human re-identification. However, unlike RGB datasets which can be labeled by human workers, labeling RF signals is a daunting task because such signals are not human interpretable. Yet, it is fairly easy to collect unlabelled RF signals. It would be highly beneficial to use such unlabeled RF data to learn useful representations in an unsupervised manner. Thus, in this paper, we explore the feasibility of adapting RGB-based unsupervised representation learning to RF signals. We show that while contrastive learning has emerged as the main technique for unsupervised representation learning from images and videos, such methods produce poor performance when applied to sensing humans using RF signals. In contrast, predictive unsupervised learning methods learn high-quality representations that can be used for multiple downstream RF-based sensing tasks. Our empirical results show that this approach outperforms state-of-the-art RF-based human sensing on various tasks, opening the possibility of unsupervised representation learning from this novel modality.
[ { "created": "Wed, 6 Jul 2022 00:28:18 GMT", "version": "v1" } ]
2022-07-07
[ [ "Li", "Tianhong", "" ], [ "Fan", "Lijie", "" ], [ "Yuan", "Yuan", "" ], [ "Katabi", "Dina", "" ] ]
There is a growing literature demonstrating the feasibility of using Radio Frequency (RF) signals to enable key computer vision tasks in the presence of occlusions and poor lighting. It leverages that RF signals traverse walls and occlusions to deliver through-wall pose estimation, action recognition, scene captioning, and human re-identification. However, unlike RGB datasets which can be labeled by human workers, labeling RF signals is a daunting task because such signals are not human interpretable. Yet, it is fairly easy to collect unlabelled RF signals. It would be highly beneficial to use such unlabeled RF data to learn useful representations in an unsupervised manner. Thus, in this paper, we explore the feasibility of adapting RGB-based unsupervised representation learning to RF signals. We show that while contrastive learning has emerged as the main technique for unsupervised representation learning from images and videos, such methods produce poor performance when applied to sensing humans using RF signals. In contrast, predictive unsupervised learning methods learn high-quality representations that can be used for multiple downstream RF-based sensing tasks. Our empirical results show that this approach outperforms state-of-the-art RF-based human sensing on various tasks, opening the possibility of unsupervised representation learning from this novel modality.
1612.09087
Farshad Roohbakhshan
Farshad Roohbakhshan and Roger A. Sauer
Efficient isogeometric thin shell formulations for soft biological materials
Typos are removed. Remark 3.4 is added. Eq. (18) in the previous version is removed. Thus, the equations get renumbered. Example 5.5 is updated. Minor typos in Eqs. (17), (80), (145) and (146), are corrected. They do not affect the results
Biomech Model Mechanobiol (2017) 16:1569
10.1007/s10237-017-0906-6
null
cs.CE cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents three different constitutive approaches to model thin rotation-free shells based on the Kirchhoff-Love hypothesis. One approach is based on numerical integration through the shell thickness while the other two approaches do not need any numerical integration and so they are computationally more efficient. The formulation is designed for large deformations and allows for geometrical and material nonlinearities, which makes it very suitable for the modeling of soft tissues. Furthermore, six different isotropic and anisotropic material models, which are commonly used to model soft biological materials, are examined for the three proposed constitutive approaches. Following an isogeometric approach, NURBS-based finite elements are used for the discretization of the shell surface. Several numerical examples are investigated to demonstrate the capabilities of the formulation. Those include the contact simulation during balloon angioplasty.
[ { "created": "Thu, 29 Dec 2016 10:02:43 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2017 14:33:41 GMT", "version": "v2" } ]
2017-10-25
[ [ "Roohbakhshan", "Farshad", "" ], [ "Sauer", "Roger A.", "" ] ]
This paper presents three different constitutive approaches to model thin rotation-free shells based on the Kirchhoff-Love hypothesis. One approach is based on numerical integration through the shell thickness while the other two approaches do not need any numerical integration and so they are computationally more efficient. The formulation is designed for large deformations and allows for geometrical and material nonlinearities, which makes it very suitable for the modeling of soft tissues. Furthermore, six different isotropic and anisotropic material models, which are commonly used to model soft biological materials, are examined for the three proposed constitutive approaches. Following an isogeometric approach, NURBS-based finite elements are used for the discretization of the shell surface. Several numerical examples are investigated to demonstrate the capabilities of the formulation. Those include the contact simulation during balloon angioplasty.
2402.06266
Peter Vamplew
Peter Vamplew, Cameron Foale, Richard Dazeley
Value function interference and greedy action selection in value-based multi-objective reinforcement learning
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Multi-objective reinforcement learning (MORL) algorithms extend conventional reinforcement learning (RL) to the more general case of problems with multiple, conflicting objectives, represented by vector-valued rewards. Widely-used scalar RL methods such as Q-learning can be modified to handle multiple objectives by (1) learning vector-valued value functions, and (2) performing action selection using a scalarisation or ordering operator which reflects the user's utility with respect to the different objectives. However, as we demonstrate here, if the user's utility function maps widely varying vector-values to similar levels of utility, this can lead to interference in the value-function learned by the agent, leading to convergence to sub-optimal policies. This will be most prevalent in stochastic environments when optimising for the Expected Scalarised Return criterion, but we present a simple example showing that interference can also arise in deterministic environments. We demonstrate empirically that avoiding the use of random tie-breaking when identifying greedy actions can ameliorate, but not fully overcome, the problems caused by value function interference.
[ { "created": "Fri, 9 Feb 2024 09:28:01 GMT", "version": "v1" } ]
2024-02-12
[ [ "Vamplew", "Peter", "" ], [ "Foale", "Cameron", "" ], [ "Dazeley", "Richard", "" ] ]
Multi-objective reinforcement learning (MORL) algorithms extend conventional reinforcement learning (RL) to the more general case of problems with multiple, conflicting objectives, represented by vector-valued rewards. Widely-used scalar RL methods such as Q-learning can be modified to handle multiple objectives by (1) learning vector-valued value functions, and (2) performing action selection using a scalarisation or ordering operator which reflects the user's utility with respect to the different objectives. However, as we demonstrate here, if the user's utility function maps widely varying vector-values to similar levels of utility, this can lead to interference in the value-function learned by the agent, leading to convergence to sub-optimal policies. This will be most prevalent in stochastic environments when optimising for the Expected Scalarised Return criterion, but we present a simple example showing that interference can also arise in deterministic environments. We demonstrate empirically that avoiding the use of random tie-breaking when identifying greedy actions can ameliorate, but not fully overcome, the problems caused by value function interference.
2205.01464
Andrew Drozdov
Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo
Inducing and Using Alignments for Transition-based AMR Parsing
Accepted at NAACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transition-based parsers for Abstract Meaning Representation (AMR) rely on node-to-word alignments. These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints. Parsers also train on a point-estimate of the alignment pipeline, neglecting the uncertainty due to the inherent ambiguity of alignment. In this work we explore two avenues for overcoming these limitations. First, we propose a neural aligner for AMR that learns node-to-word alignments without relying on complex pipelines. We subsequently explore a tighter integration of aligner and parser training by considering a distribution over oracle action sequences arising from aligner uncertainty. Empirical results show this approach leads to more accurate alignments and generalizes better from the AMR2.0 to AMR3.0 corpora. We attain a new state of the art for gold-only trained models, matching silver-trained performance without the need for beam search on AMR3.0.
[ { "created": "Tue, 3 May 2022 12:58:36 GMT", "version": "v1" } ]
2022-05-04
[ [ "Drozdov", "Andrew", "" ], [ "Zhou", "Jiawei", "" ], [ "Florian", "Radu", "" ], [ "McCallum", "Andrew", "" ], [ "Naseem", "Tahira", "" ], [ "Kim", "Yoon", "" ], [ "Astudillo", "Ramon Fernandez", "" ] ]
Transition-based parsers for Abstract Meaning Representation (AMR) rely on node-to-word alignments. These alignments are learned separately from parser training and require a complex pipeline of rule-based components, pre-processing, and post-processing to satisfy domain-specific constraints. Parsers also train on a point-estimate of the alignment pipeline, neglecting the uncertainty due to the inherent ambiguity of alignment. In this work we explore two avenues for overcoming these limitations. First, we propose a neural aligner for AMR that learns node-to-word alignments without relying on complex pipelines. We subsequently explore a tighter integration of aligner and parser training by considering a distribution over oracle action sequences arising from aligner uncertainty. Empirical results show this approach leads to more accurate alignments and generalizes better from the AMR2.0 to AMR3.0 corpora. We attain a new state of the art for gold-only trained models, matching silver-trained performance without the need for beam search on AMR3.0.