Schema (column: type, length range):
id: stringlengths, 9-10
submitter: stringlengths, 1-64
authors: stringlengths, 4-20.7k
title: stringlengths, 4-246
comments: stringlengths, 1-523
journal-ref: stringlengths, 4-404
doi: stringlengths, 11-153
report-no: stringlengths, 2-254
categories: stringlengths, 5-98
license: stringclasses, 9 values
orig_abstract: stringlengths, 14-3.35k
versions: listlengths, 1-60
update_date: stringlengths, 10-10
authors_parsed: listlengths, 1-1.35k
abstract: stringlengths, 11-3.34k
1508.03101
Jason McEwen
J. D. McEwen, M. B\"uttner, B. Leistedt, H. V. Peiris and Y. Wiaux
A novel sampling theorem on the rotation group
5 pages, 2 figures, minor changes to match version accepted for publication. Code available at http://www.sothree.org
IEEE Signal Processing Letters. Vol. 22, No. 12, 2015, pp 2425-2429
10.1109/LSP.2015.2490676
null
cs.IT astro-ph.IM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a novel sampling theorem for functions defined on the three-dimensional rotation group SO(3) by connecting the rotation group to the three-torus through a periodic extension. Our sampling theorem requires $4L^3$ samples to capture all of the information content of a signal band-limited at $L$, reducing the number of required samples by a factor of two compared to other equiangular sampling theorems. We present fast algorithms to compute the associated Fourier transform on the rotation group, the so-called Wigner transform, which scale as $O(L^4)$, compared to the naive scaling of $O(L^6)$. For the common case of a low directional band-limit $N$, complexity is reduced to $O(N L^3)$. Our fast algorithms will be of direct use in speeding up the computation of directional wavelet transforms on the sphere. We make our SO3 code implementing these algorithms publicly available.
[ { "created": "Thu, 13 Aug 2015 02:11:23 GMT", "version": "v1" }, { "created": "Fri, 8 Jan 2016 12:23:41 GMT", "version": "v2" } ]
2016-01-11
[ [ "McEwen", "J. D.", "" ], [ "Büttner", "M.", "" ], [ "Leistedt", "B.", "" ], [ "Peiris", "H. V.", "" ], [ "Wiaux", "Y.", "" ] ]
We develop a novel sampling theorem for functions defined on the three-dimensional rotation group SO(3) by connecting the rotation group to the three-torus through a periodic extension. Our sampling theorem requires $4L^3$ samples to capture all of the information content of a signal band-limited at $L$, reducing the number of required samples by a factor of two compared to other equiangular sampling theorems. We present fast algorithms to compute the associated Fourier transform on the rotation group, the so-called Wigner transform, which scale as $O(L^4)$, compared to the naive scaling of $O(L^6)$. For the common case of a low directional band-limit $N$, complexity is reduced to $O(N L^3)$. Our fast algorithms will be of direct use in speeding up the computation of directional wavelet transforms on the sphere. We make our SO3 code implementing these algorithms publicly available.
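The sample counts in the abstract above are easy to sanity-check numerically (an illustrative sketch; the $8L^3$ figure for competing equiangular theorems is inferred from the stated factor-of-two reduction):

```python
def so3_sample_count(L: int) -> int:
    """Samples required by the proposed SO(3) sampling theorem at band-limit L."""
    return 4 * L ** 3

def other_equiangular_count(L: int) -> int:
    """Twice as many samples (inferred from the abstract's factor-of-two claim)."""
    return 8 * L ** 3

# Example: at band-limit L = 128 the proposed theorem needs 8,388,608 samples,
# half of what other equiangular sampling theorems would require.
L = 128
print(so3_sample_count(L))                                   # 8388608
print(other_equiangular_count(L) // so3_sample_count(L))     # 2
```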
1007.1484
David Eppstein
Erin Chambers and David Eppstein
Flows in One-Crossing-Minor-Free Graphs
16 pages, 4 figures
J. Graph Algorithms & Applications 17(3): 201-220, 2013
10.7155/jgaa.00291
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the maximum flow problem in directed H-minor-free graphs where H can be drawn in the plane with one crossing. If a structural decomposition of the graph as a clique-sum of planar graphs and graphs of constant complexity is given, we show that a maximum flow can be computed in O(n log n) time. In particular, maximum flows in directed K_{3,3}-minor-free graphs and directed K_5-minor-free graphs can be computed in O(n log n) time without additional assumptions.
[ { "created": "Thu, 8 Jul 2010 23:24:59 GMT", "version": "v1" } ]
2015-07-16
[ [ "Chambers", "Erin", "" ], [ "Eppstein", "David", "" ] ]
We study the maximum flow problem in directed H-minor-free graphs where H can be drawn in the plane with one crossing. If a structural decomposition of the graph as a clique-sum of planar graphs and graphs of constant complexity is given, we show that a maximum flow can be computed in O(n log n) time. In particular, maximum flows in directed K_{3,3}-minor-free graphs and directed K_5-minor-free graphs can be computed in O(n log n) time without additional assumptions.
1501.06158
Adi Vardi
Yossi Azar and Adi Vardi
TSP with Time Windows and Service Time
arXiv admin note: substantial text overlap with arXiv:1309.0251
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider TSP with time windows and service time. In this problem we receive a sequence of requests for a service at nodes in a metric space and a time window for each request. The goal of the online algorithm is to maximize the number of requests served during their time window. The time to traverse an edge is the distance between the incident nodes of that edge. Serving a request requires unit time. We characterize the competitive ratio for each metric space separately. The competitive ratio depends on the relation between the minimum laxity (the minimum length of a time window) and the diameter of the metric space. Specifically, whether there is a constant-competitive algorithm depends on whether the laxity is larger or smaller than the diameter. In addition, we characterize the rate of convergence of the competitive ratio to $1$ as the laxity increases. Specifically, we provide matching lower and upper bounds depending on the ratio between the laxity and the TSP of the metric space (the minimum distance to traverse all nodes). An application of our result improves the lower bound for colored packets with transition cost and matches the upper bound. In proving our lower bounds we use an interesting non-standard embedding with some special properties. This embedding may be of independent interest.
[ { "created": "Sun, 25 Jan 2015 13:52:36 GMT", "version": "v1" } ]
2015-01-27
[ [ "Azar", "Yossi", "" ], [ "Vardi", "Adi", "" ] ]
We consider TSP with time windows and service time. In this problem we receive a sequence of requests for a service at nodes in a metric space and a time window for each request. The goal of the online algorithm is to maximize the number of requests served during their time window. The time to traverse an edge is the distance between the incident nodes of that edge. Serving a request requires unit time. We characterize the competitive ratio for each metric space separately. The competitive ratio depends on the relation between the minimum laxity (the minimum length of a time window) and the diameter of the metric space. Specifically, whether there is a constant-competitive algorithm depends on whether the laxity is larger or smaller than the diameter. In addition, we characterize the rate of convergence of the competitive ratio to $1$ as the laxity increases. Specifically, we provide matching lower and upper bounds depending on the ratio between the laxity and the TSP of the metric space (the minimum distance to traverse all nodes). An application of our result improves the lower bound for colored packets with transition cost and matches the upper bound. In proving our lower bounds we use an interesting non-standard embedding with some special properties. This embedding may be of independent interest.
2103.16025
Sen Tian
Sen Tian, Panos Ipeirotis
On the Predictability of Utilizing Rank Percentile to Evaluate Scientific Impact
The dataset and the code to reproduce the results in this paper are available online at https://github.com/sentian/SciImpactRanking
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bibliographic metrics are commonly utilized for evaluation purposes within academia, often in conjunction with other metrics. These metrics vary widely across fields and change with the seniority of the scholar; consequently, the only way to interpret these values is by comparison with other academics within the same field who are of similar seniority. Among the field- and time-normalized indicators, rank percentile has grown in popularity, and it is preferred over other types of indicators. In this paper, we propose and justify a novel rank percentile indicator for scholars. Furthermore, we emphasize the time factor that is built into the rank percentile, and we demonstrate that the rank percentile is highly predictable. The publication percentile is highly stable over time, while the scholar percentile exhibits short-term stability and can be predicted via a simple linear regression model. More advanced models that utilize extensive lists of features offer slightly superior performance; however, the simplicity and interpretability of the simple model confer significant advantages over the additional complexity of other models.
[ { "created": "Tue, 30 Mar 2021 02:07:58 GMT", "version": "v1" }, { "created": "Fri, 9 Jul 2021 04:04:23 GMT", "version": "v2" }, { "created": "Thu, 16 Dec 2021 06:39:16 GMT", "version": "v3" } ]
2021-12-17
[ [ "Tian", "Sen", "" ], [ "Ipeirotis", "Panos", "" ] ]
Bibliographic metrics are commonly utilized for evaluation purposes within academia, often in conjunction with other metrics. These metrics vary widely across fields and change with the seniority of the scholar; consequently, the only way to interpret these values is by comparison with other academics within the same field who are of similar seniority. Among the field- and time-normalized indicators, rank percentile has grown in popularity, and it is preferred over other types of indicators. In this paper, we propose and justify a novel rank percentile indicator for scholars. Furthermore, we emphasize the time factor that is built into the rank percentile, and we demonstrate that the rank percentile is highly predictable. The publication percentile is highly stable over time, while the scholar percentile exhibits short-term stability and can be predicted via a simple linear regression model. More advanced models that utilize extensive lists of features offer slightly superior performance; however, the simplicity and interpretability of the simple model confer significant advantages over the additional complexity of other models.
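A minimal sketch of the kind of simple linear regression the abstract says suffices to predict a scholar's percentile: fit percentile against career year by ordinary least squares and extrapolate one year ahead. The year/percentile values below are hypothetical, not taken from the paper's data.

```python
def fit_simple_ols(x, y):
    """Ordinary least squares for y ~ a*x + b (a single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical rank percentiles of one scholar over five career years.
years = [1, 2, 3, 4, 5]
percentile = [0.52, 0.55, 0.57, 0.60, 0.62]
a, b = fit_simple_ols(years, percentile)
pred_year6 = a * 6 + b   # extrapolated percentile for year 6
```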
1612.08534
Fang Zhao
Fang Zhao, Jiashi Feng, Jian Zhao, Wenhan Yang, Shuicheng Yan
Robust LSTM-Autoencoders for Face De-Occlusion in the Wild
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face recognition techniques have developed significantly in recent years. However, recognizing faces with partial occlusion is still challenging for existing face recognizers, yet such robustness is heavily desired in real-world applications concerning surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them can only work well under constrained conditions, such as when all the faces come from a pre-defined closed set. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aim at occlusion-robust face encoding and recurrent occlusion removal, respectively. The first one, named the multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation, and occlusion-robustness is achieved owing to the fact that the influence of occlusion falls on only some of the patches. Receiving the representation learned by the encoder, the LSTM decoder with a dual-channel architecture reconstructs the overall face and detects occlusion simultaneously, and by virtue of the LSTM, the decoder breaks down the task of face de-occlusion into restoring the occluded part step by step. Moreover, to minimize identity information loss and guarantee face recognition accuracy over recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real datasets of faces with occlusion clearly demonstrate the effectiveness of our proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides a significantly larger performance gain than other de-occlusion methods in promoting recognition performance over partially occluded faces.
[ { "created": "Tue, 27 Dec 2016 08:36:48 GMT", "version": "v1" } ]
2016-12-28
[ [ "Zhao", "Fang", "" ], [ "Feng", "Jiashi", "" ], [ "Zhao", "Jian", "" ], [ "Yang", "Wenhan", "" ], [ "Yan", "Shuicheng", "" ] ]
Face recognition techniques have developed significantly in recent years. However, recognizing faces with partial occlusion is still challenging for existing face recognizers, yet such robustness is heavily desired in real-world applications concerning surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them can only work well under constrained conditions, such as when all the faces come from a pre-defined closed set. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aim at occlusion-robust face encoding and recurrent occlusion removal, respectively. The first one, named the multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially to output a latent representation, and occlusion-robustness is achieved owing to the fact that the influence of occlusion falls on only some of the patches. Receiving the representation learned by the encoder, the LSTM decoder with a dual-channel architecture reconstructs the overall face and detects occlusion simultaneously, and by virtue of the LSTM, the decoder breaks down the task of face de-occlusion into restoring the occluded part step by step. Moreover, to minimize identity information loss and guarantee face recognition accuracy over recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real datasets of faces with occlusion clearly demonstrate the effectiveness of our proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides a significantly larger performance gain than other de-occlusion methods in promoting recognition performance over partially occluded faces.
2209.14599
Toan Pham Van
Toan Pham Van, Linh Bao Doan, Thanh Tung Nguyen, Duc Trung Tran, Quan Van Nguyen, Dinh Viet Sang
Online pseudo labeling for polyp segmentation with momentum networks
Accepted in KSE 2022
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Semantic segmentation is an essential task in developing medical image diagnosis systems. However, building an annotated medical dataset is expensive. Thus, semi-supervised methods are significant in this circumstance. In semi-supervised learning, the quality of labels plays a crucial role in model performance. In this work, we present a new pseudo labeling strategy that enhances the quality of pseudo labels used for training student networks. We follow the multi-stage semi-supervised training approach, which trains a teacher model on a labeled dataset and then uses the trained teacher to render pseudo labels for student training. By doing so, the pseudo labels will be updated and become more precise as training progresses. The key difference between previous methods and ours is that we update the teacher model during the student training process, so the quality of the pseudo labels improves as the student trains. We also propose a simple but effective strategy to enhance the quality of pseudo labels using a momentum model, a slow copy of the original model maintained during training. By applying the momentum model combined with re-rendering pseudo labels during student training, we achieved an average of 84.1% Dice Score on five datasets (i.e., Kvasir, CVC-ClinicDB, ETIS-LaribPolypDB, CVC-ColonDB, and CVC-300) with only 20% of the dataset used as labeled data. Our results surpass common practice by 3% and even approach fully-supervised results on some datasets. Our source code and pre-trained models are available at https://github.com/sun-asterisk-research/online learning ssl
[ { "created": "Thu, 29 Sep 2022 07:33:54 GMT", "version": "v1" } ]
2022-09-30
[ [ "Van", "Toan Pham", "" ], [ "Doan", "Linh Bao", "" ], [ "Nguyen", "Thanh Tung", "" ], [ "Tran", "Duc Trung", "" ], [ "Van Nguyen", "Quan", "" ], [ "Sang", "Dinh Viet", "" ] ]
Semantic segmentation is an essential task in developing medical image diagnosis systems. However, building an annotated medical dataset is expensive. Thus, semi-supervised methods are significant in this circumstance. In semi-supervised learning, the quality of labels plays a crucial role in model performance. In this work, we present a new pseudo labeling strategy that enhances the quality of pseudo labels used for training student networks. We follow the multi-stage semi-supervised training approach, which trains a teacher model on a labeled dataset and then uses the trained teacher to render pseudo labels for student training. By doing so, the pseudo labels will be updated and become more precise as training progresses. The key difference between previous methods and ours is that we update the teacher model during the student training process, so the quality of the pseudo labels improves as the student trains. We also propose a simple but effective strategy to enhance the quality of pseudo labels using a momentum model, a slow copy of the original model maintained during training. By applying the momentum model combined with re-rendering pseudo labels during student training, we achieved an average of 84.1% Dice Score on five datasets (i.e., Kvasir, CVC-ClinicDB, ETIS-LaribPolypDB, CVC-ColonDB, and CVC-300) with only 20% of the dataset used as labeled data. Our results surpass common practice by 3% and even approach fully-supervised results on some datasets. Our source code and pre-trained models are available at https://github.com/sun-asterisk-research/online learning ssl
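The momentum model described above, a slow copy of the original network, is typically maintained as an exponential moving average of the student's weights. A minimal sketch (the coefficient m = 0.9 and the toy weight vectors are illustrative assumptions, not the paper's settings):

```python
def momentum_update(slow_w, fast_w, m=0.9):
    """One EMA step: the slow (momentum) weights drift toward the fast weights."""
    return [m * s + (1 - m) * f for s, f in zip(slow_w, fast_w)]

slow = [0.0, 0.0]   # momentum (teacher) weights
fast = [1.0, 2.0]   # student weights, held fixed here for illustration
for _ in range(100):
    slow = momentum_update(slow, fast)
# After many steps the momentum weights closely track the student's weights.
```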
2007.07316
Nisarg Shah
Safwan Hossain, Nisarg Shah
The Effect of Strategic Noise in Linear Regression
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We build on an emerging line of work which studies strategic manipulations in training data provided to machine learning algorithms. Specifically, we focus on the ubiquitous task of linear regression. Prior work focused on the design of strategyproof algorithms, which aim to prevent such manipulations altogether by aligning the incentives of data sources. However, algorithms used in practice are often not strategyproof, which induces a strategic game among the agents. We focus on a broad class of non-strategyproof algorithms for linear regression, namely $\ell_p$ norm minimization ($p > 1$) with convex regularization. We show that when manipulations are bounded, every algorithm in this class admits a unique pure Nash equilibrium outcome. We also shed light on the structure of this equilibrium by uncovering a surprising connection between strategyproof algorithms and pure Nash equilibria of non-strategyproof algorithms in a broader setting, which may be of independent interest. Finally, we analyze the quality of equilibria under these algorithms in terms of the price of anarchy.
[ { "created": "Tue, 14 Jul 2020 19:28:19 GMT", "version": "v1" } ]
2020-07-16
[ [ "Hossain", "Safwan", "" ], [ "Shah", "Nisarg", "" ] ]
We build on an emerging line of work which studies strategic manipulations in training data provided to machine learning algorithms. Specifically, we focus on the ubiquitous task of linear regression. Prior work focused on the design of strategyproof algorithms, which aim to prevent such manipulations altogether by aligning the incentives of data sources. However, algorithms used in practice are often not strategyproof, which induces a strategic game among the agents. We focus on a broad class of non-strategyproof algorithms for linear regression, namely $\ell_p$ norm minimization ($p > 1$) with convex regularization. We show that when manipulations are bounded, every algorithm in this class admits a unique pure Nash equilibrium outcome. We also shed light on the structure of this equilibrium by uncovering a surprising connection between strategyproof algorithms and pure Nash equilibria of non-strategyproof algorithms in a broader setting, which may be of independent interest. Finally, we analyze the quality of equilibria under these algorithms in terms of the price of anarchy.
1901.03285
Fabian Steiner
Alexandru Dominic Git, Bal\'azs Matuz, Fabian Steiner
Protograph-Based LDPC Code Design for Probabilistic Shaping with On-Off Keying
Invited Paper for CISS 2019
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work investigates protograph-based LDPC codes for the AWGN channel with OOK modulation. A non-uniform distribution of the OOK modulation symbols is considered to improve the power efficiency especially for low SNRs. To this end, a specific transmitter architecture based on time sharing is proposed that allows probabilistic shaping of (some) OOK modulation symbols. Tailored protograph-based LDPC code designs outperform standard schemes with uniform signaling and off-the-shelf codes by 1.1 dB for a transmission rate of 0.25 bits/channel use.
[ { "created": "Thu, 10 Jan 2019 17:29:35 GMT", "version": "v1" } ]
2019-01-11
[ [ "Git", "Alexandru Dominic", "" ], [ "Matuz", "Balázs", "" ], [ "Steiner", "Fabian", "" ] ]
This work investigates protograph-based LDPC codes for the AWGN channel with OOK modulation. A non-uniform distribution of the OOK modulation symbols is considered to improve the power efficiency especially for low SNRs. To this end, a specific transmitter architecture based on time sharing is proposed that allows probabilistic shaping of (some) OOK modulation symbols. Tailored protograph-based LDPC code designs outperform standard schemes with uniform signaling and off-the-shelf codes by 1.1 dB for a transmission rate of 0.25 bits/channel use.
2105.10880
Yang Li
Yang Li, Hermawan Mulyono, Ying Chen, Zhiyin Lu, Desmond Chan
RtFPS: An Interactive Map that Visualizes and Predicts Wildfires in the US
Source code: https://github.com/yangland/rtfps
null
null
null
cs.LG cs.HC cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Climate change has largely impacted our daily lives. As one of its consequences, we are experiencing more wildfires. In 2020, wildfires burned a record 8,888,297 acres in the US. To raise awareness of climate change and to visualize the current risk of wildfires, we developed RtFPS, a "Real-Time Fire Prediction System". It provides a real-time visualization of predicted wildfire risk at specific locations based on a machine learning model. It also provides interactive map features that show historical wildfire events along with environmental information.
[ { "created": "Sun, 23 May 2021 08:07:01 GMT", "version": "v1" }, { "created": "Tue, 22 Jun 2021 01:58:27 GMT", "version": "v2" } ]
2021-06-23
[ [ "Li", "Yang", "" ], [ "Mulyono", "Hermawan", "" ], [ "Chen", "Ying", "" ], [ "Lu", "Zhiyin", "" ], [ "Chan", "Desmond", "" ] ]
Climate change has largely impacted our daily lives. As one of its consequences, we are experiencing more wildfires. In 2020, wildfires burned a record 8,888,297 acres in the US. To raise awareness of climate change and to visualize the current risk of wildfires, we developed RtFPS, a "Real-Time Fire Prediction System". It provides a real-time visualization of predicted wildfire risk at specific locations based on a machine learning model. It also provides interactive map features that show historical wildfire events along with environmental information.
2403.10842
Mohammad Ali Labbaf Khaniki
Mohammad Ali Labbaf-Khaniki, Mohammad Manthouri
Twin Transformer using Gated Dynamic Learnable Attention mechanism for Fault Detection and Diagnosis in the Tennessee Eastman Process
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fault detection and diagnosis (FDD) is a crucial task for ensuring the safety and efficiency of industrial processes. We propose a novel FDD methodology for the Tennessee Eastman Process (TEP), a widely used benchmark for chemical process control. The model employs two separate Transformer branches, enabling independent processing of input data and potential extraction of diverse information. A novel attention mechanism, Gated Dynamic Learnable Attention (GDLAttention), is introduced which integrates a gating mechanism and dynamic learning capabilities. The gating mechanism modulates the attention weights, allowing the model to focus on the most relevant parts of the input. The dynamic learning approach adapts the attention strategy during training, potentially leading to improved performance. The attention mechanism uses a bilinear similarity function, providing greater flexibility in capturing complex relationships between query and key vectors. In order to assess the effectiveness of our approach, we tested it against 21 and 18 distinct fault scenarios in TEP, and compared its performance with several established FDD techniques. The outcomes indicate that the method outperforms others in terms of accuracy, false alarm rate, and misclassification rate. This underscores the robustness and efficacy of the approach for FDD in intricate industrial processes.
[ { "created": "Sat, 16 Mar 2024 07:40:23 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 18:37:10 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2024 07:04:49 GMT", "version": "v3" } ]
2024-06-24
[ [ "Labbaf-Khaniki", "Mohammad Ali", "" ], [ "Manthouri", "Mohammad", "" ] ]
Fault detection and diagnosis (FDD) is a crucial task for ensuring the safety and efficiency of industrial processes. We propose a novel FDD methodology for the Tennessee Eastman Process (TEP), a widely used benchmark for chemical process control. The model employs two separate Transformer branches, enabling independent processing of input data and potential extraction of diverse information. A novel attention mechanism, Gated Dynamic Learnable Attention (GDLAttention), is introduced which integrates a gating mechanism and dynamic learning capabilities. The gating mechanism modulates the attention weights, allowing the model to focus on the most relevant parts of the input. The dynamic learning approach adapts the attention strategy during training, potentially leading to improved performance. The attention mechanism uses a bilinear similarity function, providing greater flexibility in capturing complex relationships between query and key vectors. In order to assess the effectiveness of our approach, we tested it against 21 and 18 distinct fault scenarios in TEP, and compared its performance with several established FDD techniques. The outcomes indicate that the method outperforms others in terms of accuracy, false alarm rate, and misclassification rate. This underscores the robustness and efficacy of the approach for FDD in intricate industrial processes.
2101.05634
Renato Stoffalette Joao
Renato Stoffalette Jo\~ao and Pavlos Fafalios and Stefan Dietze
Better Together -- An Ensemble Learner for Combining the Results of Ready-made Entity Linking Systems
SAC '20: Proceedings of the 35th Annual ACM Symposium on Applied Computing
null
10.1145/3341105.3373883
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Entity linking (EL) is the task of automatically identifying entity mentions in text and resolving them to a corresponding entity in a reference knowledge base like Wikipedia. Throughout the past decade, a plethora of EL systems and pipelines have become available, where performance of individual systems varies heavily across corpora, languages or domains. Linking performance varies even between different mentions in the same text corpus, where, for instance, some EL approaches are better able to deal with short surface forms while others may perform better when more context information is available. To this end, we argue that performance may be optimised by exploiting results from distinct EL systems on the same corpus, thereby leveraging their individual strengths on a per-mention basis. In this paper, we introduce a supervised approach which exploits the output of multiple ready-made EL systems by predicting the correct link on a per-mention basis. Experimental results obtained on existing ground truth datasets and exploiting three state-of-the-art EL systems show the effectiveness of our approach and its capacity to significantly outperform the individual EL systems as well as a set of baseline methods.
[ { "created": "Thu, 14 Jan 2021 14:42:57 GMT", "version": "v1" } ]
2021-01-15
[ [ "João", "Renato Stoffalette", "" ], [ "Fafalios", "Pavlos", "" ], [ "Dietze", "Stefan", "" ] ]
Entity linking (EL) is the task of automatically identifying entity mentions in text and resolving them to a corresponding entity in a reference knowledge base like Wikipedia. Throughout the past decade, a plethora of EL systems and pipelines have become available, where performance of individual systems varies heavily across corpora, languages or domains. Linking performance varies even between different mentions in the same text corpus, where, for instance, some EL approaches are better able to deal with short surface forms while others may perform better when more context information is available. To this end, we argue that performance may be optimised by exploiting results from distinct EL systems on the same corpus, thereby leveraging their individual strengths on a per-mention basis. In this paper, we introduce a supervised approach which exploits the output of multiple ready-made EL systems by predicting the correct link on a per-mention basis. Experimental results obtained on existing ground truth datasets and exploiting three state-of-the-art EL systems show the effectiveness of our approach and its capacity to significantly outperform the individual EL systems as well as a set of baseline methods.
1009.4386
David Malone
Minyu Fang, David Malone, Ken R. Duffy, and Douglas J. Leith
Decentralised Learning MACs for Collision-free Access in WLANs
null
Springer Wireless Networks 2013, Volume 19, Issue 1, pp 83-98
10.1007/s11276-012-0452-1
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision-free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
[ { "created": "Wed, 22 Sep 2010 15:24:12 GMT", "version": "v1" }, { "created": "Wed, 2 Mar 2011 16:42:44 GMT", "version": "v2" } ]
2013-09-19
[ [ "Fang", "Minyu", "" ], [ "Malone", "David", "" ], [ "Duffy", "Ken R.", "" ], [ "Leith", "Douglas J.", "" ] ]
By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision-free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations.
2308.04336
Krzysztof Turowski
Alan Frieze and Krzysztof Turowski and Wojciech Szpankowski
On the concentration of the maximum degree in the duplication-divergence models
null
null
null
null
cs.DM
http://creativecommons.org/licenses/by/4.0/
We present a rigorous and precise analysis of the maximum degree and the average degree in a dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows according to a duplication-divergence mechanism, i.e. by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world processes, e.g. biological or social networks. In this paper, we prove that for some $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e. they are within at most a polylogarithmic factor from these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
[ { "created": "Tue, 8 Aug 2023 15:30:07 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2023 17:29:17 GMT", "version": "v2" } ]
2023-12-07
[ [ "Frieze", "Alan", "" ], [ "Turowski", "Krzysztof", "" ], [ "Szpankowski", "Wojciech", "" ] ]
We present a rigorous and precise analysis of the maximum degree and the average degree in a dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows according to a duplication-divergence mechanism, i.e. by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world processes, e.g. biological or social networks. In this paper, we prove that for some $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e. they are within at most a polylogarithmic factor from these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
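The duplication-divergence growth rule described in this abstract can be sketched as a small simulation. This is an illustrative toy, not the authors' analysis: it assumes the common pure-duplication variant in which a uniformly random node is copied and each copied edge is retained independently with probability p (function and parameter names are hypothetical).

```python
import random

def duplication_divergence(t, p, seed=0):
    """Grow a graph on t vertices: start from a single edge, then repeatedly
    duplicate a uniformly random node, keeping each copied edge with prob p."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}            # initial graph: one edge
    while len(adj) < t:
        u = rng.choice(list(adj))     # node to duplicate
        v = len(adj)                  # the new copy
        kept = {w for w in adj[u] if rng.random() < p}
        adj[v] = kept
        for w in kept:                # keep adjacency symmetric
            adj[w].add(v)
    return adj

g = duplication_divergence(200, 0.5)
max_deg = max(len(nbrs) for nbrs in g.values())
avg_deg = sum(len(nbrs) for nbrs in g.values()) / len(g)
```

Repeating this over many seeds and growing t lets one observe empirically the concentration of the maximum degree around t^p that the paper proves.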
1702.00616
Fedor Sandomirskiy
Anna Bogomolnaia, Herve Moulin, Fedor Sandomirskiy, Elena Yanovskaya
Competitive division of a mixed manna
33 pages, 13 figures; this paper subsumes arXiv:1608.01540 and arXiv:1610.03745
null
null
null
cs.GT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of profiles), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility profile (before any manna) is Pareto dominated, the competitive profile is unique and still maximizes the product of utilities. If the zero profile is unfeasible, the competitive profiles are the critical points of the product of disutilities on the efficiency frontier, and multiplicity is pervasive. In particular the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We refine our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly benefits under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Efficient divisions of bads can be disconnected so that it will admit no continuous selection.
[ { "created": "Thu, 2 Feb 2017 11:07:53 GMT", "version": "v1" } ]
2017-02-03
[ [ "Bogomolnaia", "Anna", "" ], [ "Moulin", "Herve", "" ], [ "Sandomirskiy", "Fedor", "" ], [ "Yanovskaya", "Elena", "" ] ]
A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of profiles), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility profile (before any manna) is Pareto dominated, the competitive profile is unique and still maximizes the product of utilities. If the zero profile is unfeasible, the competitive profiles are the critical points of the product of disutilities on the efficiency frontier, and multiplicity is pervasive. In particular the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We refine our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly benefits under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Efficient divisions of bads can be disconnected so that it will admit no continuous selection.
1503.03585
Jascha Sohl-Dickstein
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli
Deep Unsupervised Learning using Nonequilibrium Thermodynamics
null
null
null
null
cs.LG cond-mat.dis-nn q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.
[ { "created": "Thu, 12 Mar 2015 04:51:37 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2015 06:48:02 GMT", "version": "v2" }, { "created": "Wed, 29 Apr 2015 06:00:20 GMT", "version": "v3" }, { "created": "Wed, 13 May 2015 01:57:49 GMT", "version": "v4" }, { "created": "Wed, 20 May 2015 03:19:10 GMT", "version": "v5" }, { "created": "Thu, 9 Jul 2015 16:16:33 GMT", "version": "v6" }, { "created": "Tue, 21 Jul 2015 19:44:20 GMT", "version": "v7" }, { "created": "Wed, 18 Nov 2015 21:50:51 GMT", "version": "v8" } ]
2015-11-20
[ [ "Sohl-Dickstein", "Jascha", "" ], [ "Weiss", "Eric A.", "" ], [ "Maheswaranathan", "Niru", "" ], [ "Ganguli", "Surya", "" ] ]
A central problem in machine learning involves modeling complex data-sets using highly flexible families of probability distributions in which learning, sampling, inference, and evaluation are still analytically or computationally tractable. Here, we develop an approach that simultaneously achieves both flexibility and tractability. The essential idea, inspired by non-equilibrium statistical physics, is to systematically and slowly destroy structure in a data distribution through an iterative forward diffusion process. We then learn a reverse diffusion process that restores structure in data, yielding a highly flexible and tractable generative model of the data. This approach allows us to rapidly learn, sample from, and evaluate probabilities in deep generative models with thousands of layers or time steps, as well as to compute conditional and posterior probabilities under the learned model. We additionally release an open source reference implementation of the algorithm.
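The iterative forward diffusion described in this abstract can be sketched in one dimension: data are gradually corrupted toward a standard Gaussian under a fixed variance schedule. The constant beta schedule below is an illustrative assumption, not the paper's exact choice.

```python
import math
import random

def forward_diffusion(x0, betas, seed=0):
    """Run the forward (noising) chain:
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,  eps ~ N(0, 1)."""
    rng = random.Random(seed)
    x = x0
    for beta in betas:
        x = math.sqrt(1 - beta) * x + math.sqrt(beta) * rng.gauss(0, 1)
    return x

betas = [0.02] * 400                    # illustrative fixed schedule
samples = [forward_diffusion(5.0, betas, seed=s) for s in range(2000)]
mean = sum(samples) / len(samples)      # near 0: the initial value x0=5 is destroyed
```

After enough steps the chain forgets its starting point and the marginal approaches N(0, 1); the paper's generative model is the learned reversal of such a chain.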
1811.06166
Tianchi Huang
Tianchi Huang, Xin Yao, Chenglei Wu, Rui-Xiao Zhang, Zhangyuan Pang, Lifeng Sun
Tiyuntsong: A Self-Play Reinforcement Learning Approach for ABR Video Streaming
Published in ICME 2019
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing reinforcement learning~(RL)-based adaptive bitrate~(ABR) approaches outperform previous fixed-control-rule-based methods by improving the Quality of Experience~(QoE) score. However, the QoE metric can hardly provide clear guidance for optimization, eventually resulting in unexpected strategies. In this paper, we propose \emph{Tiyuntsong}, a self-play reinforcement learning approach with a generative adversarial network~(GAN)-based method for ABR video streaming. Tiyuntsong learns strategies automatically by training two agents who compete against each other. Note that the competition results are determined by a set of rules rather than a numerical QoE score, which allows clearer optimization objectives. Meanwhile, we propose a GAN Enhancement Module to extract hidden features from the past status, preserving this information without the limitations of sequence lengths. Using testbed experiments, we show that the utilization of GAN significantly improves Tiyuntsong's performance. By comparing the performance of ABR algorithms, we observe that Tiyuntsong also outperforms existing ABR algorithms on the underlying metrics.
[ { "created": "Thu, 15 Nov 2018 04:29:49 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2019 14:05:02 GMT", "version": "v2" }, { "created": "Thu, 2 May 2019 14:40:30 GMT", "version": "v3" } ]
2019-05-03
[ [ "Huang", "Tianchi", "" ], [ "Yao", "Xin", "" ], [ "Wu", "Chenglei", "" ], [ "Zhang", "Rui-Xiao", "" ], [ "Pang", "Zhangyuan", "" ], [ "Sun", "Lifeng", "" ] ]
Existing reinforcement learning~(RL)-based adaptive bitrate~(ABR) approaches outperform previous fixed-control-rule-based methods by improving the Quality of Experience~(QoE) score. However, the QoE metric can hardly provide clear guidance for optimization, eventually resulting in unexpected strategies. In this paper, we propose \emph{Tiyuntsong}, a self-play reinforcement learning approach with a generative adversarial network~(GAN)-based method for ABR video streaming. Tiyuntsong learns strategies automatically by training two agents who compete against each other. Note that the competition results are determined by a set of rules rather than a numerical QoE score, which allows clearer optimization objectives. Meanwhile, we propose a GAN Enhancement Module to extract hidden features from the past status, preserving this information without the limitations of sequence lengths. Using testbed experiments, we show that the utilization of GAN significantly improves Tiyuntsong's performance. By comparing the performance of ABR algorithms, we observe that Tiyuntsong also outperforms existing ABR algorithms on the underlying metrics.
2002.06816
Pamela K. Douglas
Pamela K. Douglas, Farzad Vasheghani Farahani
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples
2 figures
Med NeurIPS 2019
null
null
cs.CV cs.LG eess.IV q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications. However, not all adversarial examples are crafted for malicious purposes. For example, real world systems often contain physical, temporal, and sampling variability across instrumentation. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. Here, we studied the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
[ { "created": "Mon, 17 Feb 2020 07:49:20 GMT", "version": "v1" } ]
2020-02-18
[ [ "Douglas", "Pamela K.", "" ], [ "Farahani", "Farzad Vasheghani", "" ] ]
The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications. However, not all adversarial examples are crafted for malicious purposes. For example, real world systems often contain physical, temporal, and sampling variability across instrumentation. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. Here, we studied the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
2403.14639
Andrei Khurshudov
Andrei Khurshudov
On Defining Smart Cities using Transformer Neural Networks
16 pages, 2 figures
International Journal of Computer and Technology Vol 24 (2024) ISSN: 2277-3061
10.24297/ijct.v24i.9579
null
cs.CY cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Cities worldwide are rapidly adopting smart technologies, transforming urban life. Despite this trend, a universally accepted definition of 'smart city' remains elusive. Past efforts to define it have not yielded a consensus, as evidenced by the numerous definitions in use. In this paper, we endeavored to create a new 'compromise' definition that should resonate with most experts previously involved in defining this concept and aimed to validate one of the existing definitions. We reviewed 60 definitions of smart cities from industry, academia, and various relevant organizations, employing transformer architecture-based generative AI and semantic text analysis to reach this compromise. We proposed a semantic similarity measure as an evaluation technique, which could generally be used to compare different smart city definitions, assessing their uniqueness or resemblance. Our methodology employed generative AI to analyze various existing definitions of smart cities, generating a list of potential new composite definitions. Each of these new definitions was then tested against the pre-existing individual definitions we have gathered, using cosine similarity as our metric. This process identified smart city definitions with the highest average cosine similarity, semantically positioning them as the closest on average to all the 60 individual definitions selected.
[ { "created": "Tue, 20 Feb 2024 18:34:24 GMT", "version": "v1" } ]
2024-03-25
[ [ "Khurshudov", "Andrei", "" ] ]
Cities worldwide are rapidly adopting smart technologies, transforming urban life. Despite this trend, a universally accepted definition of 'smart city' remains elusive. Past efforts to define it have not yielded a consensus, as evidenced by the numerous definitions in use. In this paper, we endeavored to create a new 'compromise' definition that should resonate with most experts previously involved in defining this concept and aimed to validate one of the existing definitions. We reviewed 60 definitions of smart cities from industry, academia, and various relevant organizations, employing transformer architecture-based generative AI and semantic text analysis to reach this compromise. We proposed a semantic similarity measure as an evaluation technique, which could generally be used to compare different smart city definitions, assessing their uniqueness or resemblance. Our methodology employed generative AI to analyze various existing definitions of smart cities, generating a list of potential new composite definitions. Each of these new definitions was then tested against the pre-existing individual definitions we have gathered, using cosine similarity as our metric. This process identified smart city definitions with the highest average cosine similarity, semantically positioning them as the closest on average to all the 60 individual definitions selected.
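The selection criterion described in this abstract, choosing the candidate definition with the highest average cosine similarity to the collected ones, can be sketched as follows. This is a simplified illustration using bag-of-words vectors in place of transformer embeddings; the corpus, candidates, and function names are hypothetical.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_definition(candidates, corpus):
    """Return the candidate with the highest mean cosine similarity to the corpus."""
    vecs = [Counter(d.lower().split()) for d in corpus]
    def avg_sim(text):
        v = Counter(text.lower().split())
        return sum(cosine(v, u) for u in vecs) / len(vecs)
    return max(candidates, key=avg_sim)

corpus = ["a city using ict to improve services",
          "a city that applies technology to urban services",
          "an urban area optimised by data and technology"]
candidates = ["a city using data and technology to improve urban services",
              "a large settlement of people"]
winner = best_definition(candidates, corpus)   # the semantically closest candidate
```

With real sentence embeddings the scoring loop is identical; only the vectorisation step changes.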
2311.14726
Frank Heyen
Frank Heyen, Alejandro Gabino Diaz Mendoza, Quynh Quang Ngo, Michael Sedlmair
Visual Guitar Tab Comparison
Late-breaking demo for ISMIR 2023 https://ismir2023program.ismir.net/lbd_357.html
null
null
null
cs.HC cs.GR
http://creativecommons.org/licenses/by/4.0/
We designed a visual interface for comparing different guitar tablature (tab) versions of the same piece. By automatically aligning the bars of these versions and visually encoding different metrics, our interface helps determine similarity, difficulty, and correctness. During our design, we collected and integrated feedback from musicians and finally conducted a qualitative evaluation with five guitarists. Results confirm that our interface effectively supports comparison and helps musicians choose a version appropriate for their personal skills and tastes.
[ { "created": "Mon, 20 Nov 2023 11:09:59 GMT", "version": "v1" } ]
2023-11-28
[ [ "Heyen", "Frank", "" ], [ "Mendoza", "Alejandro Gabino Diaz", "" ], [ "Ngo", "Quynh Quang", "" ], [ "Sedlmair", "Michael", "" ] ]
We designed a visual interface for comparing different guitar tablature (tab) versions of the same piece. By automatically aligning the bars of these versions and visually encoding different metrics, our interface helps determine similarity, difficulty, and correctness. During our design, we collected and integrated feedback from musicians and finally conducted a qualitative evaluation with five guitarists. Results confirm that our interface effectively supports comparison and helps musicians choose a version appropriate for their personal skills and tastes.
2211.13339
Bilal Farooq
Daniel Opoku Mensah and Godwin Badu-Marfo and Bilal Farooq
Robustness Analysis of Deep Learning Models for Population Synthesis
arXiv admin note: text overlap with arXiv:2203.03489, arXiv:1909.07689 by other authors
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models have become useful for synthetic data generation, particularly population synthesis. The models implicitly learn the probability distribution of a dataset and can draw samples from that distribution. Several models have been proposed, but their performance is only tested on a single cross-sectional sample. The implementation of population synthesis on single datasets is seen as a drawback that calls for further studies exploring the robustness of the models on multiple datasets. While comparing with real data can increase the trust and interpretability of the models, techniques to evaluate deep generative models' robustness for population synthesis remain underexplored. In this study, we present a bootstrap confidence interval approach for deep generative models, which computes efficient confidence intervals for mean prediction errors to evaluate the robustness of the models across multiple datasets. Specifically, we adopt the tabular-based Composite Travel Generative Adversarial Network (CTGAN) and the Variational Autoencoder (VAE) to estimate the distribution of the population by generating agents with tabular data, using several samples over time from the same study area. The models are implemented on multiple travel diaries of the Montreal Origin-Destination Survey of 2008, 2013, and 2018, and we compare their predictive performance under varying sample sizes from multiple surveys. Results show that the predictive errors of CTGAN have narrower confidence intervals, indicating its robustness to multiple datasets of varying sample sizes when compared to VAE. Moreover, evaluating model robustness against varying sample size shows only a minimal decrease in model performance as sample size decreases. This study directly supports agent-based modelling by enabling finer synthetic generation of populations in a reliable environment.
[ { "created": "Wed, 23 Nov 2022 22:55:55 GMT", "version": "v1" } ]
2022-11-28
[ [ "Mensah", "Daniel Opoku", "" ], [ "Badu-Marfo", "Godwin", "" ], [ "Farooq", "Bilal", "" ] ]
Deep generative models have become useful for synthetic data generation, particularly population synthesis. The models implicitly learn the probability distribution of a dataset and can draw samples from that distribution. Several models have been proposed, but their performance is only tested on a single cross-sectional sample. The implementation of population synthesis on single datasets is seen as a drawback that calls for further studies exploring the robustness of the models on multiple datasets. While comparing with real data can increase the trust and interpretability of the models, techniques to evaluate deep generative models' robustness for population synthesis remain underexplored. In this study, we present a bootstrap confidence interval approach for deep generative models, which computes efficient confidence intervals for mean prediction errors to evaluate the robustness of the models across multiple datasets. Specifically, we adopt the tabular-based Composite Travel Generative Adversarial Network (CTGAN) and the Variational Autoencoder (VAE) to estimate the distribution of the population by generating agents with tabular data, using several samples over time from the same study area. The models are implemented on multiple travel diaries of the Montreal Origin-Destination Survey of 2008, 2013, and 2018, and we compare their predictive performance under varying sample sizes from multiple surveys. Results show that the predictive errors of CTGAN have narrower confidence intervals, indicating its robustness to multiple datasets of varying sample sizes when compared to VAE. Moreover, evaluating model robustness against varying sample size shows only a minimal decrease in model performance as sample size decreases. This study directly supports agent-based modelling by enabling finer synthetic generation of populations in a reliable environment.
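A bootstrap confidence interval for a mean prediction error, the evaluation device this abstract describes, can be sketched generically as below. This is a standard percentile bootstrap, not the authors' exact procedure; the error values are made up for illustration.

```python
import random
import statistics

def bootstrap_ci(errors, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `errors`."""
    rng = random.Random(seed)
    n = len(errors)
    means = sorted(
        statistics.fmean(rng.choices(errors, k=n))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

errors = [0.12, 0.08, 0.15, 0.10, 0.09, 0.14, 0.11, 0.13, 0.07, 0.16]
lo, hi = bootstrap_ci(errors)   # narrower (hi - lo) indicates a more robust model
```

Comparing interval widths across models, as the paper does for CTGAN versus VAE, then reduces to comparing `hi - lo` computed on each model's per-dataset errors.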
1811.02658
George Kesidis
Yujia Wang, David J. Miller, George Kesidis
When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses detection of a reverse engineering (RE) attack targeting a deep neural network (DNN) image classifier; by querying, RE's aim is to discover the classifier's decision rule. RE can enable test-time evasion attacks, which require knowledge of the classifier. Recently, we proposed a quite effective approach (ADA) to detect test-time evasion attacks. In this paper, we extend ADA to detect RE attacks (ADA-RE). We demonstrate our method is successful in detecting "stealthy" RE attacks before they learn enough to launch effective test-time evasion attacks.
[ { "created": "Wed, 31 Oct 2018 20:59:49 GMT", "version": "v1" } ]
2018-11-08
[ [ "Wang", "Yujia", "" ], [ "Miller", "David J.", "" ], [ "Kesidis", "George", "" ] ]
This paper addresses detection of a reverse engineering (RE) attack targeting a deep neural network (DNN) image classifier; by querying, RE's aim is to discover the classifier's decision rule. RE can enable test-time evasion attacks, which require knowledge of the classifier. Recently, we proposed a quite effective approach (ADA) to detect test-time evasion attacks. In this paper, we extend ADA to detect RE attacks (ADA-RE). We demonstrate our method is successful in detecting "stealthy" RE attacks before they learn enough to launch effective test-time evasion attacks.
2007.00752
Ana Nora Evans
Ana Nora Evans, Bradford Campbell, Mary Lou Soffa
Is Rust Used Safely by Software Developers?
null
null
10.1145/3377811.3380413
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rust, an emerging programming language with explosive growth, provides a robust type system that enables programmers to write memory-safe and data-race free code. To allow access to a machine's hardware and to support low-level performance optimizations, a second language, Unsafe Rust, is embedded in Rust. It contains support for operations that are difficult to statically check, such as C-style pointers for access to arbitrary memory locations and mutable global variables. When a program uses these features, the compiler is unable to statically guarantee the safety properties Rust promotes. In this work, we perform a large-scale empirical study to explore how software developers are using Unsafe Rust in real-world Rust libraries and applications. Our results indicate that software engineers use the keyword unsafe in less than 30% of Rust libraries, but more than half cannot be entirely statically checked by the Rust compiler because of Unsafe Rust hidden somewhere in a library's call chain. We conclude that although the use of the keyword unsafe is limited, the propagation of unsafeness offers a challenge to the claim of Rust as a memory-safe language. Furthermore, we recommend changes to the Rust compiler and to the central Rust repository's interface to help Rust software developers be aware of when their Rust code is unsafe.
[ { "created": "Wed, 1 Jul 2020 21:00:25 GMT", "version": "v1" } ]
2020-07-03
[ [ "Evans", "Ana Nora", "" ], [ "Campbell", "Bradford", "" ], [ "Soffa", "Mary Lou", "" ] ]
Rust, an emerging programming language with explosive growth, provides a robust type system that enables programmers to write memory-safe and data-race free code. To allow access to a machine's hardware and to support low-level performance optimizations, a second language, Unsafe Rust, is embedded in Rust. It contains support for operations that are difficult to statically check, such as C-style pointers for access to arbitrary memory locations and mutable global variables. When a program uses these features, the compiler is unable to statically guarantee the safety properties Rust promotes. In this work, we perform a large-scale empirical study to explore how software developers are using Unsafe Rust in real-world Rust libraries and applications. Our results indicate that software engineers use the keyword unsafe in less than 30% of Rust libraries, but more than half cannot be entirely statically checked by the Rust compiler because of Unsafe Rust hidden somewhere in a library's call chain. We conclude that although the use of the keyword unsafe is limited, the propagation of unsafeness offers a challenge to the claim of Rust as a memory-safe language. Furthermore, we recommend changes to the Rust compiler and to the central Rust repository's interface to help Rust software developers be aware of when their Rust code is unsafe.
2305.19107
Lijian Xu
Ziyu Ni, Linda Wei, Lijian Xu, Simon Yu, Qing Xia, Hongsheng Li and Shaoting Zhang
Voxel2Hemodynamics: An End-to-end Deep Learning Method for Predicting Coronary Artery Hemodynamics
8 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local hemodynamic forces play an important role in determining the functional significance of coronary arterial stenosis and understanding the mechanism of coronary disease progression. Computational fluid dynamics (CFD) simulations have been widely performed to simulate hemodynamics non-invasively from coronary computed tomography angiography (CCTA) images. However, accurate computational analysis is still limited by the complex construction of patient-specific modeling and time-consuming computation. In this work, we proposed an end-to-end deep learning framework, which could predict the coronary artery hemodynamics from CCTA images. The model was trained on the hemodynamic data obtained from 3D simulations of synthetic and real datasets. Extensive experiments demonstrated that the hemodynamic distributions predicted by our method agreed well with the CFD-derived results. Quantitatively, the proposed method has the capability of predicting the fractional flow reserve with an average error of 0.5\% and 2.5\% for the synthetic dataset and real dataset, respectively. Particularly, our method achieved much better accuracy for the real dataset compared to PointNet++ with the point cloud input. This study demonstrates the feasibility and great potential of our end-to-end deep learning method as a fast and accurate approach for hemodynamic analysis.
[ { "created": "Tue, 30 May 2023 15:12:52 GMT", "version": "v1" } ]
2023-05-31
[ [ "Ni", "Ziyu", "" ], [ "Wei", "Linda", "" ], [ "Xu", "Lijian", "" ], [ "Yu", "Simon", "" ], [ "Xia", "Qing", "" ], [ "Li", "Hongsheng", "" ], [ "Zhang", "Shaoting", "" ] ]
Local hemodynamic forces play an important role in determining the functional significance of coronary arterial stenosis and understanding the mechanism of coronary disease progression. Computational fluid dynamics (CFD) simulations have been widely performed to simulate hemodynamics non-invasively from coronary computed tomography angiography (CCTA) images. However, accurate computational analysis is still limited by the complex construction of patient-specific modeling and time-consuming computation. In this work, we proposed an end-to-end deep learning framework, which could predict the coronary artery hemodynamics from CCTA images. The model was trained on the hemodynamic data obtained from 3D simulations of synthetic and real datasets. Extensive experiments demonstrated that the hemodynamic distributions predicted by our method agreed well with the CFD-derived results. Quantitatively, the proposed method has the capability of predicting the fractional flow reserve with an average error of 0.5\% and 2.5\% for the synthetic dataset and real dataset, respectively. Particularly, our method achieved much better accuracy for the real dataset compared to PointNet++ with the point cloud input. This study demonstrates the feasibility and great potential of our end-to-end deep learning method as a fast and accurate approach for hemodynamic analysis.
2402.05407
Xinyi Hu
Xinyi Hu, Nikolaos Pappas, Howard H. Yang
Version age-based client scheduling policy for federated learning
5 pages, 4 figures, ICASSP 2024
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) has emerged as a privacy-preserving machine learning paradigm facilitating collaborative training across multiple clients without sharing local data. Despite advancements in edge device capabilities, communication bottlenecks present challenges in aggregating a large number of clients; only a portion of the clients can update their parameters upon each global aggregation. This phenomenon introduces the critical challenge of stragglers in FL and the profound impact of client scheduling policies on global model convergence and stability. Existing scheduling strategies address staleness but predominantly focus on either timeliness or content. Motivated by this, we introduce the novel concept of Version Age of Information (VAoI) to FL. Unlike traditional Age of Information metrics, VAoI considers both timeliness and content staleness. Each client's version age is updated discretely, indicating the freshness of information. VAoI is incorporated into the client scheduling policy to minimize the average VAoI, mitigating the impact of outdated local updates and enhancing the stability of FL systems.
[ { "created": "Thu, 8 Feb 2024 04:48:51 GMT", "version": "v1" } ]
2024-02-09
[ [ "Hu", "Xinyi", "" ], [ "Pappas", "Nikolaos", "" ], [ "Yang", "Howard H.", "" ] ]
Federated Learning (FL) has emerged as a privacy-preserving machine learning paradigm facilitating collaborative training across multiple clients without sharing local data. Despite advancements in edge device capabilities, communication bottlenecks present challenges in aggregating a large number of clients; only a portion of the clients can update their parameters upon each global aggregation. This phenomenon introduces the critical challenge of stragglers in FL and the profound impact of client scheduling policies on global model convergence and stability. Existing scheduling strategies address staleness but predominantly focus on either timeliness or content. Motivated by this, we introduce the novel concept of Version Age of Information (VAoI) to FL. Unlike traditional Age of Information metrics, VAoI considers both timeliness and content staleness. Each client's version age is updated discretely, indicating the freshness of information. VAoI is incorporated into the client scheduling policy to minimize the average VAoI, mitigating the impact of outdated local updates and enhancing the stability of FL systems.
2407.08990
Yue Zhang
Yue Zhang, Woyu Zhang, Shaocong Wang, Ning Lin, Yifei Yu, Yangu He, Bo Wang, Hao Jiang, Peng Lin, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Qi Liu, Kwang-Ting Cheng and Ming Liu
Dynamic neural network with memristive CIM and CAM for 2D and 3D vision
In press
null
null
null
cs.AR cs.AI cs.ET cs.NE
http://creativecommons.org/licenses/by/4.0/
The brain is dynamic, associative and efficient. It reconfigures by associating the inputs with past experiences, with fused memory and processing. In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing. We propose a hardware-software co-design, a semantic memory-based dynamic neural network (DNN) using memristor. The network associates incoming data with the past experience stored as semantic vectors. The network and the semantic memory are physically implemented on noise-robust ternary memristor-based Computing-In-Memory (CIM) and Content-Addressable Memory (CAM) circuits, respectively. We validate our co-designs, using a 40nm memristor macro, on ResNet and PointNet++ for classifying images and 3D points from the MNIST and ModelNet datasets, which not only achieves accuracy on par with software but also a 48.1% and 15.9% reduction in computational budget. Moreover, it delivers a 77.6% and 93.3% reduction in energy consumption.
[ { "created": "Fri, 12 Jul 2024 04:55:57 GMT", "version": "v1" } ]
2024-07-15
[ [ "Zhang", "Yue", "" ], [ "Zhang", "Woyu", "" ], [ "Wang", "Shaocong", "" ], [ "Lin", "Ning", "" ], [ "Yu", "Yifei", "" ], [ "He", "Yangu", "" ], [ "Wang", "Bo", "" ], [ "Jiang", "Hao", "" ], [ "Lin", "Peng", "" ], [ "Xu", "Xiaoxin", "" ], [ "Qi", "Xiaojuan", "" ], [ "Wang", "Zhongrui", "" ], [ "Zhang", "Xumeng", "" ], [ "Shang", "Dashan", "" ], [ "Liu", "Qi", "" ], [ "Cheng", "Kwang-Ting", "" ], [ "Liu", "Ming", "" ] ]
The brain is dynamic, associative and efficient. It reconfigures by associating the inputs with past experiences, with fused memory and processing. In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing. We propose a hardware-software co-design, a semantic memory-based dynamic neural network (DNN) using memristor. The network associates incoming data with the past experience stored as semantic vectors. The network and the semantic memory are physically implemented on noise-robust ternary memristor-based Computing-In-Memory (CIM) and Content-Addressable Memory (CAM) circuits, respectively. We validate our co-designs, using a 40nm memristor macro, on ResNet and PointNet++ for classifying images and 3D points from the MNIST and ModelNet datasets, which not only achieves accuracy on par with software but also a 48.1% and 15.9% reduction in computational budget. Moreover, it delivers a 77.6% and 93.3% reduction in energy consumption.
2312.07250
Lifeng Han Dr
Lifeng Han, Serge Gladkoff, Gleb Erofeev, Irina Sorokina, Betty Galiano, Goran Nenadic
Neural Machine Translation of Clinical Text: An Empirical Investigation into Multilingual Pre-Trained Language Models and Transfer-Learning
Accepted by Frontiers in Digital Health - Health Informatics
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
We conduct investigations on clinical text machine translation by examining multilingual neural network models using deep learning such as Transformer based structures. Furthermore, to address the language resource imbalance issue, we also carry out experiments using a transfer learning methodology based on massive multilingual pre-trained language models (MMPLMs). The experimental results on three subtasks including 1) clinical case (CC), 2) clinical terminology (CT), and 3) ontological concept (OC) show that our models achieved top-level performances in the ClinSpEn-2022 shared task on English-Spanish clinical domain data. Furthermore, our expert-based human evaluations demonstrate that the small-sized pre-trained language model (PLM) won over the other two extra-large language models by a large margin, in the clinical domain fine-tuning, which finding was never reported in the field. Finally, the transfer learning method works well in our experimental setting using the WMT21fb model to accommodate a new language space Spanish that was not seen at the pre-training stage within WMT21fb itself, which deserves more exploitation for clinical knowledge transformation, e.g. to investigate into more languages. These research findings can shed some light on domain-specific machine translation development, especially in clinical and healthcare fields. Further research projects can be carried out based on our work to improve healthcare text analytics and knowledge transformation. Our data will be openly available for research purposes at https://github.com/HECTA-UoM/ClinicalNMT
[ { "created": "Tue, 12 Dec 2023 13:26:42 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2024 12:08:47 GMT", "version": "v2" } ]
2024-02-22
[ [ "Han", "Lifeng", "" ], [ "Gladkoff", "Serge", "" ], [ "Erofeev", "Gleb", "" ], [ "Sorokina", "Irina", "" ], [ "Galiano", "Betty", "" ], [ "Nenadic", "Goran", "" ] ]
We conduct investigations on clinical text machine translation by examining multilingual neural network models using deep learning such as Transformer based structures. Furthermore, to address the language resource imbalance issue, we also carry out experiments using a transfer learning methodology based on massive multilingual pre-trained language models (MMPLMs). The experimental results on three subtasks including 1) clinical case (CC), 2) clinical terminology (CT), and 3) ontological concept (OC) show that our models achieved top-level performances in the ClinSpEn-2022 shared task on English-Spanish clinical domain data. Furthermore, our expert-based human evaluations demonstrate that the small-sized pre-trained language model (PLM) outperformed the other two extra-large language models by a large margin in clinical-domain fine-tuning, a finding not previously reported in the field. Finally, the transfer learning method works well in our experimental setting using the WMT21fb model to accommodate a new language space, Spanish, that was not seen at the pre-training stage within WMT21fb itself, which deserves more exploitation for clinical knowledge transformation, e.g. to investigate more languages. These research findings can shed some light on domain-specific machine translation development, especially in clinical and healthcare fields. Further research projects can be carried out based on our work to improve healthcare text analytics and knowledge transformation. Our data will be openly available for research purposes at https://github.com/HECTA-UoM/ClinicalNMT
1511.00310
Daniel Lokshtanov
Daniel Lokshtanov
Parameterized Integer Quadratic Programming: Variables and Coefficients
Added algorithm for unbounded IQPs
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Integer Quadratic Programming problem input is an n*n integer matrix Q, an m*n integer matrix A and an m-dimensional integer vector b. The task is to find a vector x in Z^n, minimizing x^TQx, subject to Ax <= b. We give a fixed parameter tractable algorithm for Integer Quadratic Programming parameterized by n+a. Here a is the largest absolute value of an entry of Q and A. As an application of our main result we show that Optimal Linear Arrangement is fixed parameter tractable parameterized by the size of the smallest vertex cover of the input graph. This resolves an open problem from the recent monograph by Downey and Fellows.
[ { "created": "Sun, 1 Nov 2015 21:47:44 GMT", "version": "v1" }, { "created": "Mon, 10 Apr 2017 11:39:57 GMT", "version": "v2" } ]
2017-04-11
[ [ "Lokshtanov", "Daniel", "" ] ]
In the Integer Quadratic Programming problem, the input is an n*n integer matrix Q, an m*n integer matrix A and an m-dimensional integer vector b. The task is to find a vector x in Z^n, minimizing x^TQx, subject to Ax <= b. We give a fixed parameter tractable algorithm for Integer Quadratic Programming parameterized by n+a. Here a is the largest absolute value of an entry of Q and A. As an application of our main result we show that Optimal Linear Arrangement is fixed parameter tractable parameterized by the size of the smallest vertex cover of the input graph. This resolves an open problem from the recent monograph by Downey and Fellows.
2209.01541
Shihan Lin
Shihan Lin, Rui Xin, Aayush Goel, Xiaowei Yang
InviCloak: An End-to-End Approach to Privacy and Performance in Web Content Distribution
null
The ACM Conference on Computer and Communications Security 2022
10.1145/3548606.3559336
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In today's web ecosystem, a website that uses a Content Delivery Network (CDN) shares its Transport Layer Security (TLS) private key or session key with the CDN. In this paper, we present the design and implementation of InviCloak, a system that protects the confidentiality and integrity of a user and a website's private communications without changing TLS or upgrading a CDN. InviCloak builds a lightweight but secure and practical key distribution mechanism using the existing DNS infrastructure to distribute a new public key associated with a website's domain name. A web client and a website can use the new key pair to build an encryption channel inside TLS. InviCloak accommodates the current web ecosystem. A website can deploy InviCloak unilaterally without a client's involvement to prevent a passive attacker inside a CDN from eavesdropping on their communications. If a client also installs InviCloak's browser extension, the client and the website can achieve end-to-end confidential and untampered communications in the presence of an active attacker inside a CDN. Our evaluation shows that InviCloak increases the median page load times (PLTs) of realistic web pages from 2.0s to 2.1s, which is smaller than the median PLTs (2.8s) of a state-of-the-art TEE-based solution.
[ { "created": "Sun, 4 Sep 2022 06:38:27 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2022 19:30:21 GMT", "version": "v2" }, { "created": "Sun, 18 Sep 2022 11:17:35 GMT", "version": "v3" } ]
2023-01-27
[ [ "Lin", "Shihan", "" ], [ "Xin", "Rui", "" ], [ "Goel", "Aayush", "" ], [ "Yang", "Xiaowei", "" ] ]
In today's web ecosystem, a website that uses a Content Delivery Network (CDN) shares its Transport Layer Security (TLS) private key or session key with the CDN. In this paper, we present the design and implementation of InviCloak, a system that protects the confidentiality and integrity of a user and a website's private communications without changing TLS or upgrading a CDN. InviCloak builds a lightweight but secure and practical key distribution mechanism using the existing DNS infrastructure to distribute a new public key associated with a website's domain name. A web client and a website can use the new key pair to build an encryption channel inside TLS. InviCloak accommodates the current web ecosystem. A website can deploy InviCloak unilaterally without a client's involvement to prevent a passive attacker inside a CDN from eavesdropping on their communications. If a client also installs InviCloak's browser extension, the client and the website can achieve end-to-end confidential and untampered communications in the presence of an active attacker inside a CDN. Our evaluation shows that InviCloak increases the median page load times (PLTs) of realistic web pages from 2.0s to 2.1s, which is smaller than the median PLTs (2.8s) of a state-of-the-art TEE-based solution.
2205.08706
Xun Xu
Xun Xu, Manh Cuong Nguyen, Yasin Yazici, Kangkang Lu, Hlaing Min, Chuan-Sheng Foo
SemiCurv: Semi-Supervised Curvilinear Structure Segmentation
IEEE Transactions on Image Processing
null
10.1109/TIP.2022.3189823
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent work on curvilinear structure segmentation has mostly focused on backbone network design and loss engineering. The challenge of collecting labelled data, an expensive and labor intensive process, has been overlooked. While labelled data is expensive to obtain, unlabelled data is often readily available. In this work, we propose SemiCurv, a semi-supervised learning (SSL) framework for curvilinear structure segmentation that is able to utilize such unlabelled data to reduce the labelling burden. Our framework addresses two key challenges in formulating curvilinear segmentation in a semi-supervised manner. First, to fully exploit the power of consistency based SSL, we introduce a geometric transformation as strong data augmentation and then align segmentation predictions via a differentiable inverse transformation to enable the computation of pixel-wise consistency. Second, the traditional mean square error (MSE) on unlabelled data is prone to collapsed predictions and this issue exacerbates with severe class imbalance (significantly more background pixels). We propose a N-pair consistency loss to avoid trivial predictions on unlabelled data. We evaluate SemiCurv on six curvilinear segmentation datasets, and find that with no more than 5% of the labelled data, it achieves close to 95% of the performance relative to its fully supervised counterpart.
[ { "created": "Wed, 18 May 2022 03:52:17 GMT", "version": "v1" }, { "created": "Thu, 19 May 2022 05:48:42 GMT", "version": "v2" } ]
2022-09-07
[ [ "Xu", "Xun", "" ], [ "Nguyen", "Manh Cuong", "" ], [ "Yazici", "Yasin", "" ], [ "Lu", "Kangkang", "" ], [ "Min", "Hlaing", "" ], [ "Foo", "Chuan-Sheng", "" ] ]
Recent work on curvilinear structure segmentation has mostly focused on backbone network design and loss engineering. The challenge of collecting labelled data, an expensive and labor intensive process, has been overlooked. While labelled data is expensive to obtain, unlabelled data is often readily available. In this work, we propose SemiCurv, a semi-supervised learning (SSL) framework for curvilinear structure segmentation that is able to utilize such unlabelled data to reduce the labelling burden. Our framework addresses two key challenges in formulating curvilinear segmentation in a semi-supervised manner. First, to fully exploit the power of consistency based SSL, we introduce a geometric transformation as strong data augmentation and then align segmentation predictions via a differentiable inverse transformation to enable the computation of pixel-wise consistency. Second, the traditional mean square error (MSE) on unlabelled data is prone to collapsed predictions and this issue exacerbates with severe class imbalance (significantly more background pixels). We propose a N-pair consistency loss to avoid trivial predictions on unlabelled data. We evaluate SemiCurv on six curvilinear segmentation datasets, and find that with no more than 5% of the labelled data, it achieves close to 95% of the performance relative to its fully supervised counterpart.
1706.03549
Fran\c{c}ois Ollivier
John Masse and Clara Masse and Fran\c{c}ois Ollivier
Automatic differentiation of hybrid models Illustrated by Diffedge Graphic Methodology. (Survey)
47 p. Source files from computer experiments available
null
null
null
cs.SY cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the automatic differentiation of hybrid models, viz. models that may contain delays, logical tests and discontinuities or loops. We consider differentiation with respect to parameters, initial conditions or the time. We emphasize the case of a small number of derivations and iterated differentiations are mostly treated with a foccus on high order iterations of the same derivation. The models we consider may involve arithmetic operations, elementary functions, logical tests but also more elaborate components such as delays, integrators, equations and differential equations solvers. This survey has no pretention to exhaustivity but tries to fil a gap in the litterature where each kind of of component may be documented, but seldom their common use. The general approach is illustrated by computer algebra experiments, stressing the interest of performing differentiation, whenever possible, on high level objects, before any translation in Fortran or C code. We include ordinary differential systems with discontinuity, with a special interest for those comming from discontinuous Lagrangians. We conclude with an overview of the graphic methodology developped in the Diffedge software for Simulink hybrid models. Not all possibilities are covered, but the methodology can be adapted. The result of automatic differentiation is a new block diagram and so it can be easily translated to produce real time embedded programs. We welcome any comments or suggestions of references that we may have missed.
[ { "created": "Mon, 12 Jun 2017 10:20:59 GMT", "version": "v1" } ]
2017-06-13
[ [ "Masse", "John", "" ], [ "Masse", "Clara", "" ], [ "Ollivier", "François", "" ] ]
We investigate the automatic differentiation of hybrid models, viz. models that may contain delays, logical tests and discontinuities or loops. We consider differentiation with respect to parameters, initial conditions or the time. We emphasize the case of a small number of derivations, and iterated differentiations are mostly treated with a focus on high-order iterations of the same derivation. The models we consider may involve arithmetic operations, elementary functions, logical tests, but also more elaborate components such as delays, integrators, equations and differential equation solvers. This survey has no pretension to exhaustiveness but tries to fill a gap in the literature, where each kind of component may be documented, but seldom their combined use. The general approach is illustrated by computer algebra experiments, stressing the interest of performing differentiation, whenever possible, on high-level objects, before any translation into Fortran or C code. We include ordinary differential systems with discontinuities, with special interest in those coming from discontinuous Lagrangians. We conclude with an overview of the graphic methodology developed in the Diffedge software for Simulink hybrid models. Not all possibilities are covered, but the methodology can be adapted. The result of automatic differentiation is a new block diagram, so it can easily be translated to produce real-time embedded programs. We welcome any comments or suggestions of references that we may have missed.
2205.15204
Yanhong Annie Liu
Yanhong A. Liu, Scott D. Stoller, Yi Tong, Bo Lin, K. Tuncay Tekle
Programming with rules and everything else, seamlessly
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logic rules are powerful for expressing complex reasoning and analysis problems. At the same time, they are inconvenient or impossible to use for many other aspects of applications. Integrating rules in a language with sets and functions, and furthermore with updates to objects, has been a subject of significant study. What's lacking is a language that integrates all constructs seamlessly. This paper presents a language, Alda, that supports all of rules, sets, functions, updates, and objects as seamlessly integrated built-ins, including concurrent and distributed processes. The key idea is to support predicates as set-valued variables that can be used and updated in any scope, and support queries and inference with both explicit and automatic calls to an inference function. We develop a complete formal semantics for Alda. We design a compilation framework that ensures the declarative semantics of rules, while also being able to exploit available optimizations. We describe a prototype implementation that builds on a powerful extension of Python and employs an efficient logic rule engine. We develop a range of benchmarks and present results of experiments to demonstrate Alda's power for programming and generally good performance.
[ { "created": "Mon, 30 May 2022 15:59:03 GMT", "version": "v1" } ]
2022-05-31
[ [ "Liu", "Yanhong A.", "" ], [ "Stoller", "Scott D.", "" ], [ "Tong", "Yi", "" ], [ "Lin", "Bo", "" ], [ "Tekle", "K. Tuncay", "" ] ]
Logic rules are powerful for expressing complex reasoning and analysis problems. At the same time, they are inconvenient or impossible to use for many other aspects of applications. Integrating rules in a language with sets and functions, and furthermore with updates to objects, has been a subject of significant study. What's lacking is a language that integrates all constructs seamlessly. This paper presents a language, Alda, that supports all of rules, sets, functions, updates, and objects as seamlessly integrated built-ins, including concurrent and distributed processes. The key idea is to support predicates as set-valued variables that can be used and updated in any scope, and support queries and inference with both explicit and automatic calls to an inference function. We develop a complete formal semantics for Alda. We design a compilation framework that ensures the declarative semantics of rules, while also being able to exploit available optimizations. We describe a prototype implementation that builds on a powerful extension of Python and employs an efficient logic rule engine. We develop a range of benchmarks and present results of experiments to demonstrate Alda's power for programming and generally good performance.
2001.06680
Guanbin Li
Jie Wu, Guanbin Li, Si Liu, Liang Lin
Tree-Structured Policy based Progressive Reinforcement Learning for Temporally Language Grounding in Video
To appear in AAAI2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporally language grounding in untrimmed videos is a newly-raised task in video understanding. Most of the existing methods suffer from inferior efficiency, lacking interpretability, and deviating from the human perception mechanism. Inspired by human's coarse-to-fine decision-making paradigm, we formulate a novel Tree-Structured Policy based Progressive Reinforcement Learning (TSP-PRL) framework to sequentially regulate the temporal boundary by an iterative refinement process. The semantic concepts are explicitly represented as the branches in the policy, which contributes to efficiently decomposing complex policies into an interpretable primitive action. Progressive reinforcement learning provides correct credit assignment via two task-oriented rewards that encourage mutual promotion within the tree-structured policy. We extensively evaluate TSP-PRL on the Charades-STA and ActivityNet datasets, and experimental results show that TSP-PRL achieves competitive performance over existing state-of-the-art methods.
[ { "created": "Sat, 18 Jan 2020 15:08:04 GMT", "version": "v1" } ]
2020-01-22
[ [ "Wu", "Jie", "" ], [ "Li", "Guanbin", "" ], [ "Liu", "Si", "" ], [ "Lin", "Liang", "" ] ]
Temporal language grounding in untrimmed videos is a newly raised task in video understanding. Most of the existing methods suffer from inferior efficiency, lack interpretability, and deviate from the human perception mechanism. Inspired by human's coarse-to-fine decision-making paradigm, we formulate a novel Tree-Structured Policy based Progressive Reinforcement Learning (TSP-PRL) framework to sequentially regulate the temporal boundary by an iterative refinement process. The semantic concepts are explicitly represented as the branches in the policy, which contributes to efficiently decomposing complex policies into an interpretable primitive action. Progressive reinforcement learning provides correct credit assignment via two task-oriented rewards that encourage mutual promotion within the tree-structured policy. We extensively evaluate TSP-PRL on the Charades-STA and ActivityNet datasets, and experimental results show that TSP-PRL achieves competitive performance over existing state-of-the-art methods.
2107.12778
Majid Forghani-Elahabad
Majid Forghani-elahabad
Assessing the performance of smart grid communication networks under both time and budget constraints
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The smart grid concept has emerged to address the existing problems in the traditional electric grid, which has been functioning for more than a hundred years. The most crucial difference between traditional grids and smart grids is the communication infrastructure applied to the latter. However, coupling between these networks can increase the risk of significant failures. Hence, assessing the performance of the smart grid communication networks is of great importance and thus is considered here. As transmission time and cost play essential roles in many real-world communication networks, both time and budget constraints are considered in this work. To evaluate the performance of communication networks, we assume that the data is transmitted from a source to a destination through a single path. We propose an algorithm that computes the exact probability of transmitting d units of data from the source to the destination within T units of time and the budget of b. The algorithm is illustrated through a benchmark network example. The complexity results are also provided. A rather large-size benchmark, that is, Pan European topology, along with one thousand randomly generated test problems are used to generate the experimental results which show clearly the superiority of our proposed algorithms to some existing algorithm in the literature.
[ { "created": "Tue, 27 Jul 2021 12:43:43 GMT", "version": "v1" } ]
2021-07-28
[ [ "Forghani-elahabad", "Majid", "" ] ]
The smart grid concept has emerged to address the existing problems in the traditional electric grid, which has been functioning for more than a hundred years. The most crucial difference between traditional grids and smart grids is the communication infrastructure applied to the latter. However, coupling between these networks can increase the risk of significant failures. Hence, assessing the performance of the smart grid communication networks is of great importance and thus is considered here. As transmission time and cost play essential roles in many real-world communication networks, both time and budget constraints are considered in this work. To evaluate the performance of communication networks, we assume that the data is transmitted from a source to a destination through a single path. We propose an algorithm that computes the exact probability of transmitting d units of data from the source to the destination within T units of time and the budget of b. The algorithm is illustrated through a benchmark network example. The complexity results are also provided. A rather large-size benchmark, that is, the Pan European topology, along with one thousand randomly generated test problems, is used to generate the experimental results, which clearly show the superiority of our proposed algorithm over an existing algorithm in the literature.
2309.09635
Colin M. Gray
Colin M. Gray and Thomas Mildner and Nataliia Bielova
Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow"
null
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dark patterns are ubiquitous in digital systems, impacting users throughout their journeys on many popular apps and websites. While substantial efforts from the research community in the last five years have led to consolidated taxonomies of dark patterns, including an emerging ontology, most applications of these descriptors have been focused on analysis of static images or as isolated pattern types. In this paper, we present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey, grounded in insights from a US Federal Trade Commission complaint against the company. We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP), including considerations for characterization of individual dark patterns across a user journey, combinatorial effects of multiple dark patterns types, and implications for expert detection and automated detection.
[ { "created": "Mon, 18 Sep 2023 10:12:52 GMT", "version": "v1" } ]
2023-09-19
[ [ "Gray", "Colin M.", "" ], [ "Mildner", "Thomas", "" ], [ "Bielova", "Nataliia", "" ] ]
Dark patterns are ubiquitous in digital systems, impacting users throughout their journeys on many popular apps and websites. While substantial efforts from the research community in the last five years have led to consolidated taxonomies of dark patterns, including an emerging ontology, most applications of these descriptors have been focused on analysis of static images or as isolated pattern types. In this paper, we present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey, grounded in insights from a US Federal Trade Commission complaint against the company. We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP), including considerations for characterization of individual dark patterns across a user journey, combinatorial effects of multiple dark patterns types, and implications for expert detection and automated detection.
1901.09892
Xiaolei Liu
Xiaolei Liu, Yuheng Luo, Xiaosong Zhang, Qingxin Zhu
A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
null
null
10.1007/978-3-030-55304-3_14
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks play an increasingly important role in the field of machine learning and are included in many applications in society. Unfortunately, neural networks suffer from adversarial samples generated to attack them. However, most of the generation approaches either assume that the attacker has full knowledge of the neural network model or are limited by the type of attacked model. In this paper, we propose a new approach that generates a black-box attack to neural networks based on the swarm evolutionary algorithm. Benefiting from the improvements in the technology and theoretical characteristics of evolutionary algorithms, our approach has the advantages of effectiveness, black-box attack, generality, and randomness. Our experimental results show that both the MNIST images and the CIFAR-10 images can be perturbed to successful generate a black-box attack with 100\% probability on average. In addition, the proposed attack, which is successful on distilled neural networks with almost 100\% probability, is resistant to defensive distillation. The experimental results also indicate that the robustness of the artificial intelligence algorithm is related to the complexity of the model and the data set. In addition, we find that the adversarial samples to some extent reproduce the characteristics of the sample data learned by the neural network model.
[ { "created": "Sat, 26 Jan 2019 10:25:57 GMT", "version": "v1" } ]
2024-03-13
[ [ "Liu", "Xiaolei", "" ], [ "Luo", "Yuheng", "" ], [ "Zhang", "Xiaosong", "" ], [ "Zhu", "Qingxin", "" ] ]
Neural networks play an increasingly important role in the field of machine learning and are included in many applications in society. Unfortunately, neural networks suffer from adversarial samples generated to attack them. However, most of the generation approaches either assume that the attacker has full knowledge of the neural network model or are limited by the type of attacked model. In this paper, we propose a new approach that generates a black-box attack against neural networks based on the swarm evolutionary algorithm. Benefiting from the improvements in the technology and theoretical characteristics of evolutionary algorithms, our approach has the advantages of effectiveness, black-box attack, generality, and randomness. Our experimental results show that both the MNIST images and the CIFAR-10 images can be perturbed to successfully generate a black-box attack with 100\% probability on average. In addition, the proposed attack, which is successful on distilled neural networks with almost 100\% probability, is resistant to defensive distillation. The experimental results also indicate that the robustness of the artificial intelligence algorithm is related to the complexity of the model and the data set. In addition, we find that the adversarial samples to some extent reproduce the characteristics of the sample data learned by the neural network model.
2208.00790
Zhouyingcheng Liao
Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou
Skeleton-free Pose Transfer for Stylized 3D Characters
Accepted at ECCV 2022. Project website https://zycliao.github.io/sfpt
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging. In contrast to previous attempts to learn pose transformations on fixed or topology-equivalent skeleton templates, our method focuses on a novel scenario to handle skeleton-free characters with diverse shapes, topologies, and mesh connectivities. The key idea of our method is to represent the characters in a unified articulation model so that the pose can be transferred through the correspondent parts. To achieve this, we propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose. Our method is trained in a semi-supervised manner absorbing all existing character data with paired/unpaired poses and stylized shapes. It generalizes well to unseen stylized characters and inanimate objects. We conduct extensive experiments and demonstrate the effectiveness of our method on this novel task.
[ { "created": "Thu, 28 Jul 2022 20:05:57 GMT", "version": "v1" } ]
2022-08-02
[ [ "Liao", "Zhouyingcheng", "" ], [ "Yang", "Jimei", "" ], [ "Saito", "Jun", "" ], [ "Pons-Moll", "Gerard", "" ], [ "Zhou", "Yang", "" ] ]
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging. In contrast to previous attempts to learn pose transformations on fixed or topology-equivalent skeleton templates, our method focuses on a novel scenario to handle skeleton-free characters with diverse shapes, topologies, and mesh connectivities. The key idea of our method is to represent the characters in a unified articulation model so that the pose can be transferred through the correspondent parts. To achieve this, we propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose. Our method is trained in a semi-supervised manner absorbing all existing character data with paired/unpaired poses and stylized shapes. It generalizes well to unseen stylized characters and inanimate objects. We conduct extensive experiments and demonstrate the effectiveness of our method on this novel task.
1110.0271
Miguel Angel Martin-Delgado
Miguel-Angel Martin-Delgado
Alan Turing and the Origins of Complexity
Invited contribution to 'ARBOR: scientific journal of CSIC' special edition devoted to commemorate the Year of Alan Turing. This special issue is entitled "The Legacy of Alan Turing". Coordinators: Manuel de Leon, Alberto Ibort and David Martin de Diego
ARBOR Vol 189, No 764 (2013), a083
10.3989/arbor.2013.i764.
null
cs.CC cond-mat.stat-mech quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 75th anniversary of Turing's seminal paper and his centennial year anniversary occur in 2011 and 2012, respectively. It is natural to review and assess Turing's contributions in diverse fields in the light of new developments that his thoughts have triggered in many scientific communities. Here, the main idea is to discuss how the work of Turing allows us to change our views on the foundations of Mathematics, much like quantum mechanics changed our conception of the world of Physics. Basic notions like computability and universality are discussed in a broad context, with special emphasis on how the notion of complexity can be given a precise meaning after Turing, i.e., not just qualitative but also quantitative. Turing's work is given some historical perspective with respect to some of his precursors, contemporaries and mathematicians who took his ideas further.
[ { "created": "Mon, 3 Oct 2011 06:21:55 GMT", "version": "v1" } ]
2014-02-10
[ [ "Martin-Delgado", "Miguel-Angel", "" ] ]
The 75th anniversary of Turing's seminal paper and his centennial year anniversary occur in 2011 and 2012, respectively. It is natural to review and assess Turing's contributions in diverse fields in the light of new developments that his thoughts have triggered in many scientific communities. Here, the main idea is to discuss how the work of Turing allows us to change our views on the foundations of Mathematics, much like quantum mechanics changed our conception of the world of Physics. Basic notions like computability and universality are discussed in a broad context, with special emphasis on how the notion of complexity can be given a precise meaning after Turing, i.e., not just qualitative but also quantitative. Turing's work is given some historical perspective with respect to some of his precursors, contemporaries and mathematicians who took his ideas further.
2407.05996
Moritz Reuss
Moritz Reuss, \"Omer Erdin\c{c} Ya\u{g}murlu, Fabian Wenzel, Rudolf Lioutikov
Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals
RSS 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This work introduces the Multimodal Diffusion Transformer (MDT), a novel diffusion policy framework that excels at learning versatile behavior from multimodal goal specifications with few language annotations. MDT leverages a diffusion-based multimodal transformer backbone and two self-supervised auxiliary objectives to master long-horizon manipulation tasks based on multimodal goals. The vast majority of imitation learning methods only learn from individual goal modalities, e.g. either language or goal images. However, existing large-scale imitation learning datasets are only partially labeled with language annotations, which prohibits current methods from learning language-conditioned behavior from these datasets. MDT addresses this challenge by introducing a latent goal-conditioned state representation that is simultaneously trained on multimodal goal instructions. This state representation aligns image- and language-based goal embeddings and encodes sufficient information to predict future states. The representation is trained via two self-supervised auxiliary objectives, enhancing the performance of the presented transformer backbone. MDT shows exceptional performance on 164 tasks provided by the challenging CALVIN and LIBERO benchmarks, including a LIBERO version that contains less than $2\%$ language annotations. Furthermore, MDT establishes a new record on the CALVIN manipulation challenge, demonstrating an absolute performance improvement of $15\%$ over prior state-of-the-art methods that require large-scale pretraining and contain $10\times$ more learnable parameters. MDT shows its ability to solve long-horizon manipulation from sparsely annotated data in both simulated and real-world environments. Demonstrations and Code are available at https://intuitive-robots.github.io/mdt_policy/.
[ { "created": "Mon, 8 Jul 2024 14:46:44 GMT", "version": "v1" } ]
2024-07-09
[ [ "Reuss", "Moritz", "" ], [ "Yağmurlu", "Ömer Erdinç", "" ], [ "Wenzel", "Fabian", "" ], [ "Lioutikov", "Rudolf", "" ] ]
This work introduces the Multimodal Diffusion Transformer (MDT), a novel diffusion policy framework that excels at learning versatile behavior from multimodal goal specifications with few language annotations. MDT leverages a diffusion-based multimodal transformer backbone and two self-supervised auxiliary objectives to master long-horizon manipulation tasks based on multimodal goals. The vast majority of imitation learning methods only learn from individual goal modalities, e.g. either language or goal images. However, existing large-scale imitation learning datasets are only partially labeled with language annotations, which prohibits current methods from learning language-conditioned behavior from these datasets. MDT addresses this challenge by introducing a latent goal-conditioned state representation that is simultaneously trained on multimodal goal instructions. This state representation aligns image- and language-based goal embeddings and encodes sufficient information to predict future states. The representation is trained via two self-supervised auxiliary objectives, enhancing the performance of the presented transformer backbone. MDT shows exceptional performance on 164 tasks provided by the challenging CALVIN and LIBERO benchmarks, including a LIBERO version that contains less than $2\%$ language annotations. Furthermore, MDT establishes a new record on the CALVIN manipulation challenge, demonstrating an absolute performance improvement of $15\%$ over prior state-of-the-art methods that require large-scale pretraining and contain $10\times$ more learnable parameters. MDT shows its ability to solve long-horizon manipulation from sparsely annotated data in both simulated and real-world environments. Demonstrations and Code are available at https://intuitive-robots.github.io/mdt_policy/.
2211.06001
Yong Hong
Yong Hong, Deren Li, Shupei Luo, Xin Chen, Yi Yang, Mi Wang
An Improved End-to-End Multi-Target Tracking Method Based on Transformer Self-Attention
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This study proposes an improved end-to-end multi-target tracking algorithm that adapts to multi-view multi-scale scenes based on the self-attentive mechanism of the transformer's encoder-decoder structure. A multi-dimensional feature extraction backbone network is combined with a self-built semantic raster map, which is stored in the encoder for correlation and generates target position encoding and multi-dimensional feature vectors. The decoder incorporates four methods: spatial clustering and semantic filtering of multi-view targets, dynamic matching of multi-dimensional features, space-time logic-based multi-target tracking, and space-time convergence network (STCN)-based parameter passing. Through the fusion of multiple decoding methods, multi-camera targets are tracked in three dimensions: temporal logic, spatial logic, and feature matching. For the MOT17 dataset, this study's method significantly outperforms the current state-of-the-art method MiniTrackV2 [49] by 2.2% to 0.836 on the Multiple Object Tracking Accuracy (MOTA) metric. Furthermore, this study proposes a retrospective mechanism for the first time, and adopts a reverse-order processing method to optimise the historical mislabeled targets for improving the Identification F1-score (IDF1). For the self-built dataset OVIT-MOT01, the IDF1 improves from 0.948 to 0.967, and the Multi-camera Tracking Accuracy (MCTA) improves from 0.878 to 0.909, which significantly improves the continuous tracking accuracy and scene adaptation. This research method introduces a new attentional tracking paradigm which is able to achieve state-of-the-art performance on multi-target tracking (MOT17 and OVIT-MOT01) tasks.
[ { "created": "Fri, 11 Nov 2022 04:58:46 GMT", "version": "v1" } ]
2022-11-14
[ [ "Hong", "Yong", "" ], [ "Li", "Deren", "" ], [ "Luo", "Shupei", "" ], [ "Chen", "Xin", "" ], [ "Yang", "Yi", "" ], [ "Wang", "Mi", "" ] ]
This study proposes an improved end-to-end multi-target tracking algorithm that adapts to multi-view multi-scale scenes based on the self-attentive mechanism of the transformer's encoder-decoder structure. A multi-dimensional feature extraction backbone network is combined with a self-built semantic raster map, which is stored in the encoder for correlation and generates target position encoding and multi-dimensional feature vectors. The decoder incorporates four methods: spatial clustering and semantic filtering of multi-view targets, dynamic matching of multi-dimensional features, space-time logic-based multi-target tracking, and space-time convergence network (STCN)-based parameter passing. Through the fusion of multiple decoding methods, multi-camera targets are tracked in three dimensions: temporal logic, spatial logic, and feature matching. For the MOT17 dataset, this study's method significantly outperforms the current state-of-the-art method MiniTrackV2 [49] by 2.2% to 0.836 on the Multiple Object Tracking Accuracy (MOTA) metric. Furthermore, this study proposes a retrospective mechanism for the first time, and adopts a reverse-order processing method to optimise the historical mislabeled targets for improving the Identification F1-score (IDF1). For the self-built dataset OVIT-MOT01, the IDF1 improves from 0.948 to 0.967, and the Multi-camera Tracking Accuracy (MCTA) improves from 0.878 to 0.909, which significantly improves the continuous tracking accuracy and scene adaptation. This research method introduces a new attentional tracking paradigm which is able to achieve state-of-the-art performance on multi-target tracking (MOT17 and OVIT-MOT01) tasks.
2207.10245
Oskar van der Wal MSc
Oskar van der Wal, Jaap Jumelet, Katrin Schulz, Willem Zuidema
The Birth of Bias: A case study on the evolution of gender bias in an English language model
Accepted at the 4th Workshop on Gender Bias in Natural Language Processing (NAACL, 2022)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Detecting and mitigating harmful biases in modern language models are widely recognized as crucial, open problems. In this paper, we take a step back and investigate how language models come to be biased in the first place. We use a relatively small language model, using the LSTM architecture trained on an English Wikipedia corpus. With full access to the data and to the model parameters as they change during every step while training, we can map in detail how the representation of gender develops, what patterns in the dataset drive this, and how the model's internal state relates to the bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic and identify different phases during training. Furthermore, we show that gender information is represented increasingly locally in the input embeddings of the model and that, as a consequence, debiasing these can be effective in reducing the downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male gender are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of the findings for mitigation strategies more generally and the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages and other undesirable biases.
[ { "created": "Thu, 21 Jul 2022 00:59:04 GMT", "version": "v1" } ]
2022-07-22
[ [ "van der Wal", "Oskar", "" ], [ "Jumelet", "Jaap", "" ], [ "Schulz", "Katrin", "" ], [ "Zuidema", "Willem", "" ] ]
Detecting and mitigating harmful biases in modern language models are widely recognized as crucial, open problems. In this paper, we take a step back and investigate how language models come to be biased in the first place. We use a relatively small language model, using the LSTM architecture trained on an English Wikipedia corpus. With full access to the data and to the model parameters as they change during every step while training, we can map in detail how the representation of gender develops, what patterns in the dataset drive this, and how the model's internal state relates to the bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic and identify different phases during training. Furthermore, we show that gender information is represented increasingly locally in the input embeddings of the model and that, as a consequence, debiasing these can be effective in reducing the downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male gender are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of the findings for mitigation strategies more generally and the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages and other undesirable biases.
1902.01642
Peer-Olaf Siebers
Daniel Stroud, Christian Wagner, Peer-Olaf Siebers
Agent-Based Simulation Modelling for Reflecting on Consequences of Digital Mental Health
16 pages, 18 figures, 3 tables, working paper
null
null
null
cs.MA cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This working paper is concerned with agent-based simulation models and how to go about creating them from incomplete information. Agent-based simulations are stochastic simulations that revolve around groups of agents that each have their own characteristics and can make decisions. Such simulations can be used to emulate real life situations and to create hypothetical situations without the need for prior real-world testing. Here we describe the development of an agent-based simulation model for studying future digital mental health scenarios. An incomplete conceptual model has been used as the basis for this development. To define differences in responses to stimuli we employed fuzzy decision making logic. The model has been implemented but not yet used for structured experimentation. This is planned as our next step.
[ { "created": "Tue, 5 Feb 2019 11:15:55 GMT", "version": "v1" } ]
2019-02-06
[ [ "Stroud", "Daniel", "" ], [ "Wagner", "Christian", "" ], [ "Siebers", "Peer-Olaf", "" ] ]
This working paper is concerned with agent-based simulation models and how to go about creating them from incomplete information. Agent-based simulations are stochastic simulations that revolve around groups of agents that each have their own characteristics and can make decisions. Such simulations can be used to emulate real life situations and to create hypothetical situations without the need for prior real-world testing. Here we describe the development of an agent-based simulation model for studying future digital mental health scenarios. An incomplete conceptual model has been used as the basis for this development. To define differences in responses to stimuli we employed fuzzy decision making logic. The model has been implemented but not yet used for structured experimentation. This is planned as our next step.
1111.0432
Sangkyun Lee
Sangkyun Lee and Stephen J. Wright
Approximate Stochastic Subgradient Estimation Training for Support Vector Machines
An extended version of the ICPRAM 2012 paper
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subgradient algorithms for training support vector machines have been quite successful for solving large-scale and online learning problems. However, they have been restricted to linear kernels and strongly convex formulations. This paper describes efficient subgradient approaches without such limitations. Our approaches make use of randomized low-dimensional approximations to nonlinear kernels, and minimization of a reduced primal formulation using an algorithm based on robust stochastic approximation, which does not require strong convexity. Experiments illustrate that our approaches produce solutions of comparable prediction accuracy with the solutions acquired from existing SVM solvers, but often in much shorter time. We also suggest efficient prediction schemes that depend only on the dimension of the kernel approximation, not on the number of support vectors.
[ { "created": "Wed, 2 Nov 2011 09:24:26 GMT", "version": "v1" }, { "created": "Thu, 3 Nov 2011 13:33:27 GMT", "version": "v2" } ]
2011-11-04
[ [ "Lee", "Sangkyun", "" ], [ "Wright", "Stephen J.", "" ] ]
Subgradient algorithms for training support vector machines have been quite successful for solving large-scale and online learning problems. However, they have been restricted to linear kernels and strongly convex formulations. This paper describes efficient subgradient approaches without such limitations. Our approaches make use of randomized low-dimensional approximations to nonlinear kernels, and minimization of a reduced primal formulation using an algorithm based on robust stochastic approximation, which does not require strong convexity. Experiments illustrate that our approaches produce solutions of comparable prediction accuracy with the solutions acquired from existing SVM solvers, but often in much shorter time. We also suggest efficient prediction schemes that depend only on the dimension of the kernel approximation, not on the number of support vectors.
1405.6922
Omid Aghazadeh
Omid Aghazadeh and Stefan Carlsson
Large Scale, Large Margin Classification using Indefinite Similarity Measures
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the success of the popular kernelized support vector machines, they have two major limitations: they are restricted to Positive Semi-Definite (PSD) kernels, and their training complexity scales at least quadratically with the size of the data. Many natural measures of similarity between pairs of samples are not PSD, e.g. invariant kernels, and those that are implicitly or explicitly defined by latent variable models. In this paper, we investigate scalable approaches for using indefinite similarity measures in large margin frameworks. In particular we show that a normalization of similarity to a subset of the data points constitutes a representation suitable for linear classifiers. The result is a classifier which is competitive to kernelized SVM in terms of accuracy, despite having better training and test time complexities. Experimental results demonstrate that on the CIFAR-10 dataset, the model equipped with similarity measures invariant to rigid and non-rigid deformations can be made more than 5 times sparser while being more accurate than kernelized SVM using RBF kernels.
[ { "created": "Tue, 27 May 2014 14:18:26 GMT", "version": "v1" } ]
2014-05-28
[ [ "Aghazadeh", "Omid", "" ], [ "Carlsson", "Stefan", "" ] ]
Despite the success of the popular kernelized support vector machines, they have two major limitations: they are restricted to Positive Semi-Definite (PSD) kernels, and their training complexity scales at least quadratically with the size of the data. Many natural measures of similarity between pairs of samples are not PSD, e.g. invariant kernels, and those that are implicitly or explicitly defined by latent variable models. In this paper, we investigate scalable approaches for using indefinite similarity measures in large margin frameworks. In particular we show that a normalization of similarity to a subset of the data points constitutes a representation suitable for linear classifiers. The result is a classifier which is competitive to kernelized SVM in terms of accuracy, despite having better training and test time complexities. Experimental results demonstrate that on the CIFAR-10 dataset, the model equipped with similarity measures invariant to rigid and non-rigid deformations can be made more than 5 times sparser while being more accurate than kernelized SVM using RBF kernels.
1007.5406
Markus Lohrey
Markus Lohrey, Sebastian Maneth and Roy Mennicke
Tree structure compression with RePair
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we introduce a new linear time compression algorithm, called "Re-pair for Trees", which compresses ranked ordered trees using linear straight-line context-free tree grammars. Such grammars generalize straight-line context-free string grammars and allow basic tree operations, like traversal along edges, to be executed without prior decompression. Our algorithm can be considered as a generalization of the "Re-pair" algorithm developed by N. Jesper Larsson and Alistair Moffat in 2000. The latter algorithm is a dictionary-based compression algorithm for strings. We also introduce a succinct coding which is specialized in further compressing the grammars generated by our algorithm. This is accomplished without losing the ability to directly execute queries on this compressed representation of the input tree. Finally, we compare the grammars and output files generated by a prototype of the Re-pair for Trees algorithm with those of similar compression algorithms. The obtained results show that our algorithm outperforms its competitors in terms of compression ratio, runtime and memory usage.
[ { "created": "Fri, 30 Jul 2010 10:14:21 GMT", "version": "v1" } ]
2010-08-02
[ [ "Lohrey", "Markus", "" ], [ "Maneth", "Sebastian", "" ], [ "Mennicke", "Roy", "" ] ]
In this work we introduce a new linear time compression algorithm, called "Re-pair for Trees", which compresses ranked ordered trees using linear straight-line context-free tree grammars. Such grammars generalize straight-line context-free string grammars and allow basic tree operations, like traversal along edges, to be executed without prior decompression. Our algorithm can be considered as a generalization of the "Re-pair" algorithm developed by N. Jesper Larsson and Alistair Moffat in 2000. The latter algorithm is a dictionary-based compression algorithm for strings. We also introduce a succinct coding which is specialized in further compressing the grammars generated by our algorithm. This is accomplished without losing the ability to directly execute queries on this compressed representation of the input tree. Finally, we compare the grammars and output files generated by a prototype of the Re-pair for Trees algorithm with those of similar compression algorithms. The obtained results show that our algorithm outperforms its competitors in terms of compression ratio, runtime and memory usage.
1404.4936
Tao Zhou
Jin-Hu Liu, Tao Zhou, Zi-Ke Zhang, Zimo Yang, Chuang Liu, Wei-Min Li
Promoting cold-start items in recommender systems
6 pages, 6 figures
null
10.1371/journal.pone.0113457
null
cs.IR cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As one of the major challenges, the cold-start problem plagues nearly all recommender systems. In particular, new items will be overlooked, impeding the development of new products online. Given limited resources, how to utilize the knowledge of recommender systems and design an efficient marketing strategy for new items is extremely important. In this paper, we convert this ticklish issue into a clear mathematical problem based on a bipartite network representation. Under the most widely used algorithm in real e-commerce recommender systems, the so-called item-based collaborative filtering, we show that to simply push new items to active users is not a good strategy. To our surprise, experiments on real recommender systems indicate that to connect new items with some less active users will statistically yield better performance, namely these new items will have more chance to appear in other users' recommendation lists. Further analysis suggests that the disassortative nature of recommender systems contributes to such observation. In a word, getting in-depth understanding on recommender systems could pave the way for the owners to popularize their cold-start products with low costs.
[ { "created": "Sat, 19 Apr 2014 08:16:47 GMT", "version": "v1" } ]
2015-06-19
[ [ "Liu", "Jin-Hu", "" ], [ "Zhou", "Tao", "" ], [ "Zhang", "Zi-Ke", "" ], [ "Yang", "Zimo", "" ], [ "Liu", "Chuang", "" ], [ "Li", "Wei-Min", "" ] ]
As one of the major challenges, the cold-start problem plagues nearly all recommender systems. In particular, new items will be overlooked, impeding the development of new products online. Given limited resources, how to utilize the knowledge of recommender systems and design an efficient marketing strategy for new items is extremely important. In this paper, we convert this ticklish issue into a clear mathematical problem based on a bipartite network representation. Under the most widely used algorithm in real e-commerce recommender systems, the so-called item-based collaborative filtering, we show that to simply push new items to active users is not a good strategy. To our surprise, experiments on real recommender systems indicate that to connect new items with some less active users will statistically yield better performance, namely these new items will have more chance to appear in other users' recommendation lists. Further analysis suggests that the disassortative nature of recommender systems contributes to such observation. In a word, getting in-depth understanding on recommender systems could pave the way for the owners to popularize their cold-start products with low costs.
2005.04865
Nithin V. Sabu
Nithin V. Sabu, Neeraj Varshney and Abhishek K. Gupta
3-D Diffusive Molecular Communication with Two Fully-Absorbing Receivers: Hitting Probability and Performance Analysis
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exact analytical channel models for molecular communication via diffusion (MCvD) systems involving multiple fully absorbing receivers (FARs) in a three-dimensional (3-D) medium are hard to obtain due to the mathematical intractability of the corresponding diffusion equations. This work therefore considers an MCvD system with two spherical FARs in a 3-D diffusion-limited medium and develops several insights using an approximate analytical expression for the hitting probability of an information molecule (IM). Further, based on the hitting probability, a novel approximate closed-form analytical expression for the area under the receiver operating characteristic curve (AUC) is derived to analyze the detection performance at each FAR in the presence of the other FAR. Finally, simulation results are presented to validate the analytical results using particle-based and Monte-Carlo simulations and to yield important insights into the MCvD system performance with two FARs.
[ { "created": "Mon, 11 May 2020 05:19:37 GMT", "version": "v1" }, { "created": "Sun, 5 Jul 2020 05:46:43 GMT", "version": "v2" }, { "created": "Mon, 14 Sep 2020 14:07:45 GMT", "version": "v3" } ]
2020-09-15
[ [ "Sabu", "Nithin V.", "" ], [ "Varshney", "Neeraj", "" ], [ "Gupta", "Abhishek K.", "" ] ]
Exact analytical channel models for molecular communication via diffusion (MCvD) systems involving multiple fully absorbing receivers (FARs) in a three-dimensional (3-D) medium are hard to obtain due to the mathematical intractability of the corresponding diffusion equations. This work therefore considers an MCvD system with two spherical FARs in a 3-D diffusion-limited medium and develops several insights using an approximate analytical expression for the hitting probability of an information molecule (IM). Further, based on the hitting probability, a novel approximate closed-form analytical expression for the area under the receiver operating characteristic curve (AUC) is derived to analyze the detection performance at each FAR in the presence of the other FAR. Finally, simulation results are presented to validate the analytical results using particle-based and Monte-Carlo simulations and to yield important insights into the MCvD system performance with two FARs.
2106.15968
Salvatore Vilella
Salvatore Vilella, Alfonso Semeraro, Daniela Paolotti, Giancarlo Ruffo
The Impact of Disinformation on a Controversial Debate on Social Media
null
EPJ Data Science volume 11, Article number: 29 (2022)
10.1140/epjds/s13688-022-00342-w
null
cs.SI cs.CY
http://creativecommons.org/licenses/by/4.0/
In this work we study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter, and the role of automated accounts in the diffusion of such content. By characterising Twitter users with an \textit{Untrustworthiness} score, which tells us how frequently they engage with disinformation content, we are able to see that such bad information consumption habits are not equally distributed across users; adopting a network analysis approach, we can identify communities characterised by a very high presence of users that frequently share content from unreliable news sources. Within this context, social bots tend to inject more malicious content into the network, which often remains confined to a limited number of clusters; instead, they target reliable content in order to diversify their reach. The evidence we gather suggests that, at least in this particular case study, there is a strong interplay between social bots and users engaging with unreliable content, influencing the diffusion of the latter across the network.
[ { "created": "Wed, 30 Jun 2021 10:29:07 GMT", "version": "v1" } ]
2023-02-03
[ [ "Vilella", "Salvatore", "" ], [ "Semeraro", "Alfonso", "" ], [ "Paolotti", "Daniela", "" ], [ "Ruffo", "Giancarlo", "" ] ]
In this work we study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter, and the role of automated accounts in the diffusion of such content. By characterising Twitter users with an \textit{Untrustworthiness} score, which tells us how frequently they engage with disinformation content, we are able to see that such bad information consumption habits are not equally distributed across users; adopting a network analysis approach, we can identify communities characterised by a very high presence of users that frequently share content from unreliable news sources. Within this context, social bots tend to inject more malicious content into the network, which often remains confined to a limited number of clusters; instead, they target reliable content in order to diversify their reach. The evidence we gather suggests that, at least in this particular case study, there is a strong interplay between social bots and users engaging with unreliable content, influencing the diffusion of the latter across the network.
1808.07586
Michael Ekstrand
Michael D. Ekstrand and Daniel Kluver
Exploring Author Gender in Book Rating and Recommendation
Expanded version under review
null
null
null
cs.IR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collaborative filtering algorithms find useful patterns in rating and consumption data and exploit these patterns to guide users to good items. Many of the patterns in rating datasets reflect important real-world differences between the various users and items in the data; other patterns may be irrelevant or possibly undesirable for social or ethical reasons, particularly if they reflect undesired discrimination, such as discrimination in publishing or purchasing against authors who are women or ethnic minorities. In this work, we examine the response of collaborative filtering recommender algorithms to the distribution of their input data with respect to a dimension of social concern, namely content creator gender. Using publicly-available book ratings data, we measure the distribution of the genders of the authors of books in user rating profiles and recommendation lists produced from this data. We find that common collaborative filtering algorithms differ in the gender distribution of their recommendation lists, and in the relationship of that output distribution to user profile distribution.
[ { "created": "Wed, 22 Aug 2018 23:00:26 GMT", "version": "v1" }, { "created": "Sat, 25 Jul 2020 00:14:02 GMT", "version": "v2" } ]
2020-07-28
[ [ "Ekstrand", "Michael D.", "" ], [ "Kluver", "Daniel", "" ] ]
Collaborative filtering algorithms find useful patterns in rating and consumption data and exploit these patterns to guide users to good items. Many of the patterns in rating datasets reflect important real-world differences between the various users and items in the data; other patterns may be irrelevant or possibly undesirable for social or ethical reasons, particularly if they reflect undesired discrimination, such as discrimination in publishing or purchasing against authors who are women or ethnic minorities. In this work, we examine the response of collaborative filtering recommender algorithms to the distribution of their input data with respect to a dimension of social concern, namely content creator gender. Using publicly-available book ratings data, we measure the distribution of the genders of the authors of books in user rating profiles and recommendation lists produced from this data. We find that common collaborative filtering algorithms differ in the gender distribution of their recommendation lists, and in the relationship of that output distribution to user profile distribution.
2401.11611
Xihaier Luo
Xihaier Luo, Wei Xu, Yihui Ren, Shinjae Yoo, Balu Nadiga
Continuous Field Reconstruction from Sparse Observations with Implicit Neural Networks
25 pages,21 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Reliably reconstructing physical fields from sparse sensor data is a challenge that frequently arises in many scientific domains. In practice, the process generating the data often is not understood to sufficient accuracy. Therefore, there is a growing interest in using the deep neural network route to address the problem. This work presents a novel approach that learns a continuous representation of the physical field using implicit neural representations (INRs). Specifically, after factorizing spatiotemporal variability into spatial and temporal components using the separation of variables technique, the method learns relevant basis functions from sparsely sampled irregular data points to develop a continuous representation of the data. In experimental evaluations, the proposed model outperforms recent INR methods, offering superior reconstruction quality on simulation data from a state-of-the-art climate model and a second dataset that comprises ultra-high resolution satellite-based sea surface temperature fields.
[ { "created": "Sun, 21 Jan 2024 22:18:29 GMT", "version": "v1" } ]
2024-01-23
[ [ "Luo", "Xihaier", "" ], [ "Xu", "Wei", "" ], [ "Ren", "Yihui", "" ], [ "Yoo", "Shinjae", "" ], [ "Nadiga", "Balu", "" ] ]
Reliably reconstructing physical fields from sparse sensor data is a challenge that frequently arises in many scientific domains. In practice, the process generating the data often is not understood to sufficient accuracy. Therefore, there is a growing interest in using the deep neural network route to address the problem. This work presents a novel approach that learns a continuous representation of the physical field using implicit neural representations (INRs). Specifically, after factorizing spatiotemporal variability into spatial and temporal components using the separation of variables technique, the method learns relevant basis functions from sparsely sampled irregular data points to develop a continuous representation of the data. In experimental evaluations, the proposed model outperforms recent INR methods, offering superior reconstruction quality on simulation data from a state-of-the-art climate model and a second dataset that comprises ultra-high resolution satellite-based sea surface temperature fields.
1202.0467
Walid Saad
Walid Saad, Zhu Han, Rong Zheng, Are Hj{\o}rungnes, Tamer Ba\c{s}ar, and H. Vincent Poor
Coalitional Games in Partition Form for Joint Spectrum Sensing and Access in Cognitive Radio Networks
IEEE Journal on Selected Topics in Signal Processing (JSTSP), Special Issue on Game Theory, to appear, 2012
null
10.1109/JSTSP.2011.2175699
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unlicensed secondary users (SUs) in cognitive radio networks are subject to an inherent tradeoff between spectrum sensing and spectrum access. Although each SU has an incentive to sense the primary user (PU) channels for locating spectrum holes, this exploration of the spectrum can come at the expense of a shorter transmission time, and, hence, a possibly smaller capacity for data transmission. This paper investigates the impact of this tradeoff on the cooperative strategies of a network of SUs that seek to cooperate in order to improve their view of the spectrum (sensing), reduce the possibility of interference among each other, and improve their transmission capacity (access). The problem is modeled as a coalitional game in partition form and an algorithm for coalition formation is proposed. Using the proposed algorithm, the SUs can make individual distributed decisions to join or leave a coalition while maximizing their utilities which capture the average time spent for sensing as well as the capacity achieved while accessing the spectrum. It is shown that, by using the proposed algorithm, the SUs can self-organize into a network partition composed of disjoint coalitions, with the members of each coalition cooperating to jointly optimize their sensing and access performance. Simulation results show the performance improvement that the proposed algorithm yields with respect to the non-cooperative case. The results also show how the algorithm allows the SUs to self-adapt to changes in the environment such as the change in the traffic of the PUs, or slow mobility.
[ { "created": "Thu, 2 Feb 2012 15:38:09 GMT", "version": "v1" } ]
2015-06-04
[ [ "Saad", "Walid", "" ], [ "Han", "Zhu", "" ], [ "Zheng", "Rong", "" ], [ "Hjørungnes", "Are", "" ], [ "Başar", "Tamer", "" ], [ "Poor", "H. Vincent", "" ] ]
Unlicensed secondary users (SUs) in cognitive radio networks are subject to an inherent tradeoff between spectrum sensing and spectrum access. Although each SU has an incentive to sense the primary user (PU) channels for locating spectrum holes, this exploration of the spectrum can come at the expense of a shorter transmission time, and, hence, a possibly smaller capacity for data transmission. This paper investigates the impact of this tradeoff on the cooperative strategies of a network of SUs that seek to cooperate in order to improve their view of the spectrum (sensing), reduce the possibility of interference among each other, and improve their transmission capacity (access). The problem is modeled as a coalitional game in partition form and an algorithm for coalition formation is proposed. Using the proposed algorithm, the SUs can make individual distributed decisions to join or leave a coalition while maximizing their utilities which capture the average time spent for sensing as well as the capacity achieved while accessing the spectrum. It is shown that, by using the proposed algorithm, the SUs can self-organize into a network partition composed of disjoint coalitions, with the members of each coalition cooperating to jointly optimize their sensing and access performance. Simulation results show the performance improvement that the proposed algorithm yields with respect to the non-cooperative case. The results also show how the algorithm allows the SUs to self-adapt to changes in the environment such as the change in the traffic of the PUs, or slow mobility.
1904.08138
Feiyang Chen
Feiyang Chen, Ziqian Luo, Yanyan Xu, Dengfeng Ke
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis
Accepted by AAAI2020 Workshop: AffCon2020
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Sentiment analysis, mostly based on text, has been developing rapidly in the last decade and has attracted widespread attention in both academia and industry. However, information in the real world usually comes from multiple modalities, such as audio and text. Therefore, in this paper, based on audio and text, we consider the task of multimodal sentiment analysis and propose a novel fusion strategy, including both multi-feature fusion and multi-modality fusion, to improve the accuracy of audio-text sentiment analysis. We call it the DFF-ATMF (Deep Feature Fusion - Audio and Text Modality Fusion) model, which consists of two parallel branches, an audio-modality-based branch and a text-modality-based branch. Its core mechanisms are the fusion of multiple feature vectors and multiple modality attention. Experiments on the CMU-MOSI dataset and the recently released CMU-MOSEI dataset, both collected from YouTube for sentiment analysis, show the very competitive results of our DFF-ATMF model. Furthermore, by virtue of attention-weight distribution heatmaps, we also demonstrate that the deep features learned by DFF-ATMF are complementary to each other and robust. Surprisingly, DFF-ATMF also achieves new state-of-the-art results on the IEMOCAP dataset, indicating that the proposed fusion strategy also has good generalization ability for multimodal emotion recognition.
[ { "created": "Wed, 17 Apr 2019 08:46:53 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2019 02:43:45 GMT", "version": "v2" }, { "created": "Thu, 25 Apr 2019 03:40:18 GMT", "version": "v3" }, { "created": "Mon, 22 Jul 2019 02:22:51 GMT", "version": "v4" }, { "created": "Wed, 11 Dec 2019 17:29:01 GMT", "version": "v5" } ]
2019-12-12
[ [ "Chen", "Feiyang", "" ], [ "Luo", "Ziqian", "" ], [ "Xu", "Yanyan", "" ], [ "Ke", "Dengfeng", "" ] ]
Sentiment analysis, mostly based on text, has been developing rapidly in the last decade and has attracted widespread attention in both academia and industry. However, information in the real world usually comes from multiple modalities, such as audio and text. Therefore, in this paper, based on audio and text, we consider the task of multimodal sentiment analysis and propose a novel fusion strategy, including both multi-feature fusion and multi-modality fusion, to improve the accuracy of audio-text sentiment analysis. We call it the DFF-ATMF (Deep Feature Fusion - Audio and Text Modality Fusion) model, which consists of two parallel branches, an audio-modality-based branch and a text-modality-based branch. Its core mechanisms are the fusion of multiple feature vectors and multiple modality attention. Experiments on the CMU-MOSI dataset and the recently released CMU-MOSEI dataset, both collected from YouTube for sentiment analysis, show the very competitive results of our DFF-ATMF model. Furthermore, by virtue of attention-weight distribution heatmaps, we also demonstrate that the deep features learned by DFF-ATMF are complementary to each other and robust. Surprisingly, DFF-ATMF also achieves new state-of-the-art results on the IEMOCAP dataset, indicating that the proposed fusion strategy also has good generalization ability for multimodal emotion recognition.
2308.07081
Jivnesh Sandhan
Jivnesh Sandhan, Amruta Barbadikar, Malay Maity, Pavankumar Satuluri, Tushar Sandhan, Ravi M. Gupta, Pawan Goyal and Laxmidhar Behera
Aesthetics of Sanskrit Poetry from the Perspective of Computational Linguistics: A Case Study Analysis on Siksastaka
15 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Sanskrit poetry has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries. However, not much attention has been devoted to uncovering the hidden beauty of Sanskrit poetry in computational linguistics. This article explores the intersection of Sanskrit poetry and computational linguistics by proposing a roadmap of an interpretable framework to analyze and classify the qualities and characteristics of fine Sanskrit poetry. We discuss the rich tradition of Sanskrit poetry and the significance of computational linguistics in automatically identifying the characteristics of fine poetry. The proposed framework involves a human-in-the-loop approach that combines deterministic aspects delegated to machines and deep semantics left to human experts. We provide a deep analysis of Siksastaka, a Sanskrit poem, from the perspective of 6 prominent kavyashastra schools, to illustrate the proposed framework. Additionally, we provide compound, dependency, anvaya (prose order linearised form), meter, rasa (mood), alankar (figure of speech), and riti (writing style) annotations for Siksastaka and a web application to illustrate the poem's analysis and annotations. Our key contributions include the proposed framework, the analysis of Siksastaka, the annotations and the web application for future research. Link for interactive analysis: https://sanskritshala.github.io/shikshastakam/
[ { "created": "Mon, 14 Aug 2023 11:26:25 GMT", "version": "v1" } ]
2023-08-15
[ [ "Sandhan", "Jivnesh", "" ], [ "Barbadikar", "Amruta", "" ], [ "Maity", "Malay", "" ], [ "Satuluri", "Pavankumar", "" ], [ "Sandhan", "Tushar", "" ], [ "Gupta", "Ravi M.", "" ], [ "Goyal", "Pawan", "" ], [ "Behera", "Laxmidhar", "" ] ]
Sanskrit poetry has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries. However, not much attention has been devoted to uncovering the hidden beauty of Sanskrit poetry in computational linguistics. This article explores the intersection of Sanskrit poetry and computational linguistics by proposing a roadmap of an interpretable framework to analyze and classify the qualities and characteristics of fine Sanskrit poetry. We discuss the rich tradition of Sanskrit poetry and the significance of computational linguistics in automatically identifying the characteristics of fine poetry. The proposed framework involves a human-in-the-loop approach that combines deterministic aspects delegated to machines and deep semantics left to human experts. We provide a deep analysis of Siksastaka, a Sanskrit poem, from the perspective of 6 prominent kavyashastra schools, to illustrate the proposed framework. Additionally, we provide compound, dependency, anvaya (prose order linearised form), meter, rasa (mood), alankar (figure of speech), and riti (writing style) annotations for Siksastaka and a web application to illustrate the poem's analysis and annotations. Our key contributions include the proposed framework, the analysis of Siksastaka, the annotations and the web application for future research. Link for interactive analysis: https://sanskritshala.github.io/shikshastakam/
2104.08549
Muhammad Nabeel
Maxim Penner, Muhammad Nabeel, J\"urgen Peissig
Link-Level Performance Evaluation of IMT-2020 Candidate Technology: DECT-2020 New Radio
6 pages, 5 figures, 3 tables
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ETSI has recently introduced DECT-2020 New Radio (NR) as an IMT-2020 candidate technology for the mMTC and URLLC use cases. To consider DECT-2020 NR as an IMT-2020 technology, the ITU-R has designated different independent evaluation groups to assess its performance against the IMT-2020 requirements. These independent evaluation groups are now in the process of investigating DECT-2020 NR. In order to successfully assess a technology, one important aspect is to fully understand the underlying physical layer and its performance in different environments. Therefore, in this paper, we focus on the physical layer of DECT-2020 NR and investigate its link-level performance with the standard channel models provided by the ITU-R for evaluation. We perform extensive simulations to analyze the performance of DECT-2020 NR for both URLLC and mMTC use cases. The results presented in this work are beneficial for the independent evaluation groups and researchers, as they can help calibrate their physical-layer performance curves. These results can also be used directly in future system-level evaluations of DECT-2020 NR.
[ { "created": "Sat, 17 Apr 2021 14:00:33 GMT", "version": "v1" } ]
2021-04-20
[ [ "Penner", "Maxim", "" ], [ "Nabeel", "Muhammad", "" ], [ "Peissig", "Jürgen", "" ] ]
The ETSI has recently introduced DECT-2020 New Radio (NR) as an IMT-2020 candidate technology for the mMTC and URLLC use cases. To consider DECT-2020 NR as an IMT-2020 technology, the ITU-R has designated different independent evaluation groups to assess its performance against the IMT-2020 requirements. These independent evaluation groups are now in the process of investigating DECT-2020 NR. In order to successfully assess a technology, one important aspect is to fully understand the underlying physical layer and its performance in different environments. Therefore, in this paper, we focus on the physical layer of DECT-2020 NR and investigate its link-level performance with the standard channel models provided by the ITU-R for evaluation. We perform extensive simulations to analyze the performance of DECT-2020 NR for both URLLC and mMTC use cases. The results presented in this work are beneficial for the independent evaluation groups and researchers, as they can help calibrate their physical-layer performance curves. These results can also be used directly in future system-level evaluations of DECT-2020 NR.
1611.06547
Henk Moed
Henk F. Moed
A critical comparative analysis of five world university rankings
Author copy of a paper accepted for publication in Scientometrics, 15 Nov. 2016 v2
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To provide users with insight into the value and limits of world university rankings, a comparative analysis is conducted of five ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented, investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, selection of indicators and normalizations influence the ranking positions of given institutions.
[ { "created": "Sun, 20 Nov 2016 17:06:50 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2016 14:14:04 GMT", "version": "v2" } ]
2016-12-06
[ [ "Moed", "Henk F.", "" ] ]
To provide users with insight into the value and limits of world university rankings, a comparative analysis is conducted of five ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented, investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, selection of indicators and normalizations influence the ranking positions of given institutions.
2103.14191
Marcelo De Abranches
Marcelo Abranches, Karl Olson and Eric Keller
Infinity: A Scalable Infrastructure for In-Network Applications
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Network programmability is an area of research both defined by its potential and its current limitations. While programmable hardware enables customization of device operation, tailoring processing to finely tuned objectives, limited resources stifle much of the capability and scalability desired for future technologies. Current solutions to overcome these limitations simply shift the problem, temporarily offloading memory needs or processing to other systems while incurring both round-trip time and complexity costs. To overcome these unnecessary costs, we introduce Infinity, a resource disaggregation method to move processing to capable devices while continuing to forward as the original owner, limiting unnecessary buffering and round-trip processing. By forwarding both the processing need and associated data simultaneously we are able to scale operation with minimal overhead and delay, improving both capability and performance objectives for in-network processing.
[ { "created": "Fri, 26 Mar 2021 00:55:08 GMT", "version": "v1" } ]
2021-03-29
[ [ "Abranches", "Marcelo", "" ], [ "Olson", "Karl", "" ], [ "Keller", "Eric", "" ] ]
Network programmability is an area of research both defined by its potential and its current limitations. While programmable hardware enables customization of device operation, tailoring processing to finely tuned objectives, limited resources stifle much of the capability and scalability desired for future technologies. Current solutions to overcome these limitations simply shift the problem, temporarily offloading memory needs or processing to other systems while incurring both round-trip time and complexity costs. To overcome these unnecessary costs, we introduce Infinity, a resource disaggregation method to move processing to capable devices while continuing to forward as the original owner, limiting unnecessary buffering and round-trip processing. By forwarding both the processing need and associated data simultaneously we are able to scale operation with minimal overhead and delay, improving both capability and performance objectives for in-network processing.
2307.09533
Aditya Potukuchi
Charlie Carlson, Ewan Davies, Alexandra Kolla, and Aditya Potukuchi
Approximately counting independent sets in dense bipartite graphs via subspace enumeration
15 pages
null
null
null
cs.DS math.CO
http://creativecommons.org/licenses/by/4.0/
We give a randomized algorithm that approximates the number of independent sets in a dense, regular bipartite graph -- in the language of approximate counting, we give an FPRAS for #BIS on the class of dense, regular bipartite graphs. Efficient counting algorithms typically apply to ``high-temperature'' problems on bounded-degree graphs, and our contribution is a notable exception as it applies to dense graphs in a low-temperature setting. Our methods give a counting-focused complement to the long line of work in combinatorial optimization showing that CSPs such as Max-Cut and Unique Games are easy on dense graphs via spectral arguments. The proof exploits the fact that dense, regular graphs exhibit a kind of small-set expansion (i.e. bounded threshold rank), which via subspace enumeration lets us enumerate small cuts efficiently.
[ { "created": "Tue, 18 Jul 2023 18:23:24 GMT", "version": "v1" } ]
2023-07-20
[ [ "Carlson", "Charlie", "" ], [ "Davies", "Ewan", "" ], [ "Kolla", "Alexandra", "" ], [ "Potukuchi", "Aditya", "" ] ]
We give a randomized algorithm that approximates the number of independent sets in a dense, regular bipartite graph -- in the language of approximate counting, we give an FPRAS for #BIS on the class of dense, regular bipartite graphs. Efficient counting algorithms typically apply to ``high-temperature'' problems on bounded-degree graphs, and our contribution is a notable exception as it applies to dense graphs in a low-temperature setting. Our methods give a counting-focused complement to the long line of work in combinatorial optimization showing that CSPs such as Max-Cut and Unique Games are easy on dense graphs via spectral arguments. The proof exploits the fact that dense, regular graphs exhibit a kind of small-set expansion (i.e. bounded threshold rank), which via subspace enumeration lets us enumerate small cuts efficiently.
2012.01037
Baohan Xu
Rui An, Xingtian Shi, Baohan Xu
Fast Automatic Feature Selection for Multi-Period Sliding Window Aggregate in Time Series
ICDM 2020
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
As one of the most well-known artificial feature samplers, the sliding window is widely used in scenarios where spatial and temporal information exists, such as computer vision, natural language processing, data streams, and time series. Among these, time series are common in many scenarios such as credit card payments, user behavior, and sensors. General feature selection for features extracted by sliding-window aggregation calls for time-consuming iteration to generate the features, after which traditional feature selection methods are employed to rank them. The choice of the key parameter, i.e., the period of the sliding windows, depends on domain knowledge and requires tedious trial and error. Currently, there is no automatic method to handle feature selection for sliding-window aggregates. As the time consumption of feature generation with different periods and sliding windows is huge, it is very hard to enumerate them all and then select among them. In this paper, we propose a general framework using Markov chains to solve this problem. This framework is very efficient and has high accuracy, such that it is able to perform feature selection over a variety of features and period options. We illustrate the details with 2 common sliding windows and 3 types of aggregation operators. It is also easy to extend this framework to more sliding windows and aggregation operators by employing existing theory about Markov chains.
[ { "created": "Wed, 2 Dec 2020 09:14:30 GMT", "version": "v1" } ]
2020-12-03
[ [ "An", "Rui", "" ], [ "Shi", "Xingtian", "" ], [ "Xu", "Baohan", "" ] ]
As one of the most well-known artificial feature samplers, the sliding window is widely used in scenarios where spatial and temporal information exists, such as computer vision, natural language processing, data streams, and time series. Among these, time series are common in many scenarios such as credit card payments, user behavior, and sensors. General feature selection for features extracted by sliding-window aggregation calls for time-consuming iteration to generate the features, after which traditional feature selection methods are employed to rank them. The choice of the key parameter, i.e., the period of the sliding windows, depends on domain knowledge and requires tedious trial and error. Currently, there is no automatic method to handle feature selection for sliding-window aggregates. As the time consumption of feature generation with different periods and sliding windows is huge, it is very hard to enumerate them all and then select among them. In this paper, we propose a general framework using Markov chains to solve this problem. This framework is very efficient and has high accuracy, such that it is able to perform feature selection over a variety of features and period options. We illustrate the details with 2 common sliding windows and 3 types of aggregation operators. It is also easy to extend this framework to more sliding windows and aggregation operators by employing existing theory about Markov chains.
2106.12659
Stephen Kelly
Stephen Kelly, Tatiana Voegerl, Wolfgang Banzhaf, Cedric Gondro
Evolving Hierarchical Memory-Prediction Machines in Multi-Task Reinforcement Learning
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
A fundamental aspect of behaviour is the ability to encode salient features of experience in memory and use these memories, in combination with current sensory information, to predict the best action for each situation such that long-term objectives are maximized. The world is highly dynamic, and behavioural agents must generalize across a variety of environments and objectives over time. This scenario can be modeled as a partially-observable multi-task reinforcement learning problem. We use genetic programming to evolve highly-generalized agents capable of operating in six unique environments from the control literature, including OpenAI's entire Classic Control suite. This requires the agent to support discrete and continuous actions simultaneously. No task-identification sensor inputs are provided, thus agents must identify tasks from the dynamics of state variables alone and define control policies for each task. We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory. The resulting agents are competitive with task-specific agents in all six environments. Furthermore, the hierarchical structure of programs allows for dynamic run-time complexity, which results in relatively efficient operation.
[ { "created": "Wed, 23 Jun 2021 21:34:32 GMT", "version": "v1" } ]
2021-06-25
[ [ "Kelly", "Stephen", "" ], [ "Voegerl", "Tatiana", "" ], [ "Banzhaf", "Wolfgang", "" ], [ "Gondro", "Cedric", "" ] ]
A fundamental aspect of behaviour is the ability to encode salient features of experience in memory and use these memories, in combination with current sensory information, to predict the best action for each situation such that long-term objectives are maximized. The world is highly dynamic, and behavioural agents must generalize across a variety of environments and objectives over time. This scenario can be modeled as a partially-observable multi-task reinforcement learning problem. We use genetic programming to evolve highly-generalized agents capable of operating in six unique environments from the control literature, including OpenAI's entire Classic Control suite. This requires the agent to support discrete and continuous actions simultaneously. No task-identification sensor inputs are provided, thus agents must identify tasks from the dynamics of state variables alone and define control policies for each task. We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory. The resulting agents are competitive with task-specific agents in all six environments. Furthermore, the hierarchical structure of programs allows for dynamic run-time complexity, which results in relatively efficient operation.
2006.09074
Amir Lellouche
Uriel Feige, Amir Lellouche
Quantitative Group Testing and the rank of random matrices
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a random Bernoulli matrix $ A\in \{0,1\}^{m\times n} $, an integer $ 0< k < n $ and the vector $ y:=Ax $, where $ x \in \{0,1\}^n $ is of Hamming weight $ k $, the objective in the {\em Quantitative Group Testing} (QGT) problem is to recover $ x $. This problem is more difficult the smaller $m$ is. For parameter ranges of interest to us, known polynomial time algorithms require values of $m$ that are much larger than $k$. In this work, we define a seemingly easier problem that we refer to as {\em Subset Select}. Given the same input as in QGT, the objective in Subset Select is to return a subset $ S \subseteq [n] $ of cardinality $ m $, such that for all $ i\in [n] $, if $ x_i = 1 $ then $ i\in S $. We show that if the square submatrix of $A$ defined by the columns indexed by $S$ has nearly full rank, then from the solution of the Subset Select problem we can recover in polynomial-time the solution $x$ to the QGT problem. We conjecture that for every polynomial time Subset Select algorithm, the resulting output matrix will satisfy the desired rank condition. We prove the conjecture for some classes of algorithms. Using this reduction, we provide some examples of how to improve known QGT algorithms. Using theoretical analysis and simulations, we demonstrate that the modified algorithms solve the QGT problem for values of $ m $ that are smaller than those required for the original algorithms.
[ { "created": "Tue, 16 Jun 2020 11:08:22 GMT", "version": "v1" } ]
2020-06-17
[ [ "Feige", "Uriel", "" ], [ "Lellouche", "Amir", "" ] ]
Given a random Bernoulli matrix $ A\in \{0,1\}^{m\times n} $, an integer $ 0< k < n $ and the vector $ y:=Ax $, where $ x \in \{0,1\}^n $ is of Hamming weight $ k $, the objective in the {\em Quantitative Group Testing} (QGT) problem is to recover $ x $. This problem is more difficult the smaller $m$ is. For parameter ranges of interest to us, known polynomial time algorithms require values of $m$ that are much larger than $k$. In this work, we define a seemingly easier problem that we refer to as {\em Subset Select}. Given the same input as in QGT, the objective in Subset Select is to return a subset $ S \subseteq [n] $ of cardinality $ m $, such that for all $ i\in [n] $, if $ x_i = 1 $ then $ i\in S $. We show that if the square submatrix of $A$ defined by the columns indexed by $S$ has nearly full rank, then from the solution of the Subset Select problem we can recover in polynomial-time the solution $x$ to the QGT problem. We conjecture that for every polynomial time Subset Select algorithm, the resulting output matrix will satisfy the desired rank condition. We prove the conjecture for some classes of algorithms. Using this reduction, we provide some examples of how to improve known QGT algorithms. Using theoretical analysis and simulations, we demonstrate that the modified algorithms solve the QGT problem for values of $ m $ that are smaller than those required for the original algorithms.
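A minimal sketch of the QGT setup described above, with hypothetical helper names; a tiny brute-force search stands in for the polynomial-time algorithms the abstract discusses, only to make the problem statement concrete.

```python
import itertools
import random

def qgt_instance(n, k, m, seed=0):
    """Build a tiny QGT instance: Bernoulli(1/2) matrix A, a hidden
    weight-k binary vector x, and the observed counts y = A x."""
    rng = random.Random(seed)
    A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
    support = set(rng.sample(range(n), k))
    x = [1 if i in support else 0 for i in range(n)]
    y = [sum(a * b for a, b in zip(row, x)) for row in A]
    return A, x, y

def brute_force_qgt(A, y, n, k):
    """Exhaustive recovery for tiny n: return a weight-k x consistent
    with all measurements y, or None if none exists."""
    for support in itertools.combinations(range(n), k):
        x = [1 if i in support else 0 for i in range(n)]
        if all(sum(a * b for a, b in zip(row, x)) == yi
               for row, yi in zip(A, y)):
            return x
    return None

A, x, y = qgt_instance(n=8, k=2, m=6)
x_hat = brute_force_qgt(A, y, n=8, k=2)  # consistent with y by construction
```

The recovered `x_hat` is always consistent with `y`; with enough measurements it coincides with the hidden `x`, which is exactly the regime (how small `m` can be) the paper studies.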
0906.0205
Forrest Sheng Bao
Yuanlin Zhang and Forrest Sheng Bao
A Survey of Tree Convex Sets Test
13 pages, 5 figures, 2 tables
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Tree convex sets are a collection of sets such that each set in the collection is a subtree of a tree whose nodes are the elements of these sets. They extend the concept of row convex sets, each of which is an interval over a total ordering of the elements of those sets. They have been applied to identify tractable Constraint Satisfaction Problems and Combinatorial Auction Problems. Recently, polynomial algorithms have been proposed to recognize tree convex sets. In this paper, we review the materials that are key to a linear recognition algorithm.
[ { "created": "Mon, 1 Jun 2009 03:54:42 GMT", "version": "v1" } ]
2009-06-03
[ [ "Zhang", "Yuanlin", "" ], [ "Bao", "Forrest Sheng", "" ] ]
Tree convex sets are a collection of sets such that each set in the collection is a subtree of a tree whose nodes are the elements of these sets. They extend the concept of row convex sets, each of which is an interval over a total ordering of the elements of those sets. They have been applied to identify tractable Constraint Satisfaction Problems and Combinatorial Auction Problems. Recently, polynomial algorithms have been proposed to recognize tree convex sets. In this paper, we review the materials that are key to a linear recognition algorithm.
2303.12454
Hannes Waclawek
Stefan Huber, Hannes Waclawek
$\mathcal{C}^k$-continuous Spline Approximation with TensorFlow Gradient Descent Optimizers
This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this contribution is published in Computer Aided Systems Theory - EUROCAST 2022 and is available online at https://doi.org/10.1007/978-3-031-25312-6_68
Moreno-D\'iaz, R., Pichler, F., Quesada-Arencibia, A. (eds) Computer Aided Systems Theory - EUROCAST 2022. EUROCAST 2022. Lecture Notes in Computer Science, vol 13789. Springer, Cham
10.1007/978-3-031-25312-6_68
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work we present an "out-of-the-box" application of Machine Learning (ML) optimizers for an industrial optimization problem. We introduce a piecewise polynomial model (spline) for fitting of $\mathcal{C}^k$-continuous functions, which can be deployed in a cam approximation setting. We then use the gradient descent optimization context provided by the machine learning framework TensorFlow to optimize the model parameters with respect to approximation quality and $\mathcal{C}^k$-continuity and evaluate available optimizers. Our experiments show that the problem solution is feasible using TensorFlow gradient tapes and that AMSGrad and SGD show the best results among available TensorFlow optimizers. Furthermore, we introduce a novel regularization approach to improve SGD convergence. Although experiments show that remaining discontinuities after optimization are small, we can eliminate these errors using a presented algorithm which has impact only on affected derivatives in the local spline segment.
[ { "created": "Wed, 22 Mar 2023 10:52:21 GMT", "version": "v1" } ]
2023-03-23
[ [ "Huber", "Stefan", "" ], [ "Waclawek", "Hannes", "" ] ]
In this work we present an "out-of-the-box" application of Machine Learning (ML) optimizers for an industrial optimization problem. We introduce a piecewise polynomial model (spline) for fitting of $\mathcal{C}^k$-continuous functions, which can be deployed in a cam approximation setting. We then use the gradient descent optimization context provided by the machine learning framework TensorFlow to optimize the model parameters with respect to approximation quality and $\mathcal{C}^k$-continuity and evaluate available optimizers. Our experiments show that the problem solution is feasible using TensorFlow gradient tapes and that AMSGrad and SGD show the best results among available TensorFlow optimizers. Furthermore, we introduce a novel regularization approach to improve SGD convergence. Although experiments show that remaining discontinuities after optimization are small, we can eliminate these errors using a presented algorithm which has impact only on affected derivatives in the local spline segment.
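A dependency-free sketch of the underlying idea: plain gradient descent on a least-squares polynomial fit. This is only an illustration; the paper's actual implementation uses TensorFlow gradient tapes, piecewise segments, and $\mathcal{C}^k$-continuity penalties, all of which are omitted here.

```python
def fit_polynomial_gd(xs, ys, degree=2, lr=0.01, steps=5000):
    """Fit polynomial coefficients by gradient descent on the mean
    squared error (1/n) * sum (p(x) - y)^2."""
    coef = [0.0] * (degree + 1)
    n = len(xs)
    for _ in range(steps):
        grad = [0.0] * (degree + 1)
        for x, y in zip(xs, ys):
            pred = sum(c * x**j for j, c in enumerate(coef))
            err = pred - y
            # d/dc_j of (pred - y)^2 is 2 * err * x^j
            for j in range(degree + 1):
                grad[j] += 2 * err * x**j / n
        coef = [c - lr * g for c, g in zip(coef, grad)]
    return coef

# Recover y = x^2 from five samples on [-1, 1]
coef = fit_polynomial_gd([-1.0, -0.5, 0.0, 0.5, 1.0],
                         [1.0, 0.25, 0.0, 0.25, 1.0], degree=2)
```

Swapping the hand-written update for an ML-framework optimizer (AMSGrad, SGD, etc.) is essentially what the abstract means by an "out-of-the-box" application.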
2106.08867
Tim Murray-Browne
Tim Murray-Browne and Panagiotis Tigas
Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders
Published at the International Conference on New Interfaces for Musical Expression, June 2021. 3000 word short paper. 5 figures plus video which may be seen at https://timmb.com/sonified-body-r-and-d-lab
null
null
null
cs.HC cs.MM
http://creativecommons.org/licenses/by/4.0/
In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator and MIMIC allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
[ { "created": "Wed, 16 Jun 2021 15:40:53 GMT", "version": "v1" } ]
2021-06-17
[ [ "Murray-Browne", "Tim", "" ], [ "Tigas", "Panagiotis", "" ] ]
In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator and MIMIC allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
2104.00210
Jaeyong Chung
Phuoc Pham, Jacob Abraham, Jaeyong Chung
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Quantizing weights and activations of deep neural networks is essential for deploying them in resource-constrained devices, or cloud platforms for at-scale services. While binarization is a special case of quantization, this extreme case often leads to several training difficulties, and necessitates specialized models and training methods. As a result, recent quantization methods do not provide binarization, thus losing the most resource-efficient option, and quantized and binarized networks have been distinct research areas. We examine binarization difficulties in a quantization framework and find that all we need to enable the binary training are a symmetric quantizer, good initialization, and careful hyperparameter selection. These techniques also lead to substantial improvements in multi-bit quantization. We demonstrate our unified quantization framework, denoted as UniQ, on the ImageNet dataset with various architectures such as ResNet-18,-34 and MobileNetV2. For multi-bit quantization, UniQ outperforms existing methods to achieve the state-of-the-art accuracy. In binarization, the achieved accuracy is comparable to existing state-of-the-art methods even without modifying the original architectures.
[ { "created": "Thu, 1 Apr 2021 02:33:31 GMT", "version": "v1" } ]
2021-04-02
[ [ "Pham", "Phuoc", "" ], [ "Abraham", "Jacob", "" ], [ "Chung", "Jaeyong", "" ] ]
Quantizing weights and activations of deep neural networks is essential for deploying them in resource-constrained devices, or cloud platforms for at-scale services. While binarization is a special case of quantization, this extreme case often leads to several training difficulties, and necessitates specialized models and training methods. As a result, recent quantization methods do not provide binarization, thus losing the most resource-efficient option, and quantized and binarized networks have been distinct research areas. We examine binarization difficulties in a quantization framework and find that all we need to enable the binary training are a symmetric quantizer, good initialization, and careful hyperparameter selection. These techniques also lead to substantial improvements in multi-bit quantization. We demonstrate our unified quantization framework, denoted as UniQ, on the ImageNet dataset with various architectures such as ResNet-18,-34 and MobileNetV2. For multi-bit quantization, UniQ outperforms existing methods to achieve the state-of-the-art accuracy. In binarization, the achieved accuracy is comparable to existing state-of-the-art methods even without modifying the original architectures.
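The symmetric quantizer at the heart of the framework can be illustrated with a scalar sketch. This is a hypothetical simplification with a fixed scale; in the paper the quantizer parameters are learned during training.

```python
def symmetric_quantize(x, bits, scale):
    """Uniform symmetric quantizer: signed levels in
    [-(2^(bits-1) - 1), 2^(bits-1) - 1], scaled by `scale`.
    For bits == 1 it degenerates to binarization {-scale, +scale},
    which is why a symmetric design covers both regimes."""
    if bits == 1:
        return scale if x >= 0 else -scale
    qmax = 2 ** (bits - 1) - 1
    q = round(x / scale)           # nearest quantization level
    q = max(-qmax, min(qmax, q))   # clip to the symmetric range
    return q * scale
```

The symmetry (no zero-point offset) is what lets the same formula specialize cleanly to the binary case, matching the abstract's claim that binarization is just the extreme case of the unified framework.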
1704.03764
Rodrigo Bruno Mr.
Rodrigo Bruno, Lu\'is Oliveira, Paulo Ferreira
NG2C: Pretenuring N-Generational GC for HotSpot Big Data Applications
Accepted at ISMM'17
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Big Data applications suffer from unpredictable and unacceptably high pause times due to Garbage Collection (GC). This is the case in latency-sensitive applications such as on-line credit-card fraud detection, graph-based computing for analysis on social networks, etc. Such pauses compromise latency requirements of the whole application stack and result from applications' aggressive buffering/caching of data, exposing an ill-suited GC design, which assumes that most objects will die young and does not consider that applications hold large amounts of middle-lived data in memory. To avoid such pauses, we propose NG2C, a new GC algorithm that combines pretenuring with an N-Generational heap. By being able to allocate objects into different generations, NG2C is able to group objects with similar lifetime profiles in the same generation. By allocating objects with similar lifetime profiles close to each other, i.e. in the same generation, we avoid object promotion (copying between generations) and heap fragmentation (which leads to heap compactions) both responsible for most of the duration of HotSpot GC pause times. NG2C is implemented for the OpenJDK 8 HotSpot Java Virtual Machine, as an extension of the Garbage First GC. We evaluate NG2C using Cassandra, Lucene, and GraphChi with three different GCs: Garbage First (G1), Concurrent Mark Sweep (CMS), and NG2C. Results show that NG2C decreases the worst observable GC pause time by up to 94.8% for Cassandra, 85.0% for Lucene and 96.45% for GraphChi, when compared to current collectors (G1 and CMS). In addition, NG2C has no negative impact on application throughput or memory usage.
[ { "created": "Wed, 12 Apr 2017 14:03:32 GMT", "version": "v1" } ]
2017-04-13
[ [ "Bruno", "Rodrigo", "" ], [ "Oliveira", "Luís", "" ], [ "Ferreira", "Paulo", "" ] ]
Big Data applications suffer from unpredictable and unacceptably high pause times due to Garbage Collection (GC). This is the case in latency-sensitive applications such as on-line credit-card fraud detection, graph-based computing for analysis on social networks, etc. Such pauses compromise latency requirements of the whole application stack and result from applications' aggressive buffering/caching of data, exposing an ill-suited GC design, which assumes that most objects will die young and does not consider that applications hold large amounts of middle-lived data in memory. To avoid such pauses, we propose NG2C, a new GC algorithm that combines pretenuring with an N-Generational heap. By being able to allocate objects into different generations, NG2C is able to group objects with similar lifetime profiles in the same generation. By allocating objects with similar lifetime profiles close to each other, i.e. in the same generation, we avoid object promotion (copying between generations) and heap fragmentation (which leads to heap compactions) both responsible for most of the duration of HotSpot GC pause times. NG2C is implemented for the OpenJDK 8 HotSpot Java Virtual Machine, as an extension of the Garbage First GC. We evaluate NG2C using Cassandra, Lucene, and GraphChi with three different GCs: Garbage First (G1), Concurrent Mark Sweep (CMS), and NG2C. Results show that NG2C decreases the worst observable GC pause time by up to 94.8% for Cassandra, 85.0% for Lucene and 96.45% for GraphChi, when compared to current collectors (G1 and CMS). In addition, NG2C has no negative impact on application throughput or memory usage.
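The core idea, pretenuring objects into one of N generations by predicted lifetime so that middle-lived data is never repeatedly promoted, can be caricatured in a few lines. This is a toy sketch, not the HotSpot implementation.

```python
class NGenHeap:
    """Toy N-generational heap with pretenuring: objects with similar
    predicted lifetimes are co-located in the same generation, so
    long-lived data is allocated directly where it belongs instead of
    being copied (promoted) out of the young generation later."""

    def __init__(self, n_gens):
        self.gens = [[] for _ in range(n_gens)]

    def allocate(self, obj, predicted_lifetime_gen):
        # Clamp the prediction to the oldest available generation.
        gen = min(predicted_lifetime_gen, len(self.gens) - 1)
        self.gens[gen].append(obj)
        return gen
```

Avoiding promotion and the resulting fragmentation is precisely what the abstract credits for the reduced pause times.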
1810.12566
Yi-Chen Chen
Yi-Chen Chen, Chia-Hao Shen, Sung-Feng Huang, Hung-yi Lee, Lin-shan Lee
Almost-unsupervised Speech Recognition with Close-to-zero Resource Based on Phonetic Structures Learned from Very Small Unpaired Speech and Text Data
null
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Producing a large amount of annotated speech data for training ASR systems remains difficult for more than 95% of languages all over the world, which are low-resourced. However, we note that human babies start to learn language from the sounds of a small number of exemplar words without hearing a large amount of data. We initiate some preliminary work in this direction in this paper. Audio Word2Vec is used to obtain embeddings of spoken words which carry phonetic information extracted from the signals. An autoencoder is used to generate embeddings of text words based on the articulatory features of the phoneme sequences. Both sets of embeddings for spoken and text words describe similar phonetic structures among words in their respective latent spaces. A mapping relation from the audio embeddings to the text embeddings in effect gives word-level ASR. This mapping can be learned by aligning a small number of spoken words and the corresponding text words in the embedding spaces. In initial experiments, only 200 annotated spoken words and one hour of speech data without annotation gave a word accuracy of 27.5%, which is low but a good starting point.
[ { "created": "Tue, 30 Oct 2018 08:11:45 GMT", "version": "v1" } ]
2018-10-31
[ [ "Chen", "Yi-Chen", "" ], [ "Shen", "Chia-Hao", "" ], [ "Huang", "Sung-Feng", "" ], [ "Lee", "Hung-yi", "" ], [ "Lee", "Lin-shan", "" ] ]
Producing a large amount of annotated speech data for training ASR systems remains difficult for more than 95% of languages all over the world, which are low-resourced. However, we note that human babies start to learn language from the sounds of a small number of exemplar words without hearing a large amount of data. We initiate some preliminary work in this direction in this paper. Audio Word2Vec is used to obtain embeddings of spoken words which carry phonetic information extracted from the signals. An autoencoder is used to generate embeddings of text words based on the articulatory features of the phoneme sequences. Both sets of embeddings for spoken and text words describe similar phonetic structures among words in their respective latent spaces. A mapping relation from the audio embeddings to the text embeddings in effect gives word-level ASR. This mapping can be learned by aligning a small number of spoken words and the corresponding text words in the embedding spaces. In initial experiments, only 200 annotated spoken words and one hour of speech data without annotation gave a word accuracy of 27.5%, which is low but a good starting point.
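The alignment step, learning a map from the audio-embedding space to the text-embedding space from a handful of paired words, can be illustrated in one dimension with a closed-form least-squares fit. Real Audio Word2Vec embeddings are high-dimensional, so this scalar version is only a hypothetical illustration of the idea.

```python
def fit_linear_map(us, vs):
    """Closed-form least-squares map v ~= a*u + b between two 1-D
    embedding spaces, learned from a few aligned (audio, text) pairs:
    a = cov(u, v) / var(u),  b = mean(v) - a * mean(u)."""
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    var = sum((u - mu) ** 2 for u in us)
    cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    a = cov / var
    return a, mv - a * mu
```

In the high-dimensional case the same principle applies with a matrix in place of the scalar `a`, learned from the small set of annotated word pairs.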
2405.06983
Muhammad Umar Farooq Qaisar
Muhammad Umar Farooq Qaisar, Weijie Yuan, Paolo Bellavista, Guangjie Han, and Adeel Ahmed
ISAC-Assisted Wireless Rechargeable Sensor Networks with Multiple Mobile Charging Vehicles
Accepted for publication in the Special Issue Q1'2024, "Integrating Sensing and Communication for Ubiquitous Internet of Things," IEEE Internet of Things Magazine
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
As IoT-based wireless sensor networks (WSNs) become more prevalent, the issue of energy shortages becomes more pressing. One potential solution is the use of wireless power transfer (WPT) technology, which is the key to building a new shape of wireless rechargeable sensor networks (WRSNs). However, efficient charging and scheduling are critical for WRSNs to function properly. Motivated by the fact that probabilistic techniques can help enhance the effectiveness of charging scheduling for WRSNs, this article addresses the aforementioned issue and proposes a novel ISAC-assisted WRSN protocol. In particular, our proposed protocol considers several factors to balance the charging load on each mobile charging vehicle (MCV), uses an efficient charging factor strategy to partially charge network devices, and employs the ISAC concept to reduce the traveling cost of each MCV and prevent charging conflicts. Simulation results demonstrate that this protocol outperforms other classic, cutting-edge protocols in multiple areas.
[ { "created": "Sat, 11 May 2024 10:41:54 GMT", "version": "v1" } ]
2024-05-14
[ [ "Qaisar", "Muhammad Umar Farooq", "" ], [ "Yuan", "Weijie", "" ], [ "Bellavista", "Paolo", "" ], [ "Han", "Guangjie", "" ], [ "Ahmed", "Adeel", "" ] ]
As IoT-based wireless sensor networks (WSNs) become more prevalent, the issue of energy shortages becomes more pressing. One potential solution is the use of wireless power transfer (WPT) technology, which is the key to building a new shape of wireless rechargeable sensor networks (WRSNs). However, efficient charging and scheduling are critical for WRSNs to function properly. Motivated by the fact that probabilistic techniques can help enhance the effectiveness of charging scheduling for WRSNs, this article addresses the aforementioned issue and proposes a novel ISAC-assisted WRSN protocol. In particular, our proposed protocol considers several factors to balance the charging load on each mobile charging vehicle (MCV), uses an efficient charging factor strategy to partially charge network devices, and employs the ISAC concept to reduce the traveling cost of each MCV and prevent charging conflicts. Simulation results demonstrate that this protocol outperforms other classic, cutting-edge protocols in multiple areas.
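One ingredient the protocol balances, distributing charging requests across MCVs so that no single vehicle is overloaded, can be sketched with a greedy assignment. This is a hypothetical simplification; the actual protocol also weighs traveling cost, partial-charging factors, and ISAC-based conflict avoidance.

```python
def assign_requests(requests, n_mcvs):
    """Greedy load balancing: each charging request (node, demand) is
    assigned to the mobile charging vehicle with the least accumulated
    demand so far."""
    loads = [0.0] * n_mcvs
    plan = [[] for _ in range(n_mcvs)]
    for node, demand in requests:
        i = loads.index(min(loads))  # least-loaded MCV (ties -> lowest index)
        loads[i] += demand
        plan[i].append(node)
    return plan, loads
```

A balanced plan keeps every MCV's route feasible, which is one of the "several factors" the abstract says the protocol considers.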
1804.06111
Biao Xiang
Biao Xiang, Ziqi Liu, Jun Zhou, Xiaolong Li
Feature Propagation on Graph: A New Perspective to Graph Representation Learning
null
null
null
null
cs.SI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study feature propagation on graphs, an inference process involved in graph representation learning tasks. It spreads features over the whole graph up to the $t$-th order, thereby expanding each node's features. The process has been successfully adopted in graph embeddings and graph neural networks; however, few works have studied the convergence of feature propagation. Without convergence guarantees, it may lead to unexpected numerical overflows and task failures. In this paper, we first formally define the concept of feature propagation on graphs, and then study the conditions under which it converges to equilibrium states. We further link feature propagation to several established approaches such as node2vec and structure2vec. At the end of this paper, we extend existing approaches from representing nodes to representing edges (edge2vec) and demonstrate its application to fraud transaction detection in a real-world scenario. Experiments show that it is quite competitive.
[ { "created": "Tue, 17 Apr 2018 08:54:19 GMT", "version": "v1" } ]
2018-04-18
[ [ "Xiang", "Biao", "" ], [ "Liu", "Ziqi", "" ], [ "Zhou", "Jun", "" ], [ "Li", "Xiaolong", "" ] ]
We study feature propagation on graphs, an inference process involved in graph representation learning tasks. It spreads features over the whole graph up to the $t$-th order, thereby expanding each node's features. The process has been successfully adopted in graph embeddings and graph neural networks; however, few works have studied the convergence of feature propagation. Without convergence guarantees, it may lead to unexpected numerical overflows and task failures. In this paper, we first formally define the concept of feature propagation on graphs, and then study the conditions under which it converges to equilibrium states. We further link feature propagation to several established approaches such as node2vec and structure2vec. At the end of this paper, we extend existing approaches from representing nodes to representing edges (edge2vec) and demonstrate its application to fraud transaction detection in a real-world scenario. Experiments show that it is quite competitive.
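A minimal sketch of one common propagation iteration and its convergence condition. This is a hypothetical formulation for illustration, not the paper's general definition: with a row-normalized adjacency the iteration map is a contraction whenever alpha < 1, so the iterates approach a fixed point instead of overflowing.

```python
def propagate(adj, feats, alpha=0.5, iters=60):
    """Iterate x <- alpha * P x + x0, where P is the row-normalized
    adjacency and x0 the initial features. Row normalization keeps the
    spectral radius of P at most 1, so alpha < 1 guarantees convergence
    to the fixed point x* = (I - alpha * P)^{-1} x0."""
    n = len(adj)
    deg = [sum(row) or 1 for row in adj]  # guard isolated nodes
    x = list(feats)
    for _ in range(iters):
        x = [alpha * sum(adj[i][j] * x[j] for j in range(n)) / deg[i]
             + feats[i]
             for i in range(n)]
    return x
```

Without the alpha < 1 (or an equivalent spectral) condition the iterates can grow without bound, which is exactly the numerical-overflow failure mode the abstract warns about.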
cs/0506074
Paolo Liberatore
Paolo Liberatore
Redundancy in Logic II: 2CNF and Horn Propositional Formulae
Corrected figures on Theorem 10; added and modified some references
null
10.1016/j.artint.2007.06.003
null
cs.AI cs.LO
null
We report complexity results about redundancy of formulae in 2CNF form. We first consider the problem of checking redundancy and show some algorithms that are slightly better than the trivial one. We then analyze problems related to finding irredundant equivalent subsets (I.E.S.) of a given set. The concept of cyclicity proved to be relevant to the complexity of these problems. Some results about Horn formulae are also shown.
[ { "created": "Fri, 17 Jun 2005 19:28:29 GMT", "version": "v1" }, { "created": "Mon, 20 Jun 2005 15:07:15 GMT", "version": "v2" }, { "created": "Tue, 21 Jun 2005 14:16:22 GMT", "version": "v3" } ]
2021-04-12
[ [ "Liberatore", "Paolo", "" ] ]
We report complexity results about redundancy of formulae in 2CNF form. We first consider the problem of checking redundancy and show some algorithms that are slightly better than the trivial one. We then analyze problems related to finding irredundant equivalent subsets (I.E.S.) of a given set. The concept of cyclicity proved to be relevant to the complexity of these problems. Some results about Horn formulae are also shown.
1901.03264
Alex Dytso
Alex Dytso, Semih Yagli, H. Vincent Poor, Shlomo Shamai (Shitz)
The Capacity Achieving Distribution for the Amplitude Constrained Additive Gaussian Channel: An Upper Bound on the Number of Mass Points
Keywords: Amplitude constraint, power constraint, additive vector Gaussian noise channel, capacity, discrete distributions
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two from the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
[ { "created": "Thu, 10 Jan 2019 16:51:08 GMT", "version": "v1" }, { "created": "Wed, 23 Jan 2019 23:04:10 GMT", "version": "v2" }, { "created": "Tue, 27 Aug 2019 21:52:48 GMT", "version": "v3" }, { "created": "Fri, 15 Nov 2019 02:40:44 GMT", "version": "v4" } ]
2019-11-18
[ [ "Dytso", "Alex", "", "Shitz" ], [ "Yagli", "Semih", "", "Shitz" ], [ "Poor", "H. Vincent", "", "Shitz" ], [ "Shamai", "Shlomo", "", "Shitz" ] ]
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two from the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
2002.04335
Eren Sezener
Eren Sezener and Peter Dayan
Static and Dynamic Values of Computation in MCTS
Presented in UAI 2020
PMLR 124:31-40, 2020
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monte-Carlo Tree Search (MCTS) is one of the most-widely used methods for planning, and has powered many recent advances in artificial intelligence. In MCTS, one typically performs computations (i.e., simulations) to collect statistics about the possible future consequences of actions, and then chooses accordingly. Many popular MCTS methods such as UCT and its variants decide which computations to perform by trading-off exploration and exploitation. In this work, we take a more direct approach, and explicitly quantify the value of a computation based on its expected impact on the quality of the action eventually chosen. Our approach goes beyond the "myopic" limitations of existing computation-value-based methods in two senses: (I) we are able to account for the impact of non-immediate (i.e., future) computations (II) on non-immediate actions. We show that policies that greedily optimize computation values are optimal under certain assumptions and obtain results that are competitive with the state-of-the-art.
[ { "created": "Tue, 11 Feb 2020 12:05:58 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2020 12:28:19 GMT", "version": "v2" } ]
2020-11-20
[ [ "Sezener", "Eren", "" ], [ "Dayan", "Peter", "" ] ]
Monte-Carlo Tree Search (MCTS) is one of the most widely used methods for planning, and has powered many recent advances in artificial intelligence. In MCTS, one typically performs computations (i.e., simulations) to collect statistics about the possible future consequences of actions, and then chooses accordingly. Many popular MCTS methods such as UCT and its variants decide which computations to perform by trading off exploration and exploitation. In this work, we take a more direct approach, and explicitly quantify the value of a computation based on its expected impact on the quality of the action eventually chosen. Our approach goes beyond the "myopic" limitations of existing computation-value-based methods in two senses: (I) we are able to account for the impact of non-immediate (i.e., future) computations (II) on non-immediate actions. We show that policies that greedily optimize computation values are optimal under certain assumptions and obtain results that are competitive with the state-of-the-art.
1905.11392
Suihua Cai
Wenchao Lin, Suihua Cai, Baodian Wei, Xiao Ma
Successive Cancellation List Decoding of Semi-random Unit Memory Convolutional Codes
Submitted to IEEE Transactions on Information Theory. arXiv admin note: substantial text overlap with arXiv:1902.09808
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present in this paper a special class of unit memory convolutional codes (UMCCs), called semi-random UMCCs (SRUMCCs), where the information block is first encoded by a short block code and then transmitted in a block Markov (random) superposition manner. We propose a successive cancellation list decoding algorithm, by which a list of candidate codewords are generated serially until one passes an empirical divergence test instead of the conventional cyclic redundancy check (CRC). The threshold for testing the correctness of candidate codewords can be learned off-line based on the statistical behavior of the introduced empirical divergence function (EDF). The performance-complexity tradeoff and the performance-delay tradeoff can be achieved by adjusting the statistical threshold and the decoding window size. To analyze the performance, a closed-form upper bound and a simulated lower bound are derived. Simulation results verify our analysis and show that: 1) The proposed list decoding algorithm with empirical divergence test outperforms the sequential decoding in the high signal-to-noise ratio (SNR) region; 2) Taking the tail-biting convolutional codes (TBCC) as the basic codes, the proposed list decoding of SRUMCCs has comparable performance with the polar codes under the constraint of equivalent decoding delay.
[ { "created": "Mon, 27 May 2019 10:20:32 GMT", "version": "v1" }, { "created": "Fri, 24 Jul 2020 01:57:20 GMT", "version": "v2" } ]
2020-07-27
[ [ "Lin", "Wenchao", "" ], [ "Cai", "Suihua", "" ], [ "Wei", "Baodian", "" ], [ "Ma", "Xiao", "" ] ]
We present in this paper a special class of unit memory convolutional codes (UMCCs), called semi-random UMCCs (SRUMCCs), where the information block is first encoded by a short block code and then transmitted in a block Markov (random) superposition manner. We propose a successive cancellation list decoding algorithm, by which a list of candidate codewords are generated serially until one passes an empirical divergence test instead of the conventional cyclic redundancy check (CRC). The threshold for testing the correctness of candidate codewords can be learned off-line based on the statistical behavior of the introduced empirical divergence function (EDF). The performance-complexity tradeoff and the performance-delay tradeoff can be achieved by adjusting the statistical threshold and the decoding window size. To analyze the performance, a closed-form upper bound and a simulated lower bound are derived. Simulation results verify our analysis and show that: 1) The proposed list decoding algorithm with empirical divergence test outperforms the sequential decoding in the high signal-to-noise ratio (SNR) region; 2) Taking the tail-biting convolutional codes (TBCC) as the basic codes, the proposed list decoding of SRUMCCs has comparable performance with the polar codes under the constraint of equivalent decoding delay.
2006.06135
Zhi Xu
Devavrat Shah, Dogyoon Song, Zhi Xu, Yuzhe Yang
Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the question of learning $Q$-function in a sample efficient manner for reinforcement learning with continuous state and action spaces under a generative model. If $Q$-function is Lipschitz continuous, then the minimal sample complexity for estimating $\epsilon$-optimal $Q$-function is known to scale as ${\Omega}(\frac{1}{\epsilon^{d_1+d_2 +2}})$ per classical non-parametric learning theory, where $d_1$ and $d_2$ denote the dimensions of the state and action spaces respectively. The $Q$-function, when viewed as a kernel, induces a Hilbert-Schmidt operator and hence possesses square-summable spectrum. This motivates us to consider a parametric class of $Q$-functions parameterized by its "rank" $r$, which contains all Lipschitz $Q$-functions as $r \to \infty$. As our key contribution, we develop a simple, iterative learning algorithm that finds $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold. Thus, this provides an exponential improvement in sample complexity. To enable our result, we develop a novel Matrix Estimation algorithm that faithfully estimates an unknown low-rank matrix in the $\ell_\infty$ sense even in the presence of arbitrary bounded noise, which might be of interest in its own right. Empirical results on several stochastic control tasks confirm the efficacy of our "low-rank" algorithms.
[ { "created": "Thu, 11 Jun 2020 00:55:35 GMT", "version": "v1" } ]
2020-06-12
[ [ "Shah", "Devavrat", "" ], [ "Song", "Dogyoon", "" ], [ "Xu", "Zhi", "" ], [ "Yang", "Yuzhe", "" ] ]
We consider the question of learning $Q$-function in a sample efficient manner for reinforcement learning with continuous state and action spaces under a generative model. If $Q$-function is Lipschitz continuous, then the minimal sample complexity for estimating $\epsilon$-optimal $Q$-function is known to scale as ${\Omega}(\frac{1}{\epsilon^{d_1+d_2 +2}})$ per classical non-parametric learning theory, where $d_1$ and $d_2$ denote the dimensions of the state and action spaces respectively. The $Q$-function, when viewed as a kernel, induces a Hilbert-Schmidt operator and hence possesses square-summable spectrum. This motivates us to consider a parametric class of $Q$-functions parameterized by its "rank" $r$, which contains all Lipschitz $Q$-functions as $r \to \infty$. As our key contribution, we develop a simple, iterative learning algorithm that finds $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold. Thus, this provides an exponential improvement in sample complexity. To enable our result, we develop a novel Matrix Estimation algorithm that faithfully estimates an unknown low-rank matrix in the $\ell_\infty$ sense even in the presence of arbitrary bounded noise, which might be of interest in its own right. Empirical results on several stochastic control tasks confirm the efficacy of our "low-rank" algorithms.
2402.08096
Andrew Bai
Andrew Bai, Chih-Kuan Yeh, Cho-Jui Hsieh, Ankur Taly
Which Pretrain Samples to Rehearse when Finetuning Pretrained Models?
17 pages, 13 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Fine-tuning pretrained foundational models on specific tasks is now the de facto approach for text and vision tasks. A known pitfall of this approach is the forgetting of pretraining knowledge that happens during finetuning. Rehearsing samples randomly from the pretrain dataset is a common approach to alleviate such forgetting. However, we find that random mixing unintentionally includes samples which are not (yet) forgotten or unlearnable by the model. We propose a novel sampling scheme, mix-cd, that identifies and prioritizes samples that actually face forgetting, which we call collateral damage. Since directly identifying collateral damage samples is computationally expensive, we propose a procedure to estimate the distribution of such samples by tracking the statistics of finetuned samples. Our approach is lightweight, easy to implement, and can be seamlessly integrated into existing models, offering an effective means to retain pretrain performance without additional computational costs.
[ { "created": "Mon, 12 Feb 2024 22:32:12 GMT", "version": "v1" } ]
2024-02-14
[ [ "Bai", "Andrew", "" ], [ "Yeh", "Chih-Kuan", "" ], [ "Hsieh", "Cho-Jui", "" ], [ "Taly", "Ankur", "" ] ]
Fine-tuning pretrained foundational models on specific tasks is now the de facto approach for text and vision tasks. A known pitfall of this approach is the forgetting of pretraining knowledge that happens during finetuning. Rehearsing samples randomly from the pretrain dataset is a common approach to alleviate such forgetting. However, we find that random mixing unintentionally includes samples which are not (yet) forgotten or unlearnable by the model. We propose a novel sampling scheme, mix-cd, that identifies and prioritizes samples that actually face forgetting, which we call collateral damage. Since directly identifying collateral damage samples is computationally expensive, we propose a procedure to estimate the distribution of such samples by tracking the statistics of finetuned samples. Our approach is lightweight, easy to implement, and can be seamlessly integrated into existing models, offering an effective means to retain pretrain performance without additional computational costs.
2004.13254
Martin Skrodzki
Martin Skrodzki
How the deprecation of Java applets affected online visualization frameworks -- a case study
null
VisGap - The Gap between Visualization Research and Visualization Software, The Eurographics Association, 2020
10.2312/visgap.20201111
RIKEN-iTHEMS-Report-20
cs.SE cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The JavaView visualization framework was designed at the end of the 1990s as software that provides - among other services - easy, interactive geometry visualizations on web pages. We discuss how this and other design goals were met and present several applications to highlight the contemporary use-cases of the framework. However, as JavaView's easy web export was based on Java Applets, the deprecation of this technology disabled one main functionality of the software. The remainder of the article uses JavaView as an example to highlight the effects of changes in the underlying programming language on a visualization toolkit. We discuss possible reactions of software to such challenges, where the JavaView framework serves as an example to illustrate development decisions. These discussions are guided by the broader, underlying question as to how long it is sensible to maintain a piece of software.
[ { "created": "Tue, 28 Apr 2020 02:51:55 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2020 07:02:09 GMT", "version": "v2" } ]
2020-06-11
[ [ "Skrodzki", "Martin", "" ] ]
The JavaView visualization framework was designed at the end of the 1990s as software that provides - among other services - easy, interactive geometry visualizations on web pages. We discuss how this and other design goals were met and present several applications to highlight the contemporary use-cases of the framework. However, as JavaView's easy web export was based on Java Applets, the deprecation of this technology disabled one main functionality of the software. The remainder of the article uses JavaView as an example to highlight the effects of changes in the underlying programming language on a visualization toolkit. We discuss possible reactions of software to such challenges, where the JavaView framework serves as an example to illustrate development decisions. These discussions are guided by the broader, underlying question as to how long it is sensible to maintain a piece of software.
2405.02454
Michael Burnham
Michael Burnham
What is Sentiment Meant to Mean to Language Models?
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Sentiment analysis is one of the most widely used techniques in text analysis. Recent advancements with Large Language Models have made it more accurate and accessible than ever, allowing researchers to classify text with only a plain English prompt. However, "sentiment" entails a wide variety of concepts depending on the domain and tools used. It has been used to mean emotion, opinions, market movements, or simply a general ``good-bad'' dimension. This raises a question: What exactly are language models doing when prompted to label documents by sentiment? This paper first overviews how sentiment is defined across different contexts, highlighting that it is a confounded measurement construct in that it entails multiple variables, such as emotional valence and opinion, without disentangling them. I then test three language models across two data sets with prompts requesting sentiment, valence, and stance classification. I find that sentiment labels most strongly correlate with valence labels. I further find that classification improves when researchers more precisely specify their dimension of interest rather than using the less well-defined concept of sentiment. I conclude by encouraging researchers to move beyond "sentiment" when feasible and use a more precise measurement construct.
[ { "created": "Fri, 3 May 2024 19:37:37 GMT", "version": "v1" } ]
2024-05-07
[ [ "Burnham", "Michael", "" ] ]
Sentiment analysis is one of the most widely used techniques in text analysis. Recent advancements with Large Language Models have made it more accurate and accessible than ever, allowing researchers to classify text with only a plain English prompt. However, "sentiment" entails a wide variety of concepts depending on the domain and tools used. It has been used to mean emotion, opinions, market movements, or simply a general ``good-bad'' dimension. This raises a question: What exactly are language models doing when prompted to label documents by sentiment? This paper first overviews how sentiment is defined across different contexts, highlighting that it is a confounded measurement construct in that it entails multiple variables, such as emotional valence and opinion, without disentangling them. I then test three language models across two data sets with prompts requesting sentiment, valence, and stance classification. I find that sentiment labels most strongly correlate with valence labels. I further find that classification improves when researchers more precisely specify their dimension of interest rather than using the less well-defined concept of sentiment. I conclude by encouraging researchers to move beyond "sentiment" when feasible and use a more precise measurement construct.
1910.04209
Jerry Ma
Jerry Ma, Denis Yarats
On the adequacy of untuned warmup for adaptive optimization
AAAI 2021
null
null
null
cs.LG cs.NE stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Adaptive optimization algorithms such as Adam are widely used in deep learning. The stability of such algorithms is often improved with a warmup schedule for the learning rate. Motivated by the difficulty of choosing and tuning warmup schedules, recent work proposes automatic variance rectification of Adam's adaptive learning rate, claiming that this rectified approach ("RAdam") surpasses the vanilla Adam algorithm and reduces the need for expensive tuning of Adam with warmup. In this work, we refute this analysis and provide an alternative explanation for the necessity of warmup based on the magnitude of the update term, which is of greater relevance to training stability. We then provide some "rule-of-thumb" warmup schedules, and we demonstrate that simple untuned warmup of Adam performs more-or-less identically to RAdam in typical practical settings. We conclude by suggesting that practitioners stick to linear warmup with Adam, with a sensible default being linear warmup over $2 / (1 - \beta_2)$ training iterations.
[ { "created": "Wed, 9 Oct 2019 19:25:03 GMT", "version": "v1" }, { "created": "Sun, 13 Dec 2020 01:58:24 GMT", "version": "v2" }, { "created": "Sat, 20 Mar 2021 03:43:16 GMT", "version": "v3" } ]
2021-03-23
[ [ "Ma", "Jerry", "" ], [ "Yarats", "Denis", "" ] ]
Adaptive optimization algorithms such as Adam are widely used in deep learning. The stability of such algorithms is often improved with a warmup schedule for the learning rate. Motivated by the difficulty of choosing and tuning warmup schedules, recent work proposes automatic variance rectification of Adam's adaptive learning rate, claiming that this rectified approach ("RAdam") surpasses the vanilla Adam algorithm and reduces the need for expensive tuning of Adam with warmup. In this work, we refute this analysis and provide an alternative explanation for the necessity of warmup based on the magnitude of the update term, which is of greater relevance to training stability. We then provide some "rule-of-thumb" warmup schedules, and we demonstrate that simple untuned warmup of Adam performs more-or-less identically to RAdam in typical practical settings. We conclude by suggesting that practitioners stick to linear warmup with Adam, with a sensible default being linear warmup over $2 / (1 - \beta_2)$ training iterations.
2112.06810
Animesh Trivedi
Peter-Jan Gootzen and Animesh Trivedi
Bento and the Art of Repeated Research
null
null
null
null
cs.OS cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bento provides a new approach to developing file systems, with safety and high-velocity development in mind. This is achieved by using Rust, a modern and memory-safe systems programming language, and by providing a framework to run a single file system implementation in kernel space with the VFS or in user space with FUSE. In this paper, the benchmarking experiments from the Bento paper are repeated. We fail to exactly reproduce the results of the Bento paper, but more or less find the same patterns albeit with more outlying results. Additionally we unsuccessfully run a standardized test suite, and expand the set of experiments with latency benchmarks and throughput benchmarks using a RAM block device. The latency benchmarks show that ext4 with journaling consistently outperforms Bento-fs and the RAM throughput benchmarks show no additional consistent performance pattern. During this experimentation, a set of 12 bugs was encountered and analyzed. We find that the ratio of memory related bugs is lower than other systems programming projects that use C as opposed to Rust, thus supporting the claims of the Bento framework.
[ { "created": "Mon, 13 Dec 2021 17:13:00 GMT", "version": "v1" } ]
2021-12-15
[ [ "Gootzen", "Peter-Jan", "" ], [ "Trivedi", "Animesh", "" ] ]
Bento provides a new approach to developing file systems, with safety and high-velocity development in mind. This is achieved by using Rust, a modern and memory-safe systems programming language, and by providing a framework to run a single file system implementation in kernel space with the VFS or in user space with FUSE. In this paper, the benchmarking experiments from the Bento paper are repeated. We fail to exactly reproduce the results of the Bento paper, but more or less find the same patterns albeit with more outlying results. Additionally we unsuccessfully run a standardized test suite, and expand the set of experiments with latency benchmarks and throughput benchmarks using a RAM block device. The latency benchmarks show that ext4 with journaling consistently outperforms Bento-fs and the RAM throughput benchmarks show no additional consistent performance pattern. During this experimentation, a set of 12 bugs was encountered and analyzed. We find that the ratio of memory related bugs is lower than other systems programming projects that use C as opposed to Rust, thus supporting the claims of the Bento framework.
2005.10187
Leonardo Maccari
Leonardo Maccari, Valeria Cagno
Do we need a Contact Tracing App?
null
null
null
null
cs.CY cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this paper is to shed some light on the usefulness of a contact tracing smartphone app for the containment of the COVID-19 pandemic. We review the basics of contact tracing during the spread of a virus, we contextualize the numbers to the case of COVID-19 and we analyse the state of the art for proximity detection using Bluetooth Low Energy. Our contribution is to assess if there is scientific evidence of the benefit of a contact tracing app in slowing down the spread of the virus using present technologies. Our conclusion is that such evidence is lacking, and we should re-think the introduction of such a privacy-invasive measure.
[ { "created": "Wed, 20 May 2020 16:50:57 GMT", "version": "v1" }, { "created": "Wed, 29 Jul 2020 13:50:57 GMT", "version": "v2" } ]
2020-07-30
[ [ "Maccari", "Leonardo", "" ], [ "Cagno", "Valeria", "" ] ]
The goal of this paper is to shed some light on the usefulness of a contact tracing smartphone app for the containment of the COVID-19 pandemic. We review the basics of contact tracing during the spread of a virus, we contextualize the numbers to the case of COVID-19 and we analyse the state of the art for proximity detection using Bluetooth Low Energy. Our contribution is to assess if there is scientific evidence of the benefit of a contact tracing app in slowing down the spread of the virus using present technologies. Our conclusion is that such evidence is lacking, and we should re-think the introduction of such a privacy-invasive measure.
1611.05172
Charith Perera
Luiz H. Nunes, Julio C. Estrella, Charith Perera, Stephan Reiff-Marganiec, Alexandre N. Delbem
Multi-criteria IoT Resource Discovery: A Comparative Analysis
Software: Practice and Experience
Software: Practice and Experience 2017
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of real-world objects with embedded and globally networked sensors allows the Internet of Things paradigm to be consolidated and increases the number of applications in the domains of ubiquitous and context-aware computing. The merging of Cloud Computing and the Internet of Things, named the Cloud of Things, will be the key to handling thousands of sensors and their data. One of the main challenges in the Cloud of Things is context-aware sensor search and selection. Typically, sensors need to be searched using two or more conflicting context properties. Most of the existing work uses some kind of multi-criteria decision analysis to perform the sensor search and selection, but does not show any concern for the quality of the selection produced by these methods. In this paper, we analyse the behaviour of the SAW, TOPSIS and VIKOR multi-objective decision methods and their quality of selection, comparing them with the Pareto-optimality solutions. The gathered results allow us to analyse and compare these algorithms regarding their behaviour, the number of optimal solutions and redundancy.
[ { "created": "Wed, 16 Nov 2016 07:26:46 GMT", "version": "v1" } ]
2016-11-17
[ [ "Nunes", "Luiz H.", "" ], [ "Estrella", "Julio C.", "" ], [ "Perera", "Charith", "" ], [ "Reiff-Marganiec", "Stephan", "" ], [ "Delbem", "Alexandre N.", "" ] ]
The growth of real-world objects with embedded and globally networked sensors allows the Internet of Things paradigm to be consolidated and increases the number of applications in the domains of ubiquitous and context-aware computing. The merging of Cloud Computing and the Internet of Things, named the Cloud of Things, will be the key to handling thousands of sensors and their data. One of the main challenges in the Cloud of Things is context-aware sensor search and selection. Typically, sensors need to be searched using two or more conflicting context properties. Most of the existing work uses some kind of multi-criteria decision analysis to perform the sensor search and selection, but does not show any concern for the quality of the selection produced by these methods. In this paper, we analyse the behaviour of the SAW, TOPSIS and VIKOR multi-objective decision methods and their quality of selection, comparing them with the Pareto-optimality solutions. The gathered results allow us to analyse and compare these algorithms regarding their behaviour, the number of optimal solutions and redundancy.
cs/0602056
Domenico Camarda Dr.
Dino Borri, Domenico Camarda
Building Scenarios for Environmental Management and Planning: An IT-Based Approach
null
null
null
null
cs.MA
null
Oftentimes, the need to build multidiscipline knowledge bases, oriented to policy scenarios, entails the involvement of stakeholders in manifold domains, with a juxtaposition of different languages whose semantics can hardly allow inter-domain transfers. A useful support for planning is the building up of durable IT-based interactive platforms, where it is possible to modify initial positions toward a semantic convergence. The present paper shows an area-based application of these tools, for the integrated distance-management of different forms of knowledge expressed by selected stakeholders about environmental planning issues, in order to build alternative development scenarios. Keywords: Environmental planning, Scenario building, Multi-source knowledge, IT-based
[ { "created": "Wed, 15 Feb 2006 14:46:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Borri", "Dino", "" ], [ "Camarda", "Domenico", "" ] ]
Oftentimes, the need to build multidiscipline knowledge bases, oriented to policy scenarios, entails the involvement of stakeholders in manifold domains, with a juxtaposition of different languages whose semantics can hardly allow inter-domain transfers. A useful support for planning is the building up of durable IT-based interactive platforms, where it is possible to modify initial positions toward a semantic convergence. The present paper shows an area-based application of these tools, for the integrated distance-management of different forms of knowledge expressed by selected stakeholders about environmental planning issues, in order to build alternative development scenarios. Keywords: Environmental planning, Scenario building, Multi-source knowledge, IT-based
1606.03077
Ilias Diakonikolas
Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart
Efficient Robust Proper Learning of Log-concave Distributions
null
null
null
null
cs.DS cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains). Given a set of samples drawn from an unknown target distribution, we want to compute a log-concave hypothesis distribution that is as close as possible to the target, in total variation distance. In this work, we give the first computationally efficient algorithm for this learning problem. Our algorithm achieves the information-theoretically optimal sample size (up to a constant factor), runs in polynomial time, and is robust to model misspecification with nearly-optimal error guarantees. Specifically, we give an algorithm that, on input $n=O(1/\epsilon^{5/2})$ samples from an unknown distribution $f$, runs in time $\widetilde{O}(n^{8/5})$, and outputs a log-concave hypothesis $h$ that (with high probability) satisfies $d_{\mathrm{TV}}(h, f) = O(\mathrm{OPT})+\epsilon$, where $\mathrm{OPT}$ is the minimum total variation distance between $f$ and the class of log-concave distributions. Our approach to the robust proper learning problem is quite flexible and may be applicable to many other univariate distribution families.
[ { "created": "Thu, 9 Jun 2016 19:32:20 GMT", "version": "v1" } ]
2016-06-10
[ [ "Diakonikolas", "Ilias", "" ], [ "Kane", "Daniel M.", "" ], [ "Stewart", "Alistair", "" ] ]
We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains). Given a set of samples drawn from an unknown target distribution, we want to compute a log-concave hypothesis distribution that is as close as possible to the target, in total variation distance. In this work, we give the first computationally efficient algorithm for this learning problem. Our algorithm achieves the information-theoretically optimal sample size (up to a constant factor), runs in polynomial time, and is robust to model misspecification with nearly-optimal error guarantees. Specifically, we give an algorithm that, on input $n=O(1/\epsilon^{5/2})$ samples from an unknown distribution $f$, runs in time $\widetilde{O}(n^{8/5})$, and outputs a log-concave hypothesis $h$ that (with high probability) satisfies $d_{\mathrm{TV}}(h, f) = O(\mathrm{OPT})+\epsilon$, where $\mathrm{OPT}$ is the minimum total variation distance between $f$ and the class of log-concave distributions. Our approach to the robust proper learning problem is quite flexible and may be applicable to many other univariate distribution families.
1504.05218
Kiril Solovey
Kiril Solovey and Jingjin Yu and Or Zamir and Dan Halperin
Motion Planning for Unlabeled Discs with Optimality Guarantees
null
null
null
null
cs.CG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of path planning for unlabeled (indistinguishable) unit-disc robots in a planar environment cluttered with polygonal obstacles. We introduce an algorithm which minimizes the total path length, i.e., the sum of lengths of the individual paths. Our algorithm is guaranteed to find a solution if one exists, or report that none exists otherwise. It runs in time $\tilde{O}(m^4+m^2n^2)$, where $m$ is the number of robots and $n$ is the total complexity of the workspace. Moreover, the total length of the returned solution is at most $\text{OPT}+4m$, where OPT is the optimal solution cost. To the best of our knowledge this is the first algorithm for the problem that has such guarantees. The algorithm has been implemented in an exact manner and we present experimental results that attest to its efficiency.
[ { "created": "Mon, 20 Apr 2015 20:24:13 GMT", "version": "v1" } ]
2015-04-22
[ [ "Solovey", "Kiril", "" ], [ "Yu", "Jingjin", "" ], [ "Zamir", "Or", "" ], [ "Halperin", "Dan", "" ] ]
We study the problem of path planning for unlabeled (indistinguishable) unit-disc robots in a planar environment cluttered with polygonal obstacles. We introduce an algorithm which minimizes the total path length, i.e., the sum of lengths of the individual paths. Our algorithm is guaranteed to find a solution if one exists, or report that none exists otherwise. It runs in time $\tilde{O}(m^4+m^2n^2)$, where $m$ is the number of robots and $n$ is the total complexity of the workspace. Moreover, the total length of the returned solution is at most $\text{OPT}+4m$, where OPT is the optimal solution cost. To the best of our knowledge this is the first algorithm for the problem that has such guarantees. The algorithm has been implemented in an exact manner and we present experimental results that attest to its efficiency.
2404.19149
Evangelos Ververas
Evangelos Ververas, Rolandos Alexandros Potamias, Jifei Song, Jiankang Deng, Stefanos Zafeiriou
SAGS: Structure-Aware 3D Gaussian Splatting
15 pages, 8 figures, 3 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Following the advent of NeRFs, 3D Gaussian Splatting (3D-GS) has paved the way to real-time neural rendering overcoming the computational burden of volumetric methods. Following the pioneering work of 3D-GS, several methods have attempted to achieve compressible and high-fidelity performance alternatives. However, by employing a geometry-agnostic optimization scheme, these methods neglect the inherent 3D structure of the scene, thereby restricting the expressivity and the quality of the representation, resulting in various floating points and artifacts. In this work, we propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene, which translates to state-of-the-art rendering performance and reduced storage requirements on benchmark novel-view synthesis datasets. SAGS is founded on a local-global graph representation that facilitates the learning of complex scenes and enforces meaningful point displacements that preserve the scene's geometry. Additionally, we introduce a lightweight version of SAGS, using a simple yet effective mid-point interpolation scheme, which showcases a compact representation of the scene with up to 24$\times$ size reduction without the reliance on any compression strategies. Extensive experiments across multiple benchmark datasets demonstrate the superiority of SAGS compared to state-of-the-art 3D-GS methods under both rendering quality and model size. Besides, we demonstrate that our structure-aware method can effectively mitigate floating artifacts and irregular distortions of previous methods while obtaining precise depth maps. Project page https://eververas.github.io/SAGS/.
[ { "created": "Mon, 29 Apr 2024 23:26:30 GMT", "version": "v1" } ]
2024-05-01
[ [ "Ververas", "Evangelos", "" ], [ "Potamias", "Rolandos Alexandros", "" ], [ "Song", "Jifei", "" ], [ "Deng", "Jiankang", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
Following the advent of NeRFs, 3D Gaussian Splatting (3D-GS) has paved the way to real-time neural rendering overcoming the computational burden of volumetric methods. Following the pioneering work of 3D-GS, several methods have attempted to achieve compressible and high-fidelity performance alternatives. However, by employing a geometry-agnostic optimization scheme, these methods neglect the inherent 3D structure of the scene, thereby restricting the expressivity and the quality of the representation, resulting in various floating points and artifacts. In this work, we propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene, which translates to state-of-the-art rendering performance and reduced storage requirements on benchmark novel-view synthesis datasets. SAGS is founded on a local-global graph representation that facilitates the learning of complex scenes and enforces meaningful point displacements that preserve the scene's geometry. Additionally, we introduce a lightweight version of SAGS, using a simple yet effective mid-point interpolation scheme, which showcases a compact representation of the scene with up to 24$\times$ size reduction without the reliance on any compression strategies. Extensive experiments across multiple benchmark datasets demonstrate the superiority of SAGS compared to state-of-the-art 3D-GS methods under both rendering quality and model size. Besides, we demonstrate that our structure-aware method can effectively mitigate floating artifacts and irregular distortions of previous methods while obtaining precise depth maps. Project page https://eververas.github.io/SAGS/.
2312.00575
Khai Loong Aw
Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut
Instruction-tuning Aligns LLMs to the Human Brain
COLM 2024
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instruction-tuning is a widely adopted finetuning method that enables large language models (LLMs) to generate output that more closely resembles human responses. However, no studies have shown that instruction-tuning actually teaches LLMs to process language in a similar manner as humans. We investigate the effect of instruction-tuning on aligning LLM and human language processing mechanisms in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs on three datasets involving humans reading naturalistic stories and sentences, and find that instruction-tuning generally enhances brain alignment (~6%), but has no similar effect on behavioral alignment. To identify factors underlying this improvement in brain alignment, we compute correlations between brain alignment and various LLM properties, such as model size, problem-solving, and world knowledge understanding. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that the mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
[ { "created": "Fri, 1 Dec 2023 13:31:02 GMT", "version": "v1" }, { "created": "Fri, 9 Aug 2024 04:33:58 GMT", "version": "v2" } ]
2024-08-12
[ [ "Aw", "Khai Loong", "" ], [ "Montariol", "Syrielle", "" ], [ "AlKhamissi", "Badr", "" ], [ "Schrimpf", "Martin", "" ], [ "Bosselut", "Antoine", "" ] ]
Instruction-tuning is a widely adopted finetuning method that enables large language models (LLMs) to generate output that more closely resembles human responses. However, no studies have shown that instruction-tuning actually teaches LLMs to process language in a similar manner as humans. We investigate the effect of instruction-tuning on aligning LLM and human language processing mechanisms in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs on three datasets involving humans reading naturalistic stories and sentences, and find that instruction-tuning generally enhances brain alignment (~6%), but has no similar effect on behavioral alignment. To identify factors underlying this improvement in brain alignment, we compute correlations between brain alignment and various LLM properties, such as model size, problem-solving, and world knowledge understanding. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that the mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
2210.13212
Weilong Guan
Weilong Guan, Kaihan Yang, Yinsheng Chen, Zhong Guan
A Dimension-Augmented Physics-Informed Neural Network (DaPINN) with High Level Accuracy and Efficiency
33 pages, 12 figures
null
10.1016/j.jcp.2023.112360
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Physics-informed neural networks (PINNs) have been widely applied in different fields due to their effectiveness in solving partial differential equations (PDEs). However, the accuracy and efficiency of PINNs need to be considerably improved for scientific and commercial use. To address this issue, we systematically propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of the PINN. In the DaPINN model, we introduce inductive bias in the neural network to enhance network generalizability by adding a special regularization term to the loss function. Furthermore, we manipulate the network input dimension by inserting additional sample features and incorporating the expanded dimensionality in the loss function. Moreover, we verify the effectiveness of power series augmentation, Fourier series augmentation and replica augmentation, in both forward and backward problems. In most experiments, the error of DaPINN is 1$\sim$2 orders of magnitude lower than that of PINN. The results show that the DaPINN outperforms the original PINN in terms of both accuracy and efficiency with a reduced dependence on the number of sample points. We also discuss the complexity of the DaPINN and its compatibility with other methods.
[ { "created": "Wed, 19 Oct 2022 15:54:37 GMT", "version": "v1" } ]
2023-08-16
[ [ "Guan", "Weilong", "" ], [ "Yang", "Kaihan", "" ], [ "Chen", "Yinsheng", "" ], [ "Guan", "Zhong", "" ] ]
Physics-informed neural networks (PINNs) have been widely applied in different fields due to their effectiveness in solving partial differential equations (PDEs). However, the accuracy and efficiency of PINNs need to be considerably improved for scientific and commercial use. To address this issue, we systematically propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of the PINN. In the DaPINN model, we introduce inductive bias in the neural network to enhance network generalizability by adding a special regularization term to the loss function. Furthermore, we manipulate the network input dimension by inserting additional sample features and incorporating the expanded dimensionality in the loss function. Moreover, we verify the effectiveness of power series augmentation, Fourier series augmentation and replica augmentation, in both forward and backward problems. In most experiments, the error of DaPINN is 1$\sim$2 orders of magnitude lower than that of PINN. The results show that the DaPINN outperforms the original PINN in terms of both accuracy and efficiency with a reduced dependence on the number of sample points. We also discuss the complexity of the DaPINN and its compatibility with other methods.
1203.6096
Yehuda Afek
Yehuda Afek and Eli Gafni
Asynchrony from Synchrony
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider synchronous dynamic networks which, like radio networks, may have asymmetric communication links, and are affected by communication rather than processor failures. In this paper we investigate the minimal message survivability on a per-round basis that allows for the minimal global cooperation, i.e., allows to solve any task that is wait-free read-write solvable. The paper completely characterizes this survivability requirement. Message survivability is formalized by considering adversaries that have a limited power to remove messages in a round. Removal of a message on a link in one direction does not necessarily imply the removal of the message on that link in the other direction. Surprisingly, there exists a single strongest adversary which solves any wait-free read/write task. Any different adversary that solves any wait-free read/write task is weaker, and any stronger adversary will not solve any wait-free read/write task. ABD \cite{ABD}, who considered processor failures, arrived at an adversary that is $n/2$-resilient and consequently can solve tasks, such as $n/2$-set-consensus, which are not read/write wait-free solvable. With message adversaries, we arrive at an adversary which has exactly the read-write wait-free power. Furthermore, this adversary allows for a considerably simpler (the simplest that we know of) proof that the protocol complex of any read/write wait-free task is a subdivided simplex, finally making this proof accessible to students with no algebraic-topology prerequisites, and alternatively dispensing with the assumption that the Immediate Snapshot complex is a subdivided simplex.
[ { "created": "Tue, 27 Mar 2012 22:26:02 GMT", "version": "v1" } ]
2012-03-29
[ [ "Afek", "Yehuda", "" ], [ "Gafni", "Eli", "" ] ]
We consider synchronous dynamic networks which, like radio networks, may have asymmetric communication links, and are affected by communication rather than processor failures. In this paper we investigate the minimal message survivability on a per-round basis that allows for the minimal global cooperation, i.e., allows to solve any task that is wait-free read-write solvable. The paper completely characterizes this survivability requirement. Message survivability is formalized by considering adversaries that have a limited power to remove messages in a round. Removal of a message on a link in one direction does not necessarily imply the removal of the message on that link in the other direction. Surprisingly, there exists a single strongest adversary which solves any wait-free read/write task. Any different adversary that solves any wait-free read/write task is weaker, and any stronger adversary will not solve any wait-free read/write task. ABD \cite{ABD}, who considered processor failures, arrived at an adversary that is $n/2$-resilient and consequently can solve tasks, such as $n/2$-set-consensus, which are not read/write wait-free solvable. With message adversaries, we arrive at an adversary which has exactly the read-write wait-free power. Furthermore, this adversary allows for a considerably simpler (the simplest that we know of) proof that the protocol complex of any read/write wait-free task is a subdivided simplex, finally making this proof accessible to students with no algebraic-topology prerequisites, and alternatively dispensing with the assumption that the Immediate Snapshot complex is a subdivided simplex.
cs/0412023
Praveen Boinee
P. Boinee, F. Barbarino, A. De Angelis
Multidimensional data classification with artificial neural networks
8 pages, 4 figures, Submitted to EURASIP Journal on Applied Signal Processing, 2004
null
null
null
cs.NE cs.AI
null
Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust in multi-dimensional data classification. In this article we study the classification of gammas from hadrons for the MAGIC experiment. Two neural networks have been used for the classification task: one is a Multi-Layer Perceptron based on supervised learning, and the other is a Self-Organising Map (SOM), which is based on an unsupervised learning technique. The results are shown, and possible ways of combining these networks are proposed to yield better and faster classification results.
[ { "created": "Mon, 6 Dec 2004 20:23:15 GMT", "version": "v1" } ]
2007-05-23
[ [ "Boinee", "P.", "" ], [ "Barbarino", "F.", "" ], [ "De Angelis", "A.", "" ] ]
Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust in multi-dimensional data classification. In this article we study the classification of gammas from hadrons for the MAGIC experiment. Two neural networks have been used for the classification task: one is a Multi-Layer Perceptron based on supervised learning, and the other is a Self-Organising Map (SOM), which is based on an unsupervised learning technique. The results are shown, and possible ways of combining these networks are proposed to yield better and faster classification results.
2112.07179
Mahdi Fahmideh
Mahdi Fahmideh, Anuradha Gunawardana, Shiping Chen, Jun Shen, Brian Yecies
Blockchain Developments and Innovations
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Blockchain has received expanding interest from various domains. Institutions, enterprises, governments, and agencies are interested in Blockchain's potential to augment their software systems. The unique requirements and characteristics of Blockchain platforms raise new challenges involving extensive enhancement to conventional software development processes to meet the needs of these domains. Software engineering approaches supporting Blockchain-oriented developments have been slow to materialize, despite proposals in the literature, and they have yet to be objectively analyzed. A critical appraisal of these innovations is crucial to identify their respective strengths and weaknesses. We present an analytical evaluation of several prominent Blockchain-oriented methods through a comprehensive, criteria-based evaluation framework. The results can be used for comparing, adapting, and developing a new generation of Blockchain-oriented software development processes and innovations.
[ { "created": "Tue, 14 Dec 2021 05:51:58 GMT", "version": "v1" } ]
2021-12-15
[ [ "Fahmideh", "Mahdi", "" ], [ "Gunawardana", "Anuradha", "" ], [ "Chen", "Shiping", "" ], [ "Shen", "Jun", "" ], [ "Yecies", "Brian", "" ] ]
Blockchain has received expanding interest from various domains. Institutions, enterprises, governments, and agencies are interested in Blockchain's potential to augment their software systems. The unique requirements and characteristics of Blockchain platforms raise new challenges involving extensive enhancement to conventional software development processes to meet the needs of these domains. Software engineering approaches supporting Blockchain-oriented developments have been slow to materialize, despite proposals in the literature, and they have yet to be objectively analyzed. A critical appraisal of these innovations is crucial to identify their respective strengths and weaknesses. We present an analytical evaluation of several prominent Blockchain-oriented methods through a comprehensive, criteria-based evaluation framework. The results can be used for comparing, adapting, and developing a new generation of Blockchain-oriented software development processes and innovations.
2102.10685
Esha Shandilya
Esha Shandilya, Yiwen Wang, Xuan Zhao and Mingming Fan
EvoK: Connecting loved ones through Heart Rate sharing
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
In this work, we present EvoK, a new way of sharing one's heart rate with feedback from their close contacts to alleviate social isolation and loneliness. EvoK consists of a pair of wearable prototype devices (i.e., sender and receiver). The sender is designed as a headband enabling continuous sensing of heart rate with aesthetic designs to maximize social acceptance. The receiver is designed as a wristwatch enabling unobtrusive receiving of the loved one's continuous heart rate with multi-modal notification systems.
[ { "created": "Sun, 21 Feb 2021 21:04:16 GMT", "version": "v1" } ]
2021-02-23
[ [ "Shandilya", "Esha", "" ], [ "Wang", "Yiwen", "" ], [ "Zhao", "Xuan", "" ], [ "Fan", "Mingming", "" ] ]
In this work, we present EvoK, a new way of sharing one's heart rate with feedback from their close contacts to alleviate social isolation and loneliness. EvoK consists of a pair of wearable prototype devices (i.e., sender and receiver). The sender is designed as a headband enabling continuous sensing of heart rate with aesthetic designs to maximize social acceptance. The receiver is designed as a wristwatch enabling unobtrusive receiving of the loved one's continuous heart rate with multi-modal notification systems.
2101.05469
Jason Wei
Jason Wei, Chengyu Huang, Shiqi Xu, Soroush Vosoughi
Text Augmentation in a Multi-Task View
Accepted to EACL 2021
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Traditional data augmentation aims to increase the coverage of the input distribution by generating augmented examples that strongly resemble original samples in an online fashion where augmented examples dominate training. In this paper, we propose an alternative perspective -- a multi-task view (MTV) of data augmentation -- in which the primary task trains on original examples and the auxiliary task trains on augmented examples. In MTV data augmentation, both original and augmented samples are weighted substantively during training, relaxing the constraint that augmented examples must resemble original data and thereby allowing us to apply stronger levels of augmentation. In empirical experiments using four common data augmentation techniques on three benchmark text classification datasets, we find that the MTV leads to higher and more robust performance improvements than traditional augmentation.
[ { "created": "Thu, 14 Jan 2021 05:59:23 GMT", "version": "v1" } ]
2021-01-15
[ [ "Wei", "Jason", "" ], [ "Huang", "Chengyu", "" ], [ "Xu", "Shiqi", "" ], [ "Vosoughi", "Soroush", "" ] ]
Traditional data augmentation aims to increase the coverage of the input distribution by generating augmented examples that strongly resemble original samples in an online fashion where augmented examples dominate training. In this paper, we propose an alternative perspective -- a multi-task view (MTV) of data augmentation -- in which the primary task trains on original examples and the auxiliary task trains on augmented examples. In MTV data augmentation, both original and augmented samples are weighted substantively during training, relaxing the constraint that augmented examples must resemble original data and thereby allowing us to apply stronger levels of augmentation. In empirical experiments using four common data augmentation techniques on three benchmark text classification datasets, we find that the MTV leads to higher and more robust performance improvements than traditional augmentation.
2008.03868
Xiaoming Chen
Jianhang Chu, Xiaoming Chen, Caijun Zhong, Zhaoyang Zhang
Robust Design for NOMA-based Multi-Beam LEO Satellite Internet of Things
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the issue of massive access in a beyond fifth-generation (B5G) multi-beam low earth orbit (LEO) satellite internet of things (IoT) network in the presence of channel phase uncertainty due to channel state information (CSI) conveyance from the devices to the satellite via the gateway. Rather than time division multiple access (TDMA) or frequency division multiple access (FDMA) with a multi-color pattern, a new non-orthogonal multiple access (NOMA) scheme is adopted to support massive IoT distributed over a very wide range. Considering the limited energy on the LEO satellite, two robust beamforming algorithms against channel phase uncertainty are proposed for minimizing the total power consumption in the scenarios of noncritical IoT applications and critical IoT applications, respectively. Both theoretical analysis and simulation results validate the effectiveness and robustness of the proposed algorithms for supporting massive access in satellite IoT.
[ { "created": "Mon, 10 Aug 2020 02:41:45 GMT", "version": "v1" } ]
2020-08-11
[ [ "Chu", "Jianhang", "" ], [ "Chen", "Xiaoming", "" ], [ "Zhong", "Caijun", "" ], [ "Zhang", "Zhaoyang", "" ] ]
In this paper, we investigate the issue of massive access in a beyond fifth-generation (B5G) multi-beam low earth orbit (LEO) satellite internet of things (IoT) network in the presence of channel phase uncertainty due to channel state information (CSI) conveyance from the devices to the satellite via the gateway. Rather than time division multiple access (TDMA) or frequency division multiple access (FDMA) with a multi-color pattern, a new non-orthogonal multiple access (NOMA) scheme is adopted to support massive IoT distributed over a very wide range. Considering the limited energy on the LEO satellite, two robust beamforming algorithms against channel phase uncertainty are proposed for minimizing the total power consumption in the scenarios of noncritical IoT applications and critical IoT applications, respectively. Both theoretical analysis and simulation results validate the effectiveness and robustness of the proposed algorithms for supporting massive access in satellite IoT.
2007.06122
Ya Xiao
Ya Xiao, Yang Zhao, Nicholas Allen, Nathan Keynes, Danfeng (Daphne) Yao, Cristina Cifuentes
Industrial Experience of Finding Cryptographic Vulnerabilities in Large-scale Codebases
8 pages, 5 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enterprise environments often screen large-scale (millions of lines of code) codebases with static analysis tools to find bugs and vulnerabilities. Parfait is a static code analysis tool used in Oracle to find security vulnerabilities in industrial codebases. Recently, many studies have shown that there are complicated cryptographic vulnerabilities caused by misusing cryptographic APIs in Java. In this paper, we describe how we realize a precise and scalable detection of these complicated cryptographic vulnerabilities based on the Parfait framework. The key challenge in the detection of cryptographic vulnerabilities is the high false alarm rate caused by pseudo-influences. Pseudo-influences happen if security-irrelevant constants are used in constructing security-critical values. Static analysis is usually unable to distinguish them from hard-coded constants that expose sensitive information. We tackle this problem by specializing the backward dataflow analysis used in Parfait with refinement insights, an idea from the tool CryptoGuard. We evaluate our analyzer on a comprehensive Java cryptographic vulnerability benchmark and eleven large real-world applications. The results show that the Parfait-based cryptographic vulnerability detector can find real-world cryptographic vulnerabilities in large-scale codebases with high true-positive rates and low runtime cost.
[ { "created": "Sun, 12 Jul 2020 22:52:40 GMT", "version": "v1" }, { "created": "Sat, 1 Jan 2022 22:06:04 GMT", "version": "v2" } ]
2022-01-04
[ [ "Xiao", "Ya", "" ], [ "Zhao", "Yang", "" ], [ "Allen", "Nicholas", "" ], [ "Keynes", "Nathan", "" ], [ "Yao", "Danfeng", "" ], [ "Cifuentes", "Cristina", "" ] ]
Enterprise environments often screen large-scale (millions of lines of code) codebases with static analysis tools to find bugs and vulnerabilities. Parfait is a static code analysis tool used in Oracle to find security vulnerabilities in industrial codebases. Recently, many studies have shown that there are complicated cryptographic vulnerabilities caused by misusing cryptographic APIs in Java. In this paper, we describe how we realize a precise and scalable detection of these complicated cryptographic vulnerabilities based on the Parfait framework. The key challenge in the detection of cryptographic vulnerabilities is the high false alarm rate caused by pseudo-influences. Pseudo-influences happen if security-irrelevant constants are used in constructing security-critical values. Static analysis is usually unable to distinguish them from hard-coded constants that expose sensitive information. We tackle this problem by specializing the backward dataflow analysis used in Parfait with refinement insights, an idea from the tool CryptoGuard. We evaluate our analyzer on a comprehensive Java cryptographic vulnerability benchmark and eleven large real-world applications. The results show that the Parfait-based cryptographic vulnerability detector can find real-world cryptographic vulnerabilities in large-scale codebases with high true-positive rates and low runtime cost.
1909.12483
Zhijie Yang
Xiting Zhao, Zhijie Yang and S\"oren Schwertfeger
Mapping with Reflection -- Detection and Utilization of Reflection in 3D Lidar Scans
Accepted at SSRR 2020
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a method to detect reflections in 3D light detection and ranging (Lidar) scans and uses them to classify the points and also map objects outside the line of sight. Our software uses several approaches to analyze the point cloud, including intensity peak detection, dual return detection, plane fitting, and finding the boundaries. These approaches can classify the point cloud and detect the reflections in it. By mirroring the reflection points on the detected window pane and adding classification labels to the points, we can improve the map quality in a Simultaneous Localization and Mapping (SLAM) framework. Experiments using real scan data and ground truth data showcase the effectiveness of our method.
[ { "created": "Fri, 27 Sep 2019 03:43:44 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2020 05:42:47 GMT", "version": "v2" } ]
2020-10-28
[ [ "Zhao", "Xiting", "" ], [ "Yang", "Zhijie", "" ], [ "Schwertfeger", "Sören", "" ] ]
This paper presents a method to detect reflections in 3D light detection and ranging (Lidar) scans and uses them to classify the points and also map objects outside the line of sight. Our software uses several approaches to analyze the point cloud, including intensity peak detection, dual return detection, plane fitting, and finding the boundaries. These approaches can classify the point cloud and detect the reflections in it. By mirroring the reflection points on the detected window pane and adding classification labels to the points, we can improve the map quality in a Simultaneous Localization and Mapping (SLAM) framework. Experiments using real scan data and ground truth data showcase the effectiveness of our method.
1207.5497
Yongge Wang
Yongge Wang
Password Protected Smart Card and Memory Stick Authentication Against Off-line Dictionary Attacks
null
SEC 2012, IFIP AICT 376, pp. 489-500, 2012
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the security requirements for remote authentication with a password protected smart card. In recent years, several protocols for password-based authenticated key exchange have been proposed. These protocols are used for the protection of password based authentication between a client and a remote server. In this paper, we will focus on the password based authentication between a smart card owner and the smart card via an untrusted card reader. In a typical scenario, a smart card owner inserts the smart card into an untrusted card reader and inputs the password via the card reader in order for the smart card to carry out the process of authentication with a remote server. In this case, we want to guarantee that the card reader will not be able to impersonate the card owner in the future without the smart card itself. Furthermore, the smart card could be stolen. If this happens, we want the assurance that an adversary could not use the smart card to impersonate the card owner even though the sample space of passwords may be small enough to be enumerated by an off-line adversary. At the end of this paper, we further extend our results to credential storage on portable non-tamper-resistant storage devices such as USB memory sticks.
[ { "created": "Mon, 23 Jul 2012 19:49:57 GMT", "version": "v1" } ]
2012-07-24
[ [ "Wang", "Yongge", "" ] ]
In this paper, we study the security requirements for remote authentication with a password protected smart card. In recent years, several protocols for password-based authenticated key exchange have been proposed. These protocols are used for the protection of password based authentication between a client and a remote server. In this paper, we will focus on the password based authentication between a smart card owner and the smart card via an untrusted card reader. In a typical scenario, a smart card owner inserts the smart card into an untrusted card reader and inputs the password via the card reader in order for the smart card to carry out the process of authentication with a remote server. In this case, we want to guarantee that the card reader will not be able to impersonate the card owner in the future without the smart card itself. Furthermore, the smart card could be stolen. If this happens, we want the assurance that an adversary could not use the smart card to impersonate the card owner even though the sample space of passwords may be small enough to be enumerated by an off-line adversary. At the end of this paper, we further extend our results to credential storage on portable non-tamper-resistant storage devices such as USB memory sticks.
2304.11835
Yonggan Fu
Yonggan Fu, Yuecheng Li, Chenghui Li, Jason Saragih, Peizhao Zhang, Xiaoliang Dai, Yingyan Lin
Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence
Accepted by CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time and robust photorealistic avatars have been highly desired for enabling immersive telepresence in AR/VR. However, there still exists one key bottleneck: the considerable computational expense needed to accurately infer facial expressions captured from headset-mounted cameras with a quality level that can match the realism of the avatar's human appearance. To this end, we propose a framework called Auto-CARD, which for the first time enables real-time and robust driving of Codec Avatars when exclusively using merely on-device computing resources. This is achieved by minimizing two sources of redundancy. First, we develop a dedicated neural architecture search technique called AVE-NAS for avatar encoding in AR/VR, which explicitly boosts both the searched architectures' robustness in the presence of extreme facial expressions and hardware friendliness on fast evolving AR/VR headsets. Second, we leverage the temporal redundancy in consecutively captured images during continuous rendering and develop a mechanism dubbed LATEX to skip the computation of redundant frames. Specifically, we first identify an opportunity from the linearity of the latent space derived by the avatar decoder and then propose to perform adaptive latent extrapolation for redundant frames. For evaluation, we demonstrate the efficacy of our Auto-CARD framework in real-time Codec Avatar driving settings, where we achieve a 5.05x speed-up on Meta Quest 2 while maintaining a comparable or even better animation quality than state-of-the-art avatar encoder designs.
[ { "created": "Mon, 24 Apr 2023 05:45:12 GMT", "version": "v1" } ]
2023-04-25
[ [ "Fu", "Yonggan", "" ], [ "Li", "Yuecheng", "" ], [ "Li", "Chenghui", "" ], [ "Saragih", "Jason", "" ], [ "Zhang", "Peizhao", "" ], [ "Dai", "Xiaoliang", "" ], [ "Lin", "Yingyan", "" ] ]
Real-time and robust photorealistic avatars have been highly desired for enabling immersive telepresence in AR/VR. However, there still exists one key bottleneck: the considerable computational expense needed to accurately infer facial expressions captured from headset-mounted cameras with a quality level that can match the realism of the avatar's human appearance. To this end, we propose a framework called Auto-CARD, which for the first time enables real-time and robust driving of Codec Avatars when exclusively using merely on-device computing resources. This is achieved by minimizing two sources of redundancy. First, we develop a dedicated neural architecture search technique called AVE-NAS for avatar encoding in AR/VR, which explicitly boosts both the searched architectures' robustness in the presence of extreme facial expressions and hardware friendliness on fast evolving AR/VR headsets. Second, we leverage the temporal redundancy in consecutively captured images during continuous rendering and develop a mechanism dubbed LATEX to skip the computation of redundant frames. Specifically, we first identify an opportunity from the linearity of the latent space derived by the avatar decoder and then propose to perform adaptive latent extrapolation for redundant frames. For evaluation, we demonstrate the efficacy of our Auto-CARD framework in real-time Codec Avatar driving settings, where we achieve a 5.05x speed-up on Meta Quest 2 while maintaining a comparable or even better animation quality than state-of-the-art avatar encoder designs.
2308.08343
Jordan Awan
Jordan Awan and Aishwarya Ramasethu
Optimizing Noise for $f$-Differential Privacy via Anti-Concentration and Stochastic Dominance
17 pages before appendix, 25 pages total, 6 figures
null
null
null
cs.CR math.PR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we establish anti-concentration inequalities for additive noise mechanisms which achieve $f$-differential privacy ($f$-DP), a notion of privacy phrased in terms of a tradeoff function (a.k.a. ROC curve) $f$ which limits the ability of an adversary to determine which individuals were in the database. We show that canonical noise distributions (CNDs), proposed by Awan and Vadhan (2023), match the anti-concentration bounds at half-integer values, indicating that their tail behavior is near-optimal. We also show that all CNDs are sub-exponential, regardless of the $f$-DP guarantee. In the case of log-concave CNDs, we show that they are the stochastically smallest noise compared to any other noise distributions with the same privacy guarantee. In terms of integer-valued noise, we propose a new notion of discrete CND and prove that a discrete CND always exists, can be constructed by rounding a continuous CND, and that the discrete CND is unique when designed for a statistic with sensitivity 1. We further show that the discrete CND at sensitivity 1 is stochastically smallest compared to other integer-valued noises. Our theoretical results shed light on the different types of privacy guarantees possible in the $f$-DP framework and can be incorporated in more complex mechanisms to optimize performance.
[ { "created": "Wed, 16 Aug 2023 13:09:27 GMT", "version": "v1" } ]
2023-08-17
[ [ "Awan", "Jordan", "" ], [ "Ramasethu", "Aishwarya", "" ] ]
In this paper, we establish anti-concentration inequalities for additive noise mechanisms which achieve $f$-differential privacy ($f$-DP), a notion of privacy phrased in terms of a tradeoff function (a.k.a. ROC curve) $f$ which limits the ability of an adversary to determine which individuals were in the database. We show that canonical noise distributions (CNDs), proposed by Awan and Vadhan (2023), match the anti-concentration bounds at half-integer values, indicating that their tail behavior is near-optimal. We also show that all CNDs are sub-exponential, regardless of the $f$-DP guarantee. In the case of log-concave CNDs, we show that they are the stochastically smallest noise compared to any other noise distributions with the same privacy guarantee. In terms of integer-valued noise, we propose a new notion of discrete CND and prove that a discrete CND always exists, can be constructed by rounding a continuous CND, and that the discrete CND is unique when designed for a statistic with sensitivity 1. We further show that the discrete CND at sensitivity 1 is stochastically smallest compared to other integer-valued noises. Our theoretical results shed light on the different types of privacy guarantees possible in the $f$-DP framework and can be incorporated in more complex mechanisms to optimize performance.
1802.03594
\'Alvaro Peris
\'Alvaro Peris and Francisco Casacuberta
Online Learning for Effort Reduction in Interactive Neural Machine Translation
Accepted in Computer Speech & Language
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neural machine translation systems require large amounts of training data and resources. Even with this, the quality of the translations may be insufficient for some users or domains. In such cases, the output of the system must be revised by a human agent. This can be done in a post-editing stage or following an interactive machine translation protocol. We explore the incremental update of neural machine translation systems during the post-editing or interactive translation processes. Such modifications aim to incorporate the new knowledge, from the edited sentences, into the translation system. Updates to the model are performed on-the-fly, as sentences are corrected, via online learning techniques. In addition, we implement a novel interactive, adaptive system, able to react to single-character interactions. This system greatly reduces the human effort required for obtaining high-quality translations. In order to stress our proposals, we conduct exhaustive experiments varying the amount and type of data available for training. Results show that online learning effectively achieves the objective of reducing the human effort required during the post-editing or the interactive machine translation stages. Moreover, these adaptive systems also perform well in scenarios with scarce resources. We show that a neural machine translation system can be rapidly adapted to a specific domain, exclusively by means of online learning techniques.
[ { "created": "Sat, 10 Feb 2018 14:07:58 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2019 08:18:33 GMT", "version": "v2" } ]
2019-04-09
[ [ "Peris", "Álvaro", "" ], [ "Casacuberta", "Francisco", "" ] ]
Neural machine translation systems require large amounts of training data and resources. Even with this, the quality of the translations may be insufficient for some users or domains. In such cases, the output of the system must be revised by a human agent. This can be done in a post-editing stage or following an interactive machine translation protocol. We explore the incremental update of neural machine translation systems during the post-editing or interactive translation processes. Such modifications aim to incorporate the new knowledge, from the edited sentences, into the translation system. Updates to the model are performed on-the-fly, as sentences are corrected, via online learning techniques. In addition, we implement a novel interactive, adaptive system, able to react to single-character interactions. This system greatly reduces the human effort required for obtaining high-quality translations. In order to stress our proposals, we conduct exhaustive experiments varying the amount and type of data available for training. Results show that online learning effectively achieves the objective of reducing the human effort required during the post-editing or the interactive machine translation stages. Moreover, these adaptive systems also perform well in scenarios with scarce resources. We show that a neural machine translation system can be rapidly adapted to a specific domain, exclusively by means of online learning techniques.
1701.06790
Rachid Echahed
Rachid Echahed and Aude Maignan
Parallel Graph Rewriting with Overlapping Rules
26 pages
null
null
null
cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle the problem of simultaneous transformations of networks represented as graphs. Roughly speaking, one may distinguish two kinds of simultaneous or parallel rewrite relations over complex structures such as graphs: (i) those which transform disjoint subgraphs in parallel and hence can be simulated by a succession of merely sequential and local transformations, and (ii) those which transform overlapping subgraphs simultaneously. In the latter situation, parallel transformations cannot in general be simulated by means of successive local rewrite steps. We investigate this last problem in the framework of overlapping graph transformation systems. As the parallel transformation of a graph does not in general produce a graph, we first propose some sufficient conditions that ensure the closure of graphs under parallel rewrite relations. Then we introduce and discuss two parallel rewrite relations over graphs. One relation is functional and thus deterministic; the other is not functional, and for it we propose sufficient conditions that ensure its confluence.
[ { "created": "Tue, 24 Jan 2017 10:02:55 GMT", "version": "v1" } ]
2017-01-25
[ [ "Echahed", "Rachid", "" ], [ "Maignan", "Aude", "" ] ]
We tackle the problem of simultaneous transformations of networks represented as graphs. Roughly speaking, one may distinguish two kinds of simultaneous or parallel rewrite relations over complex structures such as graphs: (i) those which transform disjoint subgraphs in parallel and hence can be simulated by a succession of merely sequential and local transformations, and (ii) those which transform overlapping subgraphs simultaneously. In the latter situation, parallel transformations cannot in general be simulated by means of successive local rewrite steps. We investigate this last problem in the framework of overlapping graph transformation systems. As the parallel transformation of a graph does not in general produce a graph, we first propose some sufficient conditions that ensure the closure of graphs under parallel rewrite relations. Then we introduce and discuss two parallel rewrite relations over graphs. One relation is functional and thus deterministic; the other is not functional, and for it we propose sufficient conditions that ensure its confluence.
2001.07049
Lukas Holzbaur
Lukas Holzbaur, Camilla Hollanti, Antonia Wachter-Zeh
Computational Code-Based Single-Server Private Information Retrieval
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new computational private information retrieval (PIR) scheme based on random linear codes is presented. A matrix of messages from a McEliece scheme is used to query the server with carefully chosen errors. The server responds with the sum of scalar multiples of the rows of the query matrix and the files. The user recovers the desired file by erasure decoding the response. Contrary to code-based cryptographic systems, the scheme presented here enables the use of truly random codes, not only codes disguised as such. Further, we show the relation to the so-called error subspace search problem and quotient error search problem, which we assume to be difficult, and show that the scheme is secure against attacks based on solving these problems.
[ { "created": "Mon, 20 Jan 2020 10:31:56 GMT", "version": "v1" }, { "created": "Thu, 14 May 2020 15:58:04 GMT", "version": "v2" } ]
2020-05-15
[ [ "Holzbaur", "Lukas", "" ], [ "Hollanti", "Camilla", "" ], [ "Wachter-Zeh", "Antonia", "" ] ]
A new computational private information retrieval (PIR) scheme based on random linear codes is presented. A matrix of messages from a McEliece scheme is used to query the server with carefully chosen errors. The server responds with the sum of scalar multiples of the rows of the query matrix and the files. The user recovers the desired file by erasure decoding the response. Contrary to code-based cryptographic systems, the scheme presented here enables the use of truly random codes, not only codes disguised as such. Further, we show the relation to the so-called error subspace search problem and quotient error search problem, which we assume to be difficult, and show that the scheme is secure against attacks based on solving these problems.
2006.10916
Seyed Esmaeili
Seyed A. Esmaeili, Brian Brubach, Leonidas Tsepenekas, John P. Dickerson
Probabilistic Fair Clustering
null
null
null
null
cs.LG cs.AI cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and the features of a valid clustering might also include the representation of colors in that clustering. Prior work in fair clustering assumes complete knowledge of group membership. In this paper, we generalize prior work by assuming imperfect knowledge of group membership through probabilistic assignments. We present clustering algorithms in this more general setting with approximation ratio guarantees. We also address the problem of "metric membership", where different groups have a notion of order and distance. Experiments are conducted using our proposed algorithms as well as baselines to validate our approach and also surface nuanced concerns when group membership is not known deterministically.
[ { "created": "Fri, 19 Jun 2020 01:34:21 GMT", "version": "v1" }, { "created": "Thu, 4 Nov 2021 10:12:10 GMT", "version": "v2" }, { "created": "Fri, 2 Jun 2023 20:04:45 GMT", "version": "v3" } ]
2023-06-06
[ [ "Esmaeili", "Seyed A.", "" ], [ "Brubach", "Brian", "" ], [ "Tsepenekas", "Leonidas", "" ], [ "Dickerson", "John P.", "" ] ]
In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and the features of a valid clustering might also include the representation of colors in that clustering. Prior work in fair clustering assumes complete knowledge of group membership. In this paper, we generalize prior work by assuming imperfect knowledge of group membership through probabilistic assignments. We present clustering algorithms in this more general setting with approximation ratio guarantees. We also address the problem of "metric membership", where different groups have a notion of order and distance. Experiments are conducted using our proposed algorithms as well as baselines to validate our approach and also surface nuanced concerns when group membership is not known deterministically.